DGNConv

class dgl.nn.pytorch.conv.DGNConv(in_size, out_size, aggregators, scalers, delta, dropout=0.0, num_towers=1, edge_feat_size=0, residual=True)[source]

Bases: dgl.nn.pytorch.conv.pnaconv.PNAConv
Directional Graph Network layer from the paper Directional Graph Networks.

DGN introduces two special directional aggregators based on the vector field \(F\), which is defined as the gradient of the low-frequency eigenvectors of the graph Laplacian.
The directional average aggregator is defined as \(h_i' = \sum_{j\in\mathcal{N}(i)}\frac{|F_{i,j}|\cdot h_j}{\lVert F_{i,:}\rVert_1+\epsilon}\)

The directional derivative aggregator is defined as \(h_i' = \sum_{j\in\mathcal{N}(i)}\frac{F_{i,j}\cdot h_j}{\lVert F_{i,:}\rVert_1+\epsilon} - h_i\cdot\sum_{j\in\mathcal{N}(i)}\frac{F_{i,j}}{\lVert F_{i,:}\rVert_1+\epsilon}\)
\(\epsilon\) is a small constant added to keep the computation numerically stable.
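To make the two formulas concrete, here is a minimal sketch that evaluates both directional aggregators by hand for a single node. The tensors F_i, h_nbr, and h_i are made-up toy values for illustration; this is not DGL's internal implementation.

import torch

# Toy data for one node i with 3 neighbours. Illustration only.
F_i = torch.tensor([0.5, -0.2, 0.1])   # vector-field entries F_{i,j}, j in N(i)
h_nbr = torch.randn(3, 4)              # neighbour features h_j
h_i = torch.randn(4)                   # centre node feature h_i
eps = 1e-8                             # the epsilon from the formulas

w = F_i / (F_i.abs().sum() + eps)      # F_{i,j} / (||F_{i,:}||_1 + eps)
# Directional average: neighbours weighted by |F_{i,j}| / ||F_{i,:}||_1.
h_av = (w.abs().unsqueeze(-1) * h_nbr).sum(dim=0)
# Directional derivative: signed weighted sum minus h_i times the weight sum.
h_dx = (w.unsqueeze(-1) * h_nbr).sum(dim=0) - h_i * w.sum()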
 Parameters
in_size (int) – Input feature size; i.e., the size of \(h_i^l\).

out_size (int) – Output feature size; i.e., the size of \(h_i^{l+1}\).
aggregators (list of str) – List of aggregation function names (each aggregator specifies a way to aggregate messages from neighbours), selected from:

mean : the mean of neighbour messages

max : the maximum of neighbour messages

min : the minimum of neighbour messages

std : the standard deviation of neighbour messages

var : the variance of neighbour messages

sum : the sum of neighbour messages

moment3, moment4, moment5 : the normalized moments aggregation \((E[(X - E[X])^n])^{1/n}\) (a sketch follows this parameter list)

dir{k}-av : directional average aggregation with directions defined by the k-th smallest eigenvectors. k can be selected from 1, 2, 3.

dir{k}-dx : directional derivative aggregation with directions defined by the k-th smallest eigenvectors. k can be selected from 1, 2, 3.

Note that using directional aggregation requires the LaplacianPE transform on the input graph for eigenvector computation (the PE size must be >= k above).
scalers (list of str) – List of scaler function names, selected from:

identity : no scaling

amplification : multiply the aggregated message by \(\log(d+1)/\delta\), where \(d\) is the in-degree of the node

attenuation : multiply the aggregated message by \(\delta/\log(d+1)\)

delta (float) – The in-degree-related normalization factor computed over the training set, used by the scalers for normalization: \(\delta = E[\log(d+1)]\), where \(d\) is the in-degree of each node in the training set (a sketch follows this parameter list).
dropout (float, optional) – The dropout ratio. Default: 0.0.

num_towers (int, optional) – The number of towers used. Default: 1. Note that in_size and out_size must be divisible by num_towers.

edge_feat_size (int, optional) – The edge feature size. Default: 0.

residual (bool, optional) – Whether to add a residual connection to the output. Default: True. If in_size and out_size of the DGN conv layer are not the same, this flag is forced to False.
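As noted in the aggregators list above, here is a minimal sketch of the normalized moments aggregation \((E[(X - E[X])^n])^{1/n}\) for one node's incoming messages. Taking a signed n-th root so that odd moments stay real is an assumption of this illustration, not a statement about DGL internals.

import torch

def normalized_moment(msgs, n):
    # msgs: (num_neighbours, feat) messages for one node.
    # Computes (E[(X - E[X])^n])^(1/n) over the neighbour axis.
    centered = msgs - msgs.mean(dim=0, keepdim=True)
    m = (centered ** n).mean(dim=0)
    # Signed root keeps odd moments real when the centred moment is negative.
    return m.sign() * m.abs().pow(1.0 / n)

msgs = torch.randn(5, 8)           # messages from 5 neighbours
m3 = normalized_moment(msgs, 3)    # 'moment3'-style aggregation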
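Likewise, a sketch of how delta can be estimated from a training set, following the definition \(\delta = E[\log(d+1)]\) given above; train_graphs is a hypothetical stand-in for your own training graphs.

import dgl
import torch as th

# Hypothetical training graphs; replace with your own dataset.
train_graphs = [dgl.rand_graph(20, 60), dgl.rand_graph(30, 90)]
# delta = E[log(d + 1)] over the in-degrees d of all training nodes.
log_degs = th.cat([th.log(g.in_degrees().float() + 1) for g in train_graphs])
delta = log_degs.mean().item()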
Example
>>> import dgl
>>> import torch as th
>>> from dgl.nn import DGNConv
>>> from dgl import LaplacianPE
>>>
>>> # DGN requires precomputed eigenvectors, with 'eig' as the feature name.
>>> transform = LaplacianPE(k=3, feat_name='eig')
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = transform(g)
>>> eig = g.ndata['eig']
>>> feat = th.ones(6, 10)
>>> conv = DGNConv(10, 10, ['dir1-av', 'dir1-dx', 'sum'], ['identity', 'amplification'], 2.5)
>>> ret = conv(g, feat, eig_vec=eig)

forward(graph, node_feat, edge_feat=None, eig_vec=None)[source]

Compute DGN layer.
 Parameters
graph (DGLGraph) – The graph.

node_feat (torch.Tensor) – The input feature of shape \((N, h_n)\). \(N\) is the number of nodes, and \(h_n\) must be the same as in_size.

edge_feat (torch.Tensor, optional) – The edge feature of shape \((M, h_e)\). \(M\) is the number of edges, and \(h_e\) must be the same as edge_feat_size.
eig_vec (torch.Tensor, optional) – The K smallest non-trivial eigenvectors of the graph Laplacian, of shape \((N, K)\). It is only required when aggregators contains directional aggregators.
 Returns
The output node feature of shape \((N, h_n')\) where \(h_n'\) should be the same as out_size.
 Return type
torch.Tensor
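For completeness, a sketched variant of the Example above that also passes edge features to forward; the edge feature size 4 and the tensor efeat are arbitrary illustration values, reusing g, feat, and eig from the Example.

>>> conv = DGNConv(10, 10, ['dir1-av', 'sum'], ['identity'], 2.5, edge_feat_size=4)
>>> efeat = th.ones(g.num_edges(), 4)
>>> ret = conv(g, feat, edge_feat=efeat, eig_vec=eig)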