DGNConv

class dgl.nn.pytorch.conv.DGNConv(in_size, out_size, aggregators, scalers, delta, dropout=0.0, num_towers=1, edge_feat_size=0, residual=True)

Bases: dgl.nn.pytorch.conv.pnaconv.PNAConv

Directional Graph Network Layer from Directional Graph Networks

DGN introduces two special directional aggregators based on the vector field \(F\), which is defined as the gradient of the low-frequency eigenvectors of the graph Laplacian.

The directional average aggregator is defined as \(h_i' = \sum_{j\in\mathcal{N}(i)}\frac{|F_{i,j}|\cdot h_j}{||F_{i,:}||_1+\epsilon}\)

The directional derivative aggregator is defined as \(h_i' = \sum_{j\in\mathcal{N}(i)}\frac{F_{i,j}\cdot h_j}{||F_{i,:}||_1+\epsilon} -h_i\cdot\sum_{j\in\mathcal{N}(i)}\frac{F_{i,j}}{||F_{i,:}||_1+\epsilon}\)

\(\epsilon\) is a small positive constant that keeps the computation numerically stable.
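
To make the two formulas concrete, the following is a minimal sketch in plain PyTorch (not DGL's internal implementation) of both aggregators for a single node i; the names F_i, h_nbr, and h_i are illustrative:

>>> import torch as th
>>>
>>> # F_i: field values F_{i,j} for the neighbours of i, shape (deg,).
>>> # h_nbr: neighbour features h_j, shape (deg, feat). h_i: shape (feat,).
>>> def directional_aggregate(F_i, h_nbr, h_i, eps=1e-8):
...     norm = F_i.abs().sum() + eps  # ||F_{i,:}||_1 + eps
...     dir_av = (F_i.abs().unsqueeze(-1) * h_nbr).sum(0) / norm
...     dir_dx = (F_i.unsqueeze(-1) * h_nbr).sum(0) / norm - h_i * (F_i.sum() / norm)
...     return dir_av, dir_dx
>>> av, dx = directional_aggregate(th.randn(4), th.randn(4, 10), th.randn(10))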

Parameters
  • in_size (int) – Input feature size; i.e. the size of \(h_i^l\).

  • out_size (int) – Output feature size; i.e. the size of \(h_i^{l+1}\).

  • aggregators (list of str) –

    List of aggregation function names (each aggregator specifies a way to aggregate messages from neighbours), selected from:

    • mean: the mean of neighbour messages

    • max: the maximum of neighbour messages

    • min: the minimum of neighbour messages

    • std: the standard deviation of neighbour messages

    • var: the variance of neighbour messages

    • sum: the sum of neighbour messages

    • moment3, moment4, moment5: the normalized moment aggregation \((E[(X-E[X])^n])^{1/n}\), with \(n = 3, 4, 5\) respectively

    • dir{k}-av: directional average aggregation with directions defined by the k-th smallest eigenvector. k can be selected from 1, 2, 3.

    • dir{k}-dx: directional derivative aggregation with directions defined by the k-th smallest eigenvector. k can be selected from 1, 2, 3.

    Note that the directional aggregators require applying the LaplacianPE transform to the input graph to compute the eigenvectors (the PE size must be at least the largest k used above).

  • scalers (list of str) –

    List of scaler function names, selected from:

    • identity: no scaling

    • amplification: multiply the aggregated message by \(\log(d+1)/\delta\), where \(d\) is the in-degree of the node

    • attenuation: multiply the aggregated message by \(\delta/\log(d+1)\)

  • delta (float) – The in-degree-related normalization factor used by the scalers, computed over the training set as \(\delta = E[\log(d+1)]\), where \(d\) is the in-degree of each node in the training set. See the sketch after this parameter list.

  • dropout (float, optional) – The dropout ratio. Default: 0.0.

  • num_towers (int, optional) – The number of towers used. Default: 1. Note that in_size and out_size must be divisible by num_towers.

  • edge_feat_size (int, optional) – The edge feature size. Default: 0.

  • residual (bool, optional) – Whether to add a residual connection to the output. Default: True. If in_size and out_size of the DGN conv layer differ, this flag is forced to False.
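
A minimal sketch of precomputing delta, assuming a hypothetical list train_graphs of training DGLGraphs (the name is illustrative, not part of the API):

>>> import torch as th
>>>
>>> # delta = E[log(d+1)] over the in-degrees of all training-set nodes;
>>> # the amplification scaler then multiplies messages by log(d+1)/delta.
>>> log_degs = th.cat([th.log(g.in_degrees().float() + 1) for g in train_graphs])
>>> delta = log_degs.mean().item()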

Example

>>> import dgl
>>> import torch as th
>>> from dgl.nn import DGNConv
>>> from dgl import LaplacianPE
>>>
>>> # DGN requires precomputed eigenvectors, with 'eig' as feature name.
>>> transform = LaplacianPE(k=3, feat_name='eig')
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = transform(g)
>>> eig = g.ndata['eig']
>>> feat = th.ones(6, 10)
>>> conv = DGNConv(10, 10, ['dir1-av', 'dir1-dx', 'sum'], ['identity', 'amplification'], 2.5)
>>> ret = conv(g, feat, eig_vec=eig)

forward(graph, node_feat, edge_feat=None, eig_vec=None)

Compute one DGN layer.

Parameters
  • graph (DGLGraph) – The graph.

  • node_feat (torch.Tensor) – The input feature of shape \((N, h_n)\). \(N\) is the number of nodes, and \(h_n\) must be the same as in_size.

  • edge_feat (torch.Tensor, optional) – The edge feature of shape \((M, h_e)\). \(M\) is the number of edges, and \(h_e\) must be the same as edge_feat_size.

  • eig_vec (torch.Tensor, optional) – The K eigenvectors corresponding to the K smallest non-trivial eigenvalues of the graph Laplacian, of shape \((N, K)\). Required only when aggregators contains directional aggregators.

Returns

The output node feature of shape \((N, h_n')\), where \(h_n'\) is the same as out_size.

Return type

torch.Tensor
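
As an illustrative sketch (not from the original example), forward also accepts edge features when edge_feat_size > 0; this reuses g, feat, and eig from the example above, with arbitrary aggregator and scaler choices:

>>> conv = DGNConv(10, 10, ['dir1-av', 'sum'], ['identity'], 2.5, edge_feat_size=4)
>>> efeat = th.ones(g.num_edges(), 4)
>>> out = conv(g, feat, edge_feat=efeat, eig_vec=eig)
>>> out.shape
torch.Size([6, 10])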