TAGConv

class dgl.nn.mxnet.conv.TAGConv(in_feats, out_feats, k=2, bias=True, activation=None)[source]

Bases: mxnet.gluon.block.Block

Topology Adaptive Graph Convolutional layer from the paper Topology Adaptive Graph Convolutional Networks (https://arxiv.org/abs/1710.10370).

\[H^{K} = \sum_{k=0}^{K} (D^{-1/2} A D^{-1/2})^{k} X \Theta_{k},\]

where \(A\) denotes the adjacency matrix, \(D_{ii} = \sum_{j} A_{ij}\) its diagonal degree matrix, and \(\Theta_{k}\) denotes the learnable linear weights that combine the results of the different hops.
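For intuition, the sum above can be evaluated with dense matrices. The following is a minimal NumPy sketch with randomly initialized placeholder weights (the Thetas list is hypothetical); it illustrates the formula only, not the layer's actual sparse message-passing implementation:

>>> import numpy as np
>>> A = np.array([[0., 1.], [1., 0.]])                   # adjacency of a 2-node graph
>>> D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))   # D^{-1/2}
>>> A_hat = D_inv_sqrt @ A @ D_inv_sqrt                  # normalized adjacency
>>> X = np.random.rand(2, 4)                             # node features, D_in = 4
>>> K = 2
>>> Thetas = [np.random.rand(4, 3) for _ in range(K + 1)]  # one weight matrix per hop
>>> H_K = sum(np.linalg.matrix_power(A_hat, k) @ X @ Thetas[k] for k in range(K + 1))
>>> H_K.shape
(2, 3)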

Parameters
  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(X\).

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(H^{K}\).

  • k (int, optional) – Number of hops \(K\). Default: 2.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None. See the construction sketch after this list.
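The optional arguments can be combined at construction time; a minimal sketch (assuming mx.nd.relu as the activation callable; any callable mapping an NDArray to an NDArray works):

>>> import mxnet as mx
>>> from dgl.nn.mxnet.conv import TAGConv
>>>
>>> # 3-hop propagation without bias, ReLU applied to the updated features
>>> conv = TAGConv(in_feats=10, out_feats=2, k=3, bias=False, activation=mx.nd.relu)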

lin

The learnable linear module.

Type

mxnet.gluon.nn.Dense

Example

>>> import dgl
>>> import numpy as np
>>> import mxnet as mx
>>> from mxnet import gluon
>>> from dgl.nn import TAGConv
>>>
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
>>> feat = mx.nd.ones((6, 10))
>>> conv = TAGConv(10, 2, k=2)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> res
[[-0.86147034  0.10089529]
 [-0.86147034  0.10089529]
 [-0.86147034  0.10089529]
 [-0.9707841   0.0360311 ]
 [-0.6716844   0.02247889]
 [ 0.32964635 -0.7669234 ]]
<NDArray 6x2 @cpu(0)>

forward(graph, feat)[source]

Compute topology adaptive graph convolution.

Parameters
  • graph (DGLGraph) – The graph.

  • feat (mxnet.NDArray) – The input feature of shape \((N, D_{in})\), where \(N\) is the number of nodes and \(D_{in}\) is the size of the input feature.

Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.

Return type

mxnet.NDArray
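
A minimal shape check mirroring the example above (the graph and feature sizes here are illustrative):

>>> import dgl
>>> import mxnet as mx
>>> from dgl.nn import TAGConv
>>>
>>> g = dgl.graph(([0, 1], [1, 0]))  # N = 2 nodes
>>> feat = mx.nd.ones((2, 5))        # D_in = 5
>>> conv = TAGConv(5, 3, k=1)        # D_out = 3
>>> conv.initialize(ctx=mx.cpu(0))
>>> conv(g, feat).shape              # (N, D_out)
(2, 3)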