ChebConv

class dgl.nn.mxnet.conv.ChebConv(in_feats, out_feats, k, bias=True)[source]

Bases: mxnet.gluon.block.Block

Chebyshev Spectral Graph Convolution layer from Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

\[
\begin{aligned}
h_i^{l+1} &= \sum_{k=0}^{K-1} W^{k, l} z_i^{k, l}\\
Z^{0, l} &= H^{l}\\
Z^{1, l} &= \tilde{L} \cdot H^{l}\\
Z^{k, l} &= 2 \cdot \tilde{L} \cdot Z^{k-1, l} - Z^{k-2, l}\\
\tilde{L} &= 2\left(I - \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}\right)/\lambda_{max} - I
\end{aligned}
\]

where \(\tilde{A} = A + I\), \(\tilde{D}\) is the degree matrix of \(\tilde{A}\), and \(W^{k, l}\) is a learnable weight matrix.
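As an illustration of the recurrence above, here is a minimal hand-rolled sketch with NumPy on a toy two-node graph (this is not the layer's implementation; the weights \(W^{k, l}\) are omitted and \(\lambda_{max}\) is fixed at its default of 2):

>>> import numpy as np
>>> A = np.array([[0., 1.], [1., 0.]])                        # adjacency of a 2-node graph
>>> A_tilde = A + np.eye(2)                                   # \tilde{A} = A + I
>>> D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))  # \tilde{D}^{-1/2}
>>> L_tilde = 2 * (np.eye(2) - D_inv_sqrt @ A_tilde @ D_inv_sqrt) / 2.0 - np.eye(2)
>>> H = np.ones((2, 3))                                       # node features H^l
>>> Z = [H, L_tilde @ H]                                      # Z^{0,l} and Z^{1,l}
>>> Z.append(2 * L_tilde @ Z[1] - Z[0])                       # Z^{2,l} from the recurrence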

Parameters
  • in_feats (int) – Dimension of input features; i.e., the number of dimensions of \(h_i^{(l)}\).

  • out_feats (int) – Dimension of output features \(h_i^{(l+1)}\).

  • k (int) – Chebyshev filter size \(K\).

  • activation (function, optional) – Activation function. Default: ReLU.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

Example

>>> import dgl
>>> import numpy as np
>>> import mxnet as mx
>>> from dgl.nn import ChebConv
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = mx.nd.ones((6, 10))
>>> conv = ChebConv(10, 2, 2)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> res
[[ 0.832592   -0.738757  ]
 [ 0.832592   -0.738757  ]
 [ 0.832592   -0.738757  ]
 [ 0.43377423 -1.0455742 ]
 [ 1.1145986  -0.5218046 ]
 [ 1.7954229   0.00196505]]
<NDArray 6x2 @cpu(0)>

forward(graph, feat, lambda_max=None)[source]

Compute the ChebNet layer.

Parameters
  • graph (DGLGraph) – The graph.

  • feat (mxnet.NDArray) – The input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes.

  • lambda_max (list or tensor or None, optional) –

    A list or tensor of length \(B\), storing the largest eigenvalue of the normalized Laplacian of each individual graph in graph, where \(B\) is the batch size of the input graph. Default: None.

    If None, this method sets the value to 2. One can use dgl.laplacian_lambda_max() to compute this value exactly (see the example below).

Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.

Return type

mxnet.NDArray
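
A minimal sketch of passing lambda_max explicitly, reusing g, feat, and conv from the example above; dgl.laplacian_lambda_max() is the helper mentioned above and returns one value per graph in the batch (whether it is the best choice for a given graph is an assumption here, not a requirement of the API):

>>> lambda_max = dgl.laplacian_lambda_max(g)    # largest eigenvalue per graph in the batch
>>> res = conv(g, feat, lambda_max=lambda_max)  # forward pass with an explicit lambda_max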