class dgl.nn.mxnet.conv.DenseGraphConv(in_feats, out_feats, norm='both', bias=True, activation=None)

Bases: mxnet.gluon.block.Block

Graph Convolutional layer from Semi-Supervised Classification with Graph Convolutional Networks

We recommend using this module when applying graph convolution on dense graphs.

Parameters

  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • norm (str, optional) – How to apply the normalizer. If 'right', divide the aggregated messages by each node's in-degree, which is equivalent to averaging the received messages. If 'none', no normalization is applied. Default is 'both', where the \(c_{ij}\) from the paper is applied.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.


Zero in-degree nodes will lead to all-zero output. A common practice to avoid this is to add a self-loop for each node in the graph, which can be achieved by setting the diagonal of the adjacency matrix to be 1.
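The computation and the self-loop trick above can be sketched in NumPy. This is an illustrative re-implementation, not the DGL source: the function name and the use of row sums as degrees are assumptions, and with norm='both' it applies the symmetric normalization \(c_{ij} = 1/\sqrt{d_i d_j}\) from the paper.

```python
import numpy as np

# Hypothetical sketch of what a dense graph convolution computes:
#   H' = norm(A) @ H @ W (+ b)
def dense_graph_conv(adj, feat, weight, bias=None, norm='both'):
    adj = adj.astype(np.float64)
    deg = adj.sum(axis=1)  # row sums; rows index destination nodes
    if norm == 'both':
        # symmetric normalization: c_ij = 1 / sqrt(d_i * d_j)
        inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
        adj = inv_sqrt[:, None] * adj * inv_sqrt[None, :]
    elif norm == 'right':
        # average the received messages by dividing by in-degree
        adj = np.where(deg > 0, 1.0 / deg, 0.0)[:, None] * adj
    # norm == 'none': use adj as-is
    out = adj @ feat @ weight
    if bias is not None:
        out = out + bias
    return out

# Avoid all-zero output for zero in-degree nodes by adding
# self-loops, i.e. setting the diagonal of the adjacency to 1:
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])  # node 2 has no in-edges
np.fill_diagonal(adj, 1.0)
feat = np.ones((3, 4))
weight = np.ones((4, 2))
out = dense_graph_conv(adj, feat, weight)
print(out.shape)  # (3, 2)
```

Without the `fill_diagonal` call, the last row of `out` would be all zeros, since node 2 receives no messages.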

forward(adj, feat)

Compute (Dense) Graph Convolution layer.

Parameters

  • adj (mxnet.NDArray) – The adjacency matrix of the graph to apply Graph Convolution on. When applied to a unidirectional bipartite graph, adj should be of shape \((N_{out}, N_{in})\); when applied to a homogeneous graph, adj should be of shape \((N, N)\). In both cases, a row represents a destination node and a column represents a source node.

  • feat (mxnet.NDArray) – The input feature.


Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.
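A quick shape check for the bipartite convention described above, again as a plain NumPy sketch (the variable names are illustrative): `adj` has shape \((N_{out}, N_{in})\), so rows index destination nodes and columns index source nodes, and the output has one row per destination node.

```python
import numpy as np

N_out, N_in, D_in, D_out = 2, 3, 4, 5
adj = np.ones((N_out, N_in))       # every destination receives from every source
feat = np.random.rand(N_in, D_in)  # input features live on the source nodes
weight = np.random.rand(D_in, D_out)

out = adj @ feat @ weight          # aggregate over sources, then project
print(out.shape)                   # (2, 5): one row per destination node
```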

Return type