# DenseGraphConv

class dgl.nn.pytorch.conv.DenseGraphConv(in_feats, out_feats, norm='both', bias=True, activation=None)[source]

Bases: Module

Graph Convolutional layer from *Semi-Supervised Classification with Graph Convolutional Networks*.

We recommend using this module when applying graph convolution on dense graphs.

Parameters:
• in_feats (int) β Input feature size; i.e, the number of dimensions of $$h_j^{(l)}$$.

• out_feats (int) β Output feature size; i.e., the number of dimensions of $$h_i^{(l+1)}$$.

• norm (str, optional) β How to apply the normalizer. If is βrightβ, divide the aggregated messages by each nodeβs in-degrees, which is equivalent to averaging the received messages. If is βnoneβ, no normalization is applied. Default is βbothβ, where the $$c_{ij}$$ in the paper is applied.

• bias (bool, optional) β If True, adds a learnable bias to the output. Default: True.

• activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.

Notes

Zero in-degree nodes will lead to all-zero output. A common practice to avoid this is to add a self-loop for each node in the graph, which can be achieved by setting the diagonal of the adjacency matrix to be 1.
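The self-loop trick from the note above can be sketched as follows. This is a minimal illustration with a made-up adjacency matrix; `fill_diagonal_` sets every diagonal entry to 1 in place:

```python
import torch as th

# Toy adjacency matrix; row 2 is all zeros, so node 2 has zero
# in-degree and a DenseGraphConv layer would output all zeros for it.
adj = th.tensor([[0., 1., 0.],
                 [1., 0., 1.],
                 [0., 0., 0.]])

# Add a self-loop to every node by setting the diagonal to 1.
adj.fill_diagonal_(1.)
```

After this, every node has in-degree at least 1, so no row of the output is forced to zero.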

Example

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import DenseGraphConv
>>>
>>> feat = th.ones(6, 10)
>>> adj = th.tensor([[0., 0., 1., 0., 0., 0.],
...         [1., 0., 0., 0., 0., 0.],
...         [0., 1., 0., 0., 0., 0.],
...         [0., 0., 1., 0., 0., 1.],
...         [0., 0., 0., 1., 0., 0.],
...         [0., 0., 0., 0., 0., 0.]])
>>> conv = DenseGraphConv(10, 2)
>>> res = conv(adj, feat)
>>> res
tensor([[0.2159, 1.9027],
        [0.3053, 2.6908],
        [0.3053, 2.6908],
        [0.3685, 3.2481],
        [0.3053, 2.6908],
        [0.0000, 0.0000]])


forward(adj, feat)[source]

Compute (Dense) Graph Convolution layer.

Parameters:
• adj (torch.Tensor) β The adjacency matrix of the graph to apply Graph Convolution on, when applied to a unidirectional bipartite graph, adj should be of shape should be of shape $$(N_{out}, N_{in})$$; when applied to a homo graph, adj should be of shape $$(N, N)$$. In both cases, a row represents a destination node while a column represents a source node.

• feat (torch.Tensor) β The input feature.

Returns:

The output feature of shape $$(N, D_{out})$$, where $$D_{out}$$ is the size of the output feature.

Return type:

torch.Tensor
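As a framework-free illustration of the `norm='right'` behaviour described above, the arithmetic can be sketched directly with matrix operations. This is a sketch of the aggregation and averaging only, not DGL's implementation (the learned weight, bias, and activation are omitted):

```python
import torch as th

# Rows are destination nodes, columns are source nodes,
# matching the adj convention documented above.
adj = th.tensor([[0., 1., 1.],
                 [1., 0., 0.],
                 [0., 0., 0.]])
feat = th.tensor([[1., 2.],
                  [3., 4.],
                  [5., 6.]])

agg = adj @ feat                    # sum of source-node features per destination
deg = adj.sum(dim=1, keepdim=True)  # in-degree of each destination node
avg = agg / deg.clamp(min=1)        # mean message; guard zero in-degree rows
```

Node 0 averages the features of nodes 1 and 2, node 1 copies node 0's features, and node 2, having zero in-degree, gets an all-zero row, consistent with the note above.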

reset_parameters()[source]

Reinitialize learnable parameters.