GINConv

class dgl.nn.pytorch.conv.GINConv(apply_func=None, aggregator_type='sum', init_eps=0, learn_eps=False, activation=None)

Bases: torch.nn.modules.module.Module
Graph Isomorphism Network layer from How Powerful are Graph Neural Networks?
\[h_i^{(l+1)} = f_\Theta \left((1 + \epsilon) h_i^{(l)} + \mathrm{aggregate}\left(\left\{h_j^{(l)}, j\in\mathcal{N}(i) \right\}\right)\right)\]

If a weight tensor on each edge is provided, the weighted graph convolution is defined as:

\[h_i^{(l+1)} = f_\Theta \left((1 + \epsilon) h_i^{(l)} + \mathrm{aggregate}\left(\left\{e_{ji} h_j^{(l)}, j\in\mathcal{N}(i) \right\}\right)\right)\]

where \(e_{ji}\) is the weight on the edge from node \(j\) to node \(i\). Please make sure that \(e_{ji}\) is broadcastable with \(h_j^{(l)}\).
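To make the update rule concrete, below is a minimal sketch of the unweighted, sum-aggregation case written with DGL's message-passing primitives. It is for illustration only, not GINConv's actual implementation; the helper name gin_update is not part of DGL.

import dgl.function as fn

def gin_update(g, h, eps=0.0, apply_func=None):
    """Illustrative GIN update: f_Theta((1 + eps) * h_i + sum_{j in N(i)} h_j)."""
    with g.local_scope():
        g.ndata['h'] = h
        # aggregate({h_j : j in N(i)}) with a sum reducer
        g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'neigh'))
        rst = (1 + eps) * h + g.ndata['neigh']
        return apply_func(rst) if apply_func is not None else rst

GINConv additionally supports the max and mean aggregators, edge weights, bipartite inputs, and a learnable \(\epsilon\).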
- Parameters
apply_func (callable activation function/layer or None) – If not None, apply this function to the updated node features; this is the \(f_\Theta\) in the formula. Default: None.
aggregator_type (str) – Aggregator type to use (sum, max or mean). Default: 'sum'.
init_eps (float, optional) – Initial \(\epsilon\) value. Default: 0.
learn_eps (bool, optional) – If True, \(\epsilon\) will be a learnable parameter. Default: False.
activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
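As a usage note, the GIN paper applies a small MLP as \(f_\Theta\) and optionally learns \(\epsilon\). A typical construction might look like the sketch below; the hidden sizes (10 -> 16 -> 10) are arbitrary, chosen here only for illustration.

import torch.nn as nn
from dgl.nn import GINConv

# f_Theta as a two-layer MLP; the layer sizes are arbitrary.
mlp = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 10))
# Sum aggregation with a learnable epsilon.
conv = GINConv(apply_func=mlp, aggregator_type='sum', learn_eps=True)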
Examples
>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import GINConv
>>>
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
>>> feat = th.ones(6, 10)
>>> lin = th.nn.Linear(10, 10)
>>> conv = GINConv(lin, 'max')
>>> res = conv(g, feat)
>>> res
tensor([[-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.4821,  0.0207, -0.7665,  0.5721, -0.4682, -0.2134, -0.5236,  1.2855,
          0.8843, -0.8764],
        [-0.1804,  0.0758, -0.5159,  0.3569, -0.1408, -0.1395, -0.2387,  0.7773,
          0.5266, -0.4465]], grad_fn=<AddmmBackward>)
>>> # With activation
>>> from torch.nn.functional import relu
>>> conv = GINConv(lin, 'max', activation=relu)
>>> res = conv(g, feat)
>>> res
tensor([[5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [5.0118, 0.0000, 0.0000, 3.9091, 1.3371, 0.0000, 0.0000, 0.0000, 0.0000,
         0.0000],
        [2.5011, 0.0000, 0.0089, 2.0541, 0.8262, 0.0000, 0.0000, 0.1371, 0.0000,
         0.0000]], grad_fn=<ReluBackward0>)
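The weighted variant from the formula above can be exercised by passing edge_weight to forward (documented below). A sketch continuing the session above; the weights are random, for illustration only, and shaped \((E, 1)\) so they broadcast against the node features:

>>> # With edge weights (random, illustrative values)
>>> ew = th.rand(g.num_edges(), 1)
>>> conv = GINConv(lin, 'sum')
>>> res = conv(g, feat, edge_weight=ew)
>>> res.shape
torch.Size([6, 10])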
forward(graph, feat, edge_weight=None)

Compute Graph Isomorphism Network layer.
- Parameters
graph (DGLGraph) – The graph.
feat (torch.Tensor or pair of torch.Tensor) – If a torch.Tensor is given, the input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes. If a pair of torch.Tensor is given, the pair must contain two tensors of shape \((N_{in}, D_{in})\) and \((N_{out}, D_{in})\). If apply_func is not None, \(D_{in}\) should fit the input dimensionality requirement of apply_func.
edge_weight (torch.Tensor, optional) – Optional tensor on the edges. If given, messages from the source nodes are weighted by it before aggregation.
- Returns
The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the output dimensionality of apply_func. If apply_func is None, \(D_{out}\) is the same as the input dimensionality.
- Return type
torch.Tensor
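For the pair-of-tensors case, a minimal sketch on a unidirectional bipartite graph; the node types, graph structure, and feature sizes below are made up for illustration.

>>> # Source and destination node features on a bipartite graph
>>> g = dgl.heterograph({('user', 'plays', 'game'): ([0, 1, 2], [0, 0, 1])})
>>> feat_src = th.ones(3, 10)
>>> feat_dst = th.ones(2, 10)
>>> conv = GINConv(th.nn.Linear(10, 10), 'mean')
>>> res = conv(g, (feat_src, feat_dst))
>>> res.shape
torch.Size([2, 10])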