SAGEConv

class dgl.nn.mxnet.conv.SAGEConv(in_feats, out_feats, aggregator_type='mean', feat_drop=0.0, bias=True, norm=None, activation=None)[source]

Bases: mxnet.gluon.block.Block

GraphSAGE layer from Inductive Representation Learning on Large Graphs

\[\begin{aligned}
h_{\mathcal{N}(i)}^{(l+1)} &= \mathrm{aggregate}\left(\left\{h_{j}^{(l)}, \forall j \in \mathcal{N}(i)\right\}\right)\\
h_{i}^{(l+1)} &= \sigma\left(W \cdot \mathrm{concat}\left(h_{i}^{(l)}, h_{\mathcal{N}(i)}^{(l+1)}\right)\right)\\
h_{i}^{(l+1)} &= \mathrm{norm}\left(h_{i}^{(l+1)}\right)
\end{aligned}\]
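
With the default mean aggregator, these updates are easy to verify by hand. The following is a minimal sketch (not part of the SAGEConv API) that reproduces the aggregate and concat steps with plain MXNet operations on a toy neighborhood:

>>> import mxnet as mx
>>> # features h_j of three neighbors of node i, each 4-dimensional
>>> h_neigh = mx.nd.array([[1., 0., 0., 0.],
...                        [0., 1., 0., 0.],
...                        [0., 0., 1., 0.]])
>>> h_agg = h_neigh.mean(axis=0)            # mean aggregation over N(i)
>>> h_i = mx.nd.ones((1, 4))                # feature of node i itself
>>> z = mx.nd.concat(h_i, h_agg.reshape(1, -1), dim=1)
>>> z.shape                                 # W and sigma then map 8 -> out_feats
(1, 8)
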
Parameters
  • in_feats (int, or pair of ints) –

    Input feature size; i.e., the number of dimensions of \(h_i^{(l)}\).

    SAGEConv can be applied to homogeneous graphs and unidirectional bipartite graphs. If the layer is applied to a unidirectional bipartite graph, in_feats specifies the input feature sizes on the source and destination nodes. If a scalar is given, the source and destination node feature sizes take the same value.

    If the aggregator type is gcn, the feature size of the source and destination nodes is required to be the same.

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • aggregator_type (str) – Aggregator type to use (mean, gcn, pool, lstm).

  • feat_drop (float) – Dropout rate on features. Default: 0.

  • bias (bool) – If True, adds a learnable bias to the output. Default: True.

  • norm (callable activation function/layer or None, optional) – If not None, applies normalization to the updated node features.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None. A usage sketch for norm and activation follows this parameter list.
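
Both norm and activation accept any callable that maps an NDArray to an NDArray. Below is a minimal sketch, assuming a hypothetical row-wise L2 normalization helper (l2_norm is not part of DGL):

>>> import mxnet as mx
>>> from dgl.nn import SAGEConv
>>>
>>> def l2_norm(h):                # hypothetical helper, not a DGL API
...     return h / (h.norm(axis=1, keepdims=True) + 1e-12)
>>> conv = SAGEConv(10, 2, 'mean', feat_drop=0.5,
...                 norm=l2_norm, activation=mx.nd.relu)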

Examples

>>> import dgl
>>> import numpy as np
>>> import mxnet as mx
>>> from dgl.nn import SAGEConv
>>>
>>> # Case 1: Homogeneous graph
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = dgl.add_self_loop(g)
>>> feat = mx.nd.ones((6, 10))
>>> conv = SAGEConv(10, 2, 'pool')
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> res
[[ 0.32144994 -0.8729614 ]
[ 0.32144994 -0.8729614 ]
[ 0.32144994 -0.8729614 ]
[ 0.32144994 -0.8729614 ]
[ 0.32144994 -0.8729614 ]
[ 0.32144994 -0.8729614 ]]
<NDArray 6x2 @cpu(0)>
>>> # Case 2: Unidirectional bipartite graph
>>> u = [0, 1, 0, 0, 1]
>>> v = [0, 1, 2, 3, 2]
>>> g = dgl.heterograph({('_U', '_E', '_V'): (u, v)})
>>> u_fea = mx.nd.random.randn(2, 5)
>>> v_fea = mx.nd.random.randn(4, 10)
>>> conv = SAGEConv((5, 10), 2, 'pool')
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, (u_fea, v_fea))
>>> res
[[-0.60524774  0.7196473 ]
[ 0.8832787  -0.5928619 ]
[-1.8245722   1.159798  ]
[-1.0509381   2.2239418 ]]
<NDArray 4x2 @cpu(0)>
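>>> # Case 3 (minimal sketch): 'gcn' aggregator on a homogeneous graph;
>>> # as noted under in_feats, it requires equal source and destination
>>> # feature sizes. Output values depend on the random initialization,
>>> # so only the shape is shown.
>>> g = dgl.graph(([0, 1, 2], [1, 2, 0]))
>>> g = dgl.add_self_loop(g)
>>> feat = mx.nd.ones((3, 10))
>>> conv = SAGEConv(10, 2, 'gcn')
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> res.shape
(3, 2)
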
forward(graph, feat)[source]

Compute GraphSAGE layer.

Parameters
  • graph (DGLGraph) – The graph.

  • feat (mxnet.NDArray or pair of mxnet.NDArray) – If a single tensor is given, it represents the input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes. If a pair of tensors is given, the pair must contain two tensors of shape \((N_{in}, D_{in_{src}})\) and \((N_{out}, D_{in_{dst}})\).

Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.

Return type

mxnet.NDArray
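
As a quick check of the shape contract, here is a minimal sketch (reusing the imports from the Examples above and mirroring Case 2) showing that with a pair input the output has one row per destination node:

>>> u, v = [0, 1, 1], [0, 1, 2]
>>> g = dgl.heterograph({('_U', '_E', '_V'): (u, v)})
>>> conv = SAGEConv((5, 10), 2, 'mean')
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, (mx.nd.ones((2, 5)), mx.nd.ones((3, 10))))
>>> res.shape
(3, 2)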