CuGraphSAGEConv
- class dgl.nn.pytorch.conv.CuGraphSAGEConv(in_feats, out_feats, aggregator_type='mean', feat_drop=0.0, bias=True)[source]
Bases:
CuGraphBaseConv
An accelerated GraphSAGE layer from Inductive Representation Learning on Large Graphs that leverages the highly-optimized aggregation primitives in cugraph-ops:
\[ \begin{align}\begin{aligned}h_{\mathcal{N}(i)}^{(l+1)} &= \mathrm{aggregate} \left(\{h_{j}^{l}, \forall j \in \mathcal{N}(i) \}\right)\\h_{i}^{(l+1)} &= W \cdot \mathrm{concat} (h_{i}^{l}, h_{\mathcal{N}(i)}^{(l+1)})\end{aligned}\end{align} \]
This module depends on the pylibcugraphops package, which can be installed via conda install -c nvidia pylibcugraphops=23.04. pylibcugraphops 23.04 requires Python 3.8.x or 3.10.x.
Note
This is an experimental feature.
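The two-step update above can be sketched in plain PyTorch with the "mean" aggregator. This is an illustrative dense implementation (the function name `sage_mean_update` is made up for this sketch); the actual layer dispatches to fused cugraph-ops kernels instead:

```python
import torch

# Minimal sketch of the update rule above with the "mean" aggregator,
# written in plain PyTorch; not the cugraph-ops kernel the layer uses.
def sage_mean_update(feat, src, dst, weight):
    n = feat.size(0)
    # h_N(i): mean of neighbor features h_j over incoming edges
    agg = torch.zeros_like(feat).index_add_(0, dst, feat[src])
    deg = torch.zeros(n).index_add_(0, dst, torch.ones(dst.numel()))
    agg = agg / deg.clamp(min=1).unsqueeze(-1)
    # h_i^{(l+1)} = W . concat(h_i, h_N(i))
    return torch.cat([feat, agg], dim=-1) @ weight.T

feat = torch.ones(4, 3)        # 4 nodes, 3 input features
src = torch.tensor([0, 1, 2])  # edges 0->1, 1->2, 2->3
dst = torch.tensor([1, 2, 3])
weight = torch.ones(2, 6)      # fixed weights, shape (out_feats, 2*in_feats)
out = sage_mean_update(feat, src, dst, weight)
print(out)  # node 0 has no in-edges -> row [3., 3.]; the rest -> [6., 6.]
```

With all-ones features and weights, a node with at least one neighbor concatenates two all-ones vectors (sum 6), while node 0, which has no incoming edge, concatenates ones with zeros (sum 3).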
- Parameters:
in_feats (int) – Input feature size.
out_feats (int) – Output feature size.
aggregator_type (str, optional) – Aggregator type to use. Default: 'mean'.
feat_drop (float, optional) – Dropout rate on features. Default: 0.0.
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
Examples
>>> import dgl
>>> import torch
>>> from dgl.nn import CuGraphSAGEConv
>>> device = 'cuda'
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).to(device)
>>> g = dgl.add_self_loop(g)
>>> feat = torch.ones(6, 10).to(device)
>>> conv = CuGraphSAGEConv(10, 2, 'mean').to(device)
>>> res = conv(g, feat)
>>> res
tensor([[-1.1690,  0.1952],
        [-1.1690,  0.1952],
        [-1.1690,  0.1952],
        [-1.1690,  0.1952],
        [-1.1690,  0.1952],
        [-1.1690,  0.1952]], device='cuda:0', grad_fn=<AddmmBackward0>)
- forward(g, feat, max_in_degree=None)[source]
Forward computation.
- Parameters:
g (DGLGraph) – The graph.
feat (torch.Tensor) – Node features. Shape: \((N, D_{in})\).
max_in_degree (int) – Maximum in-degree of destination nodes. It is only effective when g is a DGLBlock, i.e., a bipartite graph. When g is generated from a neighbor sampler, the value should be set to the corresponding fanout. If not given, max_in_degree will be calculated on-the-fly.
- Returns:
Output node features. Shape: \((N, D_{out})\).
- Return type:
torch.Tensor
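When max_in_degree is not supplied, the fallback amounts to scanning the graph's destination in-degrees and taking the largest. A minimal pure-Python sketch of that computation (the helper name `max_in_degree` here is hypothetical, not a DGL API):

```python
from collections import Counter

def max_in_degree(dst_nodes):
    """Largest in-degree across destination nodes of an edge list.

    Hypothetical helper mirroring what the on-the-fly fallback
    has to compute when max_in_degree is not given.
    """
    return max(Counter(dst_nodes).values(), default=0)

dst = [1, 2, 3, 4, 0, 3]  # destination endpoints from the example graph
print(max_in_degree(dst))  # 2: node 3 receives two edges
```

Passing the sampler's fanout instead avoids this scan, which is why setting it explicitly is recommended when g comes from a neighbor sampler.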