EGATConv¶
class dgl.nn.pytorch.conv.EGATConv(in_node_feats, in_edge_feats, out_node_feats, out_edge_feats, num_heads, bias=True)[source]¶
Bases: torch.nn.modules.module.Module
Graph attention layer that handles edge features, from Rossmann-Toolbox (see supplementary data).
The difference lies in how unnormalized attention scores \(e_{ij}\) are obtained:
\[\begin{aligned}
e_{ij} &= \vec{F} (f_{ij}^{\prime})\\
f_{ij}^{\prime} &= \mathrm{LeakyReLU}\left(A [ h_{i} \| f_{ij} \| h_{j}]\right)
\end{aligned}\]
where \(f_{ij}^{\prime}\) are edge features, \(A\) is a weight matrix and \(\vec{F}\) is a weight vector. After that, the resulting node features \(h_{i}^{\prime}\) are updated in the same way as in regular GAT.
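For intuition, the score computation above can be sketched for a single edge and a single head. The dimensions and the nn.Linear parametrization below are illustrative assumptions, not the layer's actual internals:

>>> import torch as th
>>> import torch.nn as nn
>>> import torch.nn.functional as Fn
>>> in_node, in_edge, out_edge = 20, 12, 10
>>> A = nn.Linear(2 * in_node + in_edge, out_edge, bias=False)  # weight matrix A
>>> F_vec = nn.Linear(out_edge, 1, bias=False)                  # weight vector F
>>> h_i, h_j = th.rand(in_node), th.rand(in_node)  # endpoint node features
>>> f_ij = th.rand(in_edge)                        # edge feature f_ij
>>> f_prime = Fn.leaky_relu(A(th.cat([h_i, f_ij, h_j])))  # f'_ij
>>> e_ij = F_vec(f_prime)  # unnormalized attention score e_ij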
- Parameters
in_node_feats (int) – Input node feature size \(h_{i}\).
in_edge_feats (int) – Input edge feature size \(f_{ij}\).
out_node_feats (int) – Output node feature size.
out_edge_feats (int) – Output edge feature size \(f_{ij}^{\prime}\).
num_heads (int) – Number of attention heads.
bias (bool, optional) – If True, add a bias term to \(f_{ij}^{\prime}\). Default: True.
Examples
>>> import dgl
>>> import torch as th
>>> from dgl.nn import EGATConv

>>> num_nodes, num_edges = 8, 30
>>> # generate a graph
>>> graph = dgl.rand_graph(num_nodes, num_edges)
>>> node_feats = th.rand((num_nodes, 20))
>>> edge_feats = th.rand((num_edges, 12))
>>> egat = EGATConv(in_node_feats=20,
...                 in_edge_feats=12,
...                 out_node_feats=15,
...                 out_edge_feats=10,
...                 num_heads=3)
>>> # forward pass
>>> new_node_feats, new_edge_feats = egat(graph, node_feats, edge_feats)
>>> new_node_feats.shape, new_edge_feats.shape
(torch.Size([8, 3, 15]), torch.Size([30, 3, 10]))
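The outputs keep a separate head dimension. A common follow-up step, assuming a downstream model that expects single-vector features (this is a usage choice, not something EGATConv does internally), is to merge heads by concatenation or averaging:

>>> # concatenate heads for nodes, average heads for edges (illustrative)
>>> new_node_feats.flatten(1).shape
torch.Size([8, 45])
>>> new_edge_feats.mean(dim=1).shape
torch.Size([30, 10])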
forward(graph, nfeats, efeats, get_attention=False)[source]¶
Compute new node and edge features.
- Parameters
graph (DGLGraph) – The graph.
nfeats (torch.Tensor) – The input node feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input node feature and \(N\) is the number of nodes.
efeats (torch.Tensor) – The input edge feature of shape \((E, F_{in})\), where \(F_{in}\) is the size of the input edge feature and \(E\) is the number of edges.
get_attention (bool, optional) – Whether to return the attention values. Defaults to False.
- Returns
pair of torch.Tensor – node output features followed by edge output features. The node output feature has shape \((N, H, D_{out})\) and the edge output feature has shape \((E, H, F_{out})\), where \(H\) is the number of heads, \(D_{out}\) is the size of the output node feature, and \(F_{out}\) is the size of the output edge feature.
torch.Tensor, optional – The attention values of shape \((E, H, 1)\). This is returned only when get_attention is True.
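For example, continuing the setup from the Examples above, passing get_attention=True returns the per-edge, per-head attention values as a third output:

>>> new_node_feats, new_edge_feats, attn = egat(graph, node_feats, edge_feats,
...                                             get_attention=True)
>>> attn.shape
torch.Size([30, 3, 1])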