HGTConv

class dgl.nn.pytorch.conv.HGTConv(in_size, head_size, num_heads, num_ntypes, num_etypes, dropout=0.2, use_norm=False)[source]

Bases: torch.nn.modules.module.Module

Heterogeneous graph transformer convolution from Heterogeneous Graph Transformer

Given a graph \(G(V, E)\) and input node features \(H^{(l-1)}\), it computes the new node features as follows:

Compute a multi-head attention score for each edge \((s, e, t)\) in the graph:

\[\begin{split}Attention(s, e, t) = \text{Softmax}\left(||_{i\in[1,h]}ATT-head^i(s, e, t)\right) \\ ATT-head^i(s, e, t) = \left(K^i(s)W^{ATT}_{\phi(e)}Q^i(t)^{\top}\right)\cdot \frac{\mu_{(\tau(s),\phi(e),\tau(t))}}{\sqrt{d}} \\ K^i(s) = \text{K-Linear}^i_{\tau(s)}(H^{(l-1)}[s]) \\ Q^i(t) = \text{Q-Linear}^i_{\tau(t)}(H^{(l-1)}[t]) \\\end{split}\]

Compute the message to send on each edge \((s, e, t)\):

\[\begin{split}Message(s, e, t) = ||_{i\in[1, h]} MSG-head^i(s, e, t) \\ MSG-head^i(s, e, t) = \text{M-Linear}^i_{\tau(s)}(H^{(l-1)}[s])W^{MSG}_{\phi(e)} \\\end{split}\]

Send messages to target nodes \(t\) and aggregate:

\[\tilde{H}^{(l)}[t] = \sum_{\forall s\in \mathcal{N}(t)}\left( Attention(s,e,t) \cdot Message(s,e,t)\right)\]

Compute new node features:

\[H^{(l)}[t]=\text{A-Linear}_{\tau(t)}(\sigma(\tilde{H}^{(l)}[t])) + H^{(l-1)}[t]\]
Parameters
  • in_size (int) – Input node feature size.

  • head_size (int) – Output head size. The output node feature size is head_size * num_heads.

  • num_heads (int) – Number of heads. The output node feature size is head_size * num_heads.

  • num_ntypes (int) – Number of node types.

  • num_etypes (int) – Number of edge types.

  • dropout (float, optional) – Dropout rate. Default: 0.2.

  • use_norm (bool, optional) – If True, apply a layer norm on the output node features. Default: False.

Examples

forward(g, x, ntype, etype, *, presorted=False)[source]

Forward computation.

Parameters
  • g (DGLGraph) – The input graph.

  • x (torch.Tensor) – A 2D tensor of node features. Shape: \((|V|, D_{in})\).

  • ntype (torch.Tensor) – A 1D integer tensor of node types. Shape: \((|V|,)\).

  • etype (torch.Tensor) – A 1D integer tensor of edge types. Shape: \((|E|,)\).

  • presorted (bool, optional) – Whether both the nodes and the edges of the input graph have been sorted by their types. Forward computation on a pre-sorted graph may be faster. Graphs created by to_homogeneous() automatically satisfy the condition. Also see reorder_graph() for manually reordering the nodes and edges.

Returns

New node features. Shape: \((|V|, D_{head} * N_{head})\).

Return type

torch.Tensor