AtomicConv¶
class dgl.nn.pytorch.conv.AtomicConv(interaction_cutoffs, rbf_kernel_means, rbf_kernel_scaling, features_to_use=None)[source]¶

Bases: torch.nn.modules.module.Module
Atomic Convolution Layer from Atomic Convolutional Networks for Predicting Protein-Ligand Binding Affinity
Denote the type of atom \(i\) by \(z_i\) and the distance between atoms \(i\) and \(j\) by \(r_{ij}\).
Distance Transformation
An atomic convolution layer first transforms distances with radial filters and then performs a pooling operation.
For the radial filter indexed by \(k\), it projects edge distances with

\[h_{ij}^{k} = \exp(-\gamma_{k}|r_{ij}-r_{k}|^2)\]

If \(r_{ij} < c_k\),

\[f_{ij}^{k} = 0.5 * \left(\cos\left(\frac{\pi r_{ij}}{c_k}\right) + 1\right),\]

otherwise,

\[f_{ij}^{k} = 0.\]

Finally,

\[e_{ij}^{k} = h_{ij}^{k} * f_{ij}^{k}\]
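To make the transformation concrete, below is a minimal plain-PyTorch sketch of the three equations for a single radial filter; the tensor values and variable names are illustrative, not part of the layer's API.

import math
import torch

# Pairwise distances r_ij, one per edge; values are made up.
r = torch.tensor([0.8, 1.5, 2.5])
# Parameters of a single radial filter k (illustrative scalars).
gamma, r_k, c_k = 1.0, 1.0, 2.0

h = torch.exp(-gamma * (r - r_k) ** 2)          # RBF part h_ij^k
f = torch.where(r < c_k,
                0.5 * (torch.cos(math.pi * r / c_k) + 1),
                torch.zeros_like(r))            # cosine cutoff f_ij^k
e = h * f                                       # transformed distance e_ij^k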
Aggregation

For each type \(t\), each atom collects distance information from all neighbor atoms of type \(t\):

\[p_{i, t}^{k} = \sum_{j\in N(i)} e_{ij}^{k} * 1(z_j == t)\]

The results are then concatenated over all radial filters and atom types.
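As a minimal illustration of this sum for one atom \(i\), one radial filter \(k\) and one type \(t\) (all values below are made up):

import torch

# e_ij^k for the edges from atom i's neighbors, and the neighbors' types z_j.
e = torch.tensor([0.5, 0.3, 0.2])
z = torch.tensor([6., 8., 6.])
t = 6.

# Keep only contributions from neighbors of type t: 0.5 + 0.2 = 0.7.
p_it = (e * (z == t).float()).sum()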
- Parameters
interaction_cutoffs (float32 tensor of shape (K)) – \(c_k\) in the equations above. Roughly, these can be considered learnable cutoffs: two atoms are considered connected if the distance between them is smaller than the cutoff. K for the number of radial filters.
rbf_kernel_means (float32 tensor of shape (K)) – \(r_k\) in the equations above. K for the number of radial filters.
rbf_kernel_scaling (float32 tensor of shape (K)) – \(\gamma_k\) in the equations above. K for the number of radial filters.
features_to_use (None or float tensor of shape (T)) – In the original paper, these are the atomic numbers to consider, representing the types of atoms. T for the number of atom types. Default to None. See the sketch after this list.
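For instance, one could restrict the aggregation to carbon, nitrogen and oxygen by passing their atomic numbers; this is a hypothetical construction and the parameter values are illustrative:

import torch as th
from dgl.nn import AtomicConv

# With K = 3 radial filters and T = 3 atom types (C, N, O), the layer
# produces K * T = 9 columns per node.
conv = AtomicConv(interaction_cutoffs=th.ones(3).float() * 2,
                  rbf_kernel_means=th.ones(3).float(),
                  rbf_kernel_scaling=th.ones(3).float(),
                  features_to_use=th.tensor([6., 7., 8.]))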
Note
This convolution operation is designed for molecular graphs in Chemistry, but it might be possible to extend it to more general graphs.
There seems to be an inconsistency between the definition of \(e_{ij}^{k}\) in the paper and the one in the authors' implementation; we follow the authors' implementation. In the paper, \(e_{ij}^{k}\) was defined as \(\exp(-\gamma_{k}|r_{ij}-r_{k}|^2 * f_{ij}^{k})\).
\(\gamma_{k}\), \(r_k\) and \(c_k\) are all learnable.
Example
>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import AtomicConv

>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
>>> feat = th.ones(6, 1)
>>> edist = th.ones(6, 1)
>>> interaction_cutoffs = th.ones(3).float() * 2
>>> rbf_kernel_means = th.ones(3).float()
>>> rbf_kernel_scaling = th.ones(3).float()
>>> conv = AtomicConv(interaction_cutoffs, rbf_kernel_means, rbf_kernel_scaling)
>>> res = conv(g, feat, edist)
>>> res
tensor([[0.5000, 0.5000, 0.5000],
        [0.5000, 0.5000, 0.5000],
        [0.5000, 0.5000, 0.5000],
        [1.0000, 1.0000, 1.0000],
        [0.5000, 0.5000, 0.5000],
        [0.0000, 0.0000, 0.0000]], grad_fn=<ViewBackward>)
forward(graph, feat, distances)[source]¶

Apply the atomic convolution layer.
- Parameters
graph (DGLGraph) – Topology based on which message passing is performed.
feat (Float32 tensor of shape \((V, 1)\)) – Initial node features, which are atomic numbers in the paper. \(V\) for the number of nodes.
distances (Float32 tensor of shape \((E, 1)\)) – Distance between the end nodes of edges. \(E\) for the number of edges.
- Returns
Updated node representations. \(V\) for the number of nodes, \(K\) for the number of radial filters, and \(T\) for the number of types of atomic numbers.
- Return type
Float32 tensor of shape \((V, K * T)\)
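As a quick shape check mirroring the example above (a sketch; the choice of features_to_use here is illustrative):

import dgl
import torch as th
from dgl.nn import AtomicConv

g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
feat = th.ones(6, 1)    # atomic numbers for V = 6 nodes
edist = th.ones(6, 1)   # distances for E = 6 edges

# K = 3 radial filters and T = 2 atom types give output shape (V, K * T).
conv = AtomicConv(th.ones(3).float() * 2, th.ones(3).float(),
                  th.ones(3).float(), features_to_use=th.tensor([1., 6.]))
res = conv(g, feat, edist)
assert res.shape == (6, 6)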