TWIRLSUnfoldingAndAttention

class dgl.nn.pytorch.conv.TWIRLSUnfoldingAndAttention(d, alp, lam, prop_step, attn_aft=-1, tau=0.2, T=-1, p=1, use_eta=False, init_att=False, attn_dropout=0, precond=True)

Bases: torch.nn.modules.module.Module
Combine propagation and attention in a single module.
- Parameters
d (int) – Size of graph feature.
alp (float) – Step size. \(\alpha\) in the paper.
lam (int) – Coefficient of the graph smoothing term. \(\lambda\) in the paper.
prop_step (int) – Number of propagation steps.
attn_aft (int) – Where to put the attention layer, i.e. the number of propagation steps before attention. If set to -1, no attention is used (see the second example below for an attention-enabled configuration).
tau (float) – The lower thresholding parameter. Corresponds to \(\tau\) in the paper.
T (float) – The upper thresholding parameter. Corresponds to \(T\) in the paper.
p (float) – Corresponds to \(\rho\) in the paper.
use_eta (bool) – If True, learn a weight vector for each dimension when doing attention.
init_att (bool) – If True, add an extra attention layer before propagation.
attn_dropout (float) – The dropout rate of attention values. Default: 0.0.
precond (bool) – If True, use the preconditioned and reparameterized version of propagation (Eq. 28); otherwise use the normalized Laplacian version (Eq. 30).
Example
>>> import dgl
>>> from dgl.nn import TWIRLSUnfoldingAndAttention
>>> import torch as th

>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).add_self_loop()
>>> feat = th.ones(6, 5)
>>> prop = TWIRLSUnfoldingAndAttention(10, 1, 1, prop_step=3)
>>> res = prop(g, feat)
>>> res
tensor([[2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [3.7656, 3.7656, 3.7656, 3.7656, 3.7656],
        [2.5217, 2.5217, 2.5217, 2.5217, 2.5217],
        [4.0000, 4.0000, 4.0000, 4.0000, 4.0000]])
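The example above runs pure propagation. The sketch below (not part of the original example) illustrates one attention-enabled configuration: attn_aft=2 places the attention layer after two of the four propagation steps, and use_eta=True learns one weight per feature dimension, so d is assumed to match the feature size. The hyperparameter values are illustrative only; the call should return a tensor with the same shape as the input features.

>>> # Hypothetical configuration: attention after 2 of 4 propagation steps,
>>> # with a learned per-dimension weight vector (use_eta=True).
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).add_self_loop()
>>> feat = th.rand(6, 5)
>>> prop = TWIRLSUnfoldingAndAttention(5, 1, 1, prop_step=4, attn_aft=2, use_eta=True)
>>> res = prop(g, feat)
>>> res.shape
torch.Size([6, 5])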