RelGraphConv

class dgl.nn.tensorflow.conv.RelGraphConv(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer

Relational graph convolution layer from Modeling Relational Data with Graph Convolutional Networks

It can be described as follows:

\[h_i^{(l+1)} = \sigma(\sum_{r\in\mathcal{R}} \sum_{j\in\mathcal{N}^r(i)}\frac{1}{c_{i,r}}W_r^{(l)}h_j^{(l)}+W_0^{(l)}h_i^{(l)})\]

where \(\mathcal{N}^r(i)\) is the neighbor set of node \(i\) w.r.t. relation \(r\). \(c_{i,r}\) is the normalizer equal to \(|\mathcal{N}^r(i)|\). \(\sigma\) is an activation function. \(W_0\) is the self-loop weight.
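To make the equation concrete, here is a minimal TensorFlow sketch of this propagation rule on a toy graph. It is not DGL's implementation; all names and shapes (N, R, src, dst, rel) are illustrative assumptions, with the per-relation in-degree used for \(c_{i,r}\) and ReLU for \(\sigma\):

import numpy as np
import tensorflow as tf

# Toy setup (illustrative shapes): N nodes, R relations, d_in -> d_out.
N, R, d_in, d_out = 4, 2, 5, 3
h = tf.random.normal((N, d_in))             # h^{(l)}
W = tf.random.normal((R, d_in, d_out))      # one W_r^{(l)} per relation
W0 = tf.random.normal((d_in, d_out))        # self-loop weight W_0^{(l)}

# Edges as (src, dst, relation) triples.
src = np.array([0, 1, 2, 3])
dst = np.array([1, 2, 3, 0])
rel = np.array([0, 1, 0, 1])

out = h @ W0                                # self-loop term W_0^{(l)} h_i^{(l)}
for r in range(R):
    mask = rel == r
    msg = tf.gather(h, src[mask]) @ W[r]    # W_r^{(l)} h_j^{(l)} per edge
    agg = tf.math.unsorted_segment_sum(msg, dst[mask], N)
    deg = tf.math.unsorted_segment_sum(tf.ones((int(mask.sum()), 1)), dst[mask], N)
    out += agg / tf.maximum(deg, 1.0)       # 1/c_{i,r} with c_{i,r} = |N^r(i)|
out = tf.nn.relu(out)                       # sigma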

The basis regularization decomposes \(W_r\) by:

\[W_r^{(l)} = \sum_{b=1}^B a_{rb}^{(l)}V_b^{(l)}\]

where \(B\) is the number of bases and the basis matrices \(V_b^{(l)}\) are shared across all relations, combined with relation-specific coefficients \(a_{rb}^{(l)}\).
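A minimal sketch of this decomposition (shapes are illustrative assumptions, not DGL's internal layout):

import tensorflow as tf

R, B, d_in, d_out = 4, 2, 5, 3
V = tf.random.normal((B, d_in, d_out))   # shared basis matrices V_b^{(l)}
a = tf.random.normal((R, B))             # per-relation coefficients a_{rb}^{(l)}

# W_r = sum_b a_{rb} V_b, materialized for all R relations at once.
W = tf.einsum('rb,bio->rio', a, V)       # shape (R, d_in, d_out)

Only \(B\) full-size matrices plus \(R \times B\) scalar coefficients are learned, instead of \(R\) full-size matrices.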

The block-diagonal-decomposition regularization decomposes each \(W_r\) into a block-diagonal matrix with \(B\) blocks. We refer to \(B\) as the number of bases. Formally, the block decomposition is:

\[W_r^{(l)} = \oplus_{b=1}^B Q_{rb}^{(l)}\]

where \(B\) is the number of bases and \(Q_{rb}^{(l)}\) are block bases with shape \(\mathbb{R}^{(d^{(l+1)}/B) \times (d^{(l)}/B)}\).
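Multiplying by such a block-diagonal \(W_r\) never materializes the full matrix. A minimal sketch, assuming illustrative shapes and that \(d^{(l)}\) and \(d^{(l+1)}\) are divisible by \(B\):

import tensorflow as tf

R, B, d_in, d_out = 4, 2, 6, 4                       # d_in, d_out divisible by B
Q = tf.random.normal((R, B, d_in // B, d_out // B))  # block bases Q_{rb}^{(l)}
h = tf.random.normal((10, d_in))                     # 10 node features
r = 0                                                # pick one relation

# h @ W_r as B independent per-block matmuls: split h into B chunks,
# multiply each chunk by its block, and concatenate the results.
h_blocks = tf.reshape(h, (-1, B, d_in // B))
out = tf.einsum('nbi,bio->nbo', h_blocks, Q[r])
out = tf.reshape(out, (-1, d_out))                   # shape (10, d_out)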

Parameters
  • in_feat (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).

  • out_feat (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • num_rels (int) – Number of relations.

  • regularizer (str) – Which weight regularizer to use: "basis" or "bdd". "basis" is short for basis-decomposition; "bdd" is short for block-diagonal-decomposition (a "bdd" usage sketch follows the examples below).

  • num_bases (int, optional) – Number of bases. If None, the number of relations is used. Default: None.

  • bias (bool, optional) – True if bias is added. Default: True.

  • activation (callable, optional) – Activation function. Default: None.

  • self_loop (bool, optional) – True to include the self-loop message. Default: True.

  • low_mem (bool, optional) – True to use a low-memory implementation of the relation message passing function. This option trades speed for memory and will slow down the forward/backward passes; turn it on when you encounter an out-of-memory error during training or evaluation. Default: False.

  • dropout (float, optional) – Dropout rate. Default: 0.0

  • layer_norm (bool, optional) – True to add layer normalization. Default: False.

Examples

>>> import dgl
>>> import numpy as np
>>> import tensorflow as tf
>>> from dgl.nn import RelGraphConv
>>>
>>> with tf.device("CPU:0"):
>>>     g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>>     feat = tf.ones((6, 10))
>>>     conv = RelGraphConv(10, 2, 3, regularizer='basis', num_bases=2)
>>>     etype = tf.convert_to_tensor(np.array([0,1,2,0,1,2]).astype(np.int64))
>>>     res = conv(g, feat, etype)
>>>     res
<tf.Tensor: shape=(6, 2), dtype=float32, numpy=
array([[-0.02938664,  1.7932655 ],
       [ 0.1146394 ,  0.48319   ],
       [-0.02938664,  1.7932655 ],
       [ 1.2054908 , -0.26098895],
       [ 0.1146394 ,  0.48319   ],
       [ 0.75915515,  1.1454091 ]], dtype=float32)>
>>> # One-hot input: an integer tensor of node IDs is treated as one-hot features
>>> with tf.device("CPU:0"):
>>>     one_hot_feat = tf.convert_to_tensor(np.array([0,1,2,3,4,5]).astype(np.int64))
>>>     res = conv(g, one_hot_feat, etype)
>>>     res
<tf.Tensor: shape=(6, 2), dtype=float32, numpy=
array([[-0.24205256, -0.7922753 ],
       [ 0.62085056,  0.4893622 ],
       [-0.9484881 , -0.26546806],
       [-0.2163915 , -0.12585883],
       [-0.14293689,  0.77483284],
       [ 0.091169  , -0.06761569]], dtype=float32)>
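
A sketch of the "bdd" regularizer in the same setting (output omitted; note that in_feat and out_feat must both be divisible by num_bases):

>>> # Block-diagonal-decomposition ("bdd") regularizer
>>> with tf.device("CPU:0"):
>>>     conv_bdd = RelGraphConv(10, 2, 3, regularizer='bdd', num_bases=2)
>>>     res = conv_bdd(g, feat, etype)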