dgl.DGLGraph.send_and_recv

DGLGraph.send_and_recv(edges, message_func, reduce_func, apply_node_func=None, etype=None, inplace=False)

Send messages along the specified edges and reduce them on the destination nodes to update their features.

Parameters
  • edges (edges) –

    The edges to send and receive messages on. The allowed input formats are:

    • int: A single edge ID.

    • Int Tensor: Each element is an edge ID. The tensor must have the same device type and ID data type as the graph’s.

    • iterable[int]: Each element is an edge ID.

    • (Tensor, Tensor): The node-tensors format where the i-th elements of the two tensors specify an edge.

    • (iterable[int], iterable[int]): Similar to the node-tensors format but stores the edge endpoints in Python iterables.

  • message_func (dgl.function.BuiltinFunction or callable) – The message function to generate messages along the edges. It must be either a DGL Built-in Function or a User-defined Function.

  • reduce_func (dgl.function.BuiltinFunction or callable) – The reduce function to aggregate the messages. It must be either a DGL Built-in Function or a User-defined Function.

  • apply_node_func (callable, optional) – An optional apply function to further update the node features after the message reduction. It must be a User-defined Function.

  • etype (str or (str, str, str), optional) –

    The type name of the edges. The allowed type name formats are:

    • (str, str, str) for source node type, edge type and destination node type.

    • or a single str for the edge type name, if it can uniquely identify a triplet format in the graph.

    Can be omitted if the graph has only one edge type.

  • inplace (bool, optional) – DEPRECATED.

Notes

DGL recommends using DGL's built-in functions for the message_func and reduce_func arguments, because DGL can then invoke efficient kernels that avoid copying node features to edge features.
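
For instance, the two calls in the minimal sketch below compute the same reduction; the first one allows DGL to fuse the message and reduce steps into one kernel, while the second materializes a message tensor on every edge:

>>> import dgl
>>> import dgl.function as fn
>>> import torch
>>> g = dgl.graph(([0, 1], [1, 2]))
>>> g.ndata['x'] = torch.ones(3, 1)
>>> # Built-in functions: DGL can fuse message generation and reduction.
>>> g.send_and_recv(g.edges(), fn.copy_u('x', 'm'), fn.sum('m', 'h'))
>>> # Equivalent user-defined functions: messages are materialized per edge.
>>> g.send_and_recv(g.edges(),
...                 lambda edges: {'m': edges.src['x']},
...                 lambda nodes: {'h': nodes.mailbox['m'].sum(1)})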

Examples

>>> import dgl
>>> import dgl.function as fn
>>> import torch

Homogeneous graph

>>> g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 4]))
>>> g.ndata['x'] = torch.ones(5, 2)
>>> # Specify edges as a (source nodes, destination nodes) pair.
>>> g.send_and_recv(([1, 2], [2, 3]), fn.copy_u('x', 'm'), fn.sum('m', 'h'))
>>> g.ndata['h']
tensor([[0., 0.],
        [0., 0.],
        [1., 1.],
        [1., 1.],
        [0., 0.]])
>>> # Specify edges using IDs.
>>> g.send_and_recv([0, 2, 3], fn.copy_u('x', 'm'), fn.sum('m', 'h'))
>>> g.ndata['h']
tensor([[0., 0.],
        [1., 1.],
        [0., 0.],
        [1., 1.],
        [1., 1.]])
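
Edge IDs can also be given as an Int Tensor; the following sketch assumes the graph uses the default int64 ID type:

>>> # Specify edges using an Int Tensor of edge IDs.
>>> g.send_and_recv(torch.tensor([0, 2, 3]), fn.copy_u('x', 'm'), fn.sum('m', 'h'))
>>> g.ndata['h']
tensor([[0., 0.],
        [1., 1.],
        [0., 0.],
        [1., 1.],
        [1., 1.]])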

Heterogeneous graph

>>> g = dgl.heterograph({
...     ('user', 'follows', 'user'): ([0, 1], [1, 2]),
...     ('user', 'plays', 'game'): ([0, 1, 1, 2], [0, 0, 1, 1])
... })
>>> g.nodes['user'].data['h'] = torch.tensor([[0.], [1.], [2.]])
>>> g.send_and_recv(g['follows'].edges(), fn.copy_u('h', 'm'),
...                 fn.sum('m', 'h'), etype='follows')
>>> g.nodes['user'].data['h']
tensor([[0.],
        [0.],
        [1.]])
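
The edge type can equally be given as a (str, str, str) triplet. Continuing from the state above, the following sketch updates the game nodes along the 'plays' relation:

>>> # Specify the edge type with a (source type, edge type, destination type) triplet.
>>> g.send_and_recv(g['plays'].edges(), fn.copy_u('h', 'm'),
...                 fn.sum('m', 'h'), etype=('user', 'plays', 'game'))
>>> g.nodes['game'].data['h']
tensor([[0.],
        [1.]])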

``send_and_recv`` using user-defined functions

>>> import torch as th
>>> g = dgl.graph(([0, 1], [1, 2]))
>>> g.ndata['x'] = th.tensor([[1.], [2.], [3.]])
>>> # Define the function for sending node features as messages.
>>> def send_source(edges):
...     return {'m': edges.src['x']}
>>> # Sum the messages received and use this to replace the original node feature.
>>> def simple_reduce(nodes):
...     return {'x': nodes.mailbox['m'].sum(1)}

Send and receive messages.

>>> g.send_and_recv(g.edges(), send_source, simple_reduce)
>>> g.ndata['x']
tensor([[1.],
        [1.],
        [2.]])

Note that the feature of node 0 remains the same as it has no incoming edges.
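
The optional apply_node_func can further transform the reduced features of the nodes that received messages. The following sketch reuses the user-defined functions above together with a hypothetical double_feature function:

>>> # Further transform the reduced features of the receiving nodes.
>>> def double_feature(nodes):
...     return {'x': nodes.data['x'] * 2}
>>> g.send_and_recv(g.edges(), send_source, simple_reduce,
...                 apply_node_func=double_feature)
>>> g.ndata['x']
tensor([[1.],
        [2.],
        [2.]])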