dgl.broadcast_edges

dgl.broadcast_edges(graph, graph_feat, *, etype=None)[source]

Generate an edge feature equal to the graph-level feature graph_feat.

The operation is similar to numpy.repeat (or torch.repeat_interleave). It is commonly used to normalize edge features by a global vector. For example, to normalize edge features across a batch of graphs to the range \([0, 1)\):

>>> g = dgl.batch([...])  # batch multiple graphs
>>> g.edata['h'] = ...  # some edge features
>>> h_sum = dgl.broadcast_edges(g, dgl.sum_edges(g, 'h'))
>>> g.edata['h'] /= h_sum  # normalize by summation
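
A complete, runnable sketch of this pattern (the two toy graphs and random edge features below are assumed purely for illustration):

>>> import dgl
>>> import torch as th
>>> g = dgl.batch([dgl.graph(([0], [1])), dgl.graph(([0, 1], [1, 2]))])
>>> g.edata['h'] = th.rand(g.num_edges(), 3)                # random edge features
>>> h_sum = dgl.broadcast_edges(g, dgl.sum_edges(g, 'h'))   # per-graph sums, repeated per edge
>>> g.edata['h'] = g.edata['h'] / h_sum                     # each graph's edge features now sum to 1
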
Parameters
  • graph (DGLGraph) – The graph.

  • graph_feat (tensor) – The feature to broadcast. Tensor shape is \((B, *)\) for a batched graph, where \(B\) is the batch size.

  • etype (str or tuple of str, optional) – Edge type. Can be omitted if the graph has only one edge type (a heterograph sketch is given at the end of the Examples section).

Returns

The edge feature tensor, of shape \((M, *)\), where \(M\) is the number of edges.

Return type

Tensor

Examples

>>> import dgl
>>> import torch as th

Create two DGLGraph objects, batch them, and create a graph-level feature tensor to broadcast.

>>> g1 = dgl.graph(([0], [1]))                    # Graph 1
>>> g2 = dgl.graph(([0, 1], [1, 2]))              # Graph 2
>>> bg = dgl.batch([g1, g2])
>>> feat = th.rand(2, 5)
>>> feat
tensor([[0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014]])

Broadcast the feature to all edges in the batched graph; feat[i] is broadcast to the edges of the i-th graph in the batch.

>>> dgl.broadcast_edges(bg, feat)
tensor([[0.4325, 0.7710, 0.5541, 0.0544, 0.9368],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014]])

Broadcast the feature to all edges in a single graph (the feature tensor to broadcast should have shape \((1, *)\)).

>>> feat1 = th.unsqueeze(feat[1], 0)
>>> dgl.broadcast_edges(g2, feat1)
tensor([[0.2721, 0.4629, 0.7269, 0.0724, 0.1014],
        [0.2721, 0.4629, 0.7269, 0.0724, 0.1014]])
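
For a graph with multiple edge types, pass etype to select which edges receive the feature. A minimal sketch; the user/plays/game schema below is hypothetical and only meant to illustrate the call:

>>> hg = dgl.heterograph({
...     ('user', 'follows', 'user'): ([0], [1]),
...     ('user', 'plays', 'game'): ([0, 1], [0, 0])})
>>> feat = th.rand(1, 5)    # shape (1, *) for a single (non-batched) graph
>>> dgl.broadcast_edges(hg, feat, etype='plays').shape
torch.Size([2, 5])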