`dgl.adj_sum_graph(graphs, weight_name)`[source]

Create a weighted graph whose adjacency matrix is the sum of the adjacency matrices of the given graphs. In each adjacency matrix, rows represent source nodes and columns represent destination nodes.

All the graphs must be simple graphs with only one edge type. They must also have the same metagraph, i.e. the same source node type and the same destination node type. Moreover, every graph must have the same number of nodes.

The metagraph of the returned graph will be the same as the input graphs.

Unlike `scipy`, edges whose weight is zero in the result graph are not removed from the graph.
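For contrast, here is a minimal `scipy` sketch (independent of DGL) showing the behavior this note refers to: `scipy` sparse arithmetic prunes entries whose values become zero, whereas `dgl.adj_sum_graph` keeps the corresponding edges with a zero weight.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Two sparse matrices whose entries cancel exactly.
A = csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
B = -A

# scipy prunes the cancelled entries from the result of the addition.
C = A + B
print(C.nnz)  # 0: the zero-valued entries are dropped
```

`dgl.adj_sum_graph` would instead keep those edges in the returned graph, with `0` stored in the edge weight feature.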

Notes

This function works on both CPU and GPU. On GPU, the number of nodes and edges must be less than the maximum value of `int32` (i.e. `2 ** 31 - 1`) due to a restriction of cuSPARSE.

The edge weights returned by this function are differentiable w.r.t. the input edge weights.

If the graph format is restricted, all graphs must have the CSR format available.

Parameters
• graphs (list[DGLGraph]) – The list of graphs. Must have at least one element.

• weight_name (str) –

The feature name of the edge weight of all graphs.

The corresponding edge feature must be a scalar.

Returns

The new graph. The edge weight of the returned graph will have the feature name given by `weight_name`.

Return type

DGLGraph

Examples

The following shows weighted adjacency matrix summation between two bipartite graphs. You can also perform this between homogeneous graphs.

```
>>> A = dgl.heterograph(
...     {('A', 'AB', 'B'): ([2, 2, 0, 2, 0, 1], [2, 1, 0, 0, 2, 2])},
...     num_nodes_dict={'A': 3, 'B': 4})
>>> B = dgl.heterograph(
...     {('A', 'AB', 'B'): ([1, 2, 0, 2, 1, 0], [0, 3, 2, 1, 3, 3])},
...     num_nodes_dict={'A': 3, 'B': 4})
```

If your graph is a multigraph, call `dgl.to_simple()` to convert it into a simple graph first.

```
>>> A = dgl.to_simple(A)
>>> B = dgl.to_simple(B)
```

Initialize learnable edge weights.

```
>>> A.edata['w'] = torch.randn(6).requires_grad_()
>>> B.edata['w'] = torch.randn(6).requires_grad_()
```

Take the sum.

```
>>> C = dgl.adj_sum_graph([A, B], 'w')
>>> C.edges()
(tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 2]),
tensor([0, 2, 3, 2, 0, 3, 0, 1, 2, 3]))
```
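As a sanity check, the same edge set can be reproduced with plain `numpy` (a sketch independent of DGL): build the two 3 × 4 adjacency matrices from the edge lists above, sum them, and read off the nonzero positions. The result matches the edges of `C`, up to edge ordering.

```python
import numpy as np

# Edge lists from the example above (source, destination).
src_a, dst_a = [2, 2, 0, 2, 0, 1], [2, 1, 0, 0, 2, 2]
src_b, dst_b = [1, 2, 0, 2, 1, 0], [0, 3, 2, 1, 3, 3]

# Unit weights stand in for A.edata['w'] and B.edata['w'];
# no pair repeats within a list, so fancy-index += is safe here.
adj = np.zeros((3, 4))
adj[src_a, dst_a] += 1.0
adj[src_b, dst_b] += 1.0

rows, cols = np.nonzero(adj)
print(rows)  # [0 0 0 1 1 1 2 2 2 2]
print(cols)  # [0 2 3 0 2 3 0 1 2 3]
```

Overlapping edges such as `(2, 1)` and `(0, 2)` appear in both inputs, so their summed weight is 2, but they contribute a single edge to the result, just as in the simple-graph sum above.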

Note that this function is differentiable:

```
>>> C.edata['w'].sum().backward()
>>> B.edata['w'].grad
tensor([1., 1., 1., 1., 1., 1.])
```
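The unit gradients fall out of linearity: for simple graphs, each input edge weight contributes exactly once to the sum of the output weights. A DGL-free `torch` sketch of the same computation (the variable names `wa` and `wb` are hypothetical stand-ins, not DGL internals):

```python
import torch

# Stand-ins for A.edata['w'] and B.edata['w'].
wa = torch.randn(6, requires_grad=True)
wb = torch.randn(6, requires_grad=True)

# The sum of C's edge weights equals the sum of all input edge
# weights, so the gradient w.r.t. each input weight is 1.
total = wa.sum() + wb.sum()
total.backward()
print(wb.grad)  # tensor([1., 1., 1., 1., 1., 1.])
```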