graph(data, ntype=None, etype=None, *, num_nodes=None, idtype=None, device=None, **deprecated_kwargs)
Create a graph and return it.
- Parameters
  - data (graph data) – The data for constructing a graph, which takes the form of (U, V). (U[i], V[i]) forms the edge with ID i in the graph. The allowed data formats are:
    - (Tensor, Tensor): Each tensor must be a 1D tensor containing node IDs. DGL calls this format "tuple of node-tensors". The tensors should have the same data type (int32 or int64) and the same device context (see the descriptions of idtype and device below).
    - (iterable[int], iterable[int]): Similar to the tuple of node-tensors format, but stores node IDs in two sequences (e.g. list, tuple, numpy.ndarray).
  - num_nodes (int, optional) – The number of nodes in the graph. If not given, this will be the largest node ID plus 1 from the data argument. If given and the value is no greater than the largest node ID from the data argument, DGL will raise an error.
  - idtype (int32 or int64, optional) – The data type for storing the structure-related graph information such as node and edge IDs. It should be a framework-specific data type object (e.g., torch.int32). If None (default), DGL infers the ID type from the data argument. See "Notes" for more details.
  - device (device context, optional) – The device of the returned graph, which should be a framework-specific device object (e.g., torch.device). If None (default), DGL uses the device of the tensors in the data argument. If data is not a tuple of node-tensors, the returned graph is on CPU. If the specified device differs from that of the provided tensors, it casts the given tensors to the specified device first.
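Both accepted data formats can be seen side by side in a minimal sketch (PyTorch backend assumed; the node IDs are arbitrary):

>>> import dgl
>>> import torch
>>> # Tuple of node-tensors: the graph shares storage with these tensors
>>> # when idtype and device are left unspecified (see "Notes")
>>> g1 = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))
>>> # Tuple of sequences (plain Python lists); the ID type defaults to int64
>>> g2 = dgl.graph(([0, 1, 2], [1, 2, 3]))
>>> g2.idtype
torch.int64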
- Returns
  The created graph.
- Return type
  DGLGraph
Notes

If the idtype argument is not given, then:
- in the case of the tuple of node-tensors format, DGL uses the data type of the given ID tensors;
- in the case of the tuple of sequences format, DGL uses int64.

If the specified idtype argument differs from the data type of the provided tensors, it casts the given tensors to the specified data type first.
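These rules can be checked directly (a short sketch assuming the PyTorch backend):

>>> # Tuple of node-tensors: the ID type follows the input tensors
>>> dgl.graph((torch.tensor([0, 1], dtype=torch.int32),
...            torch.tensor([1, 2], dtype=torch.int32))).idtype
torch.int32
>>> # Tuple of sequences: int64 by default
>>> dgl.graph(([0, 1], [1, 2])).idtype
torch.int64
>>> # An explicit idtype that differs from the inputs casts them first
>>> dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])), idtype=torch.int32).idtype
torch.int32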
The most efficient construction approach is to provide a tuple of node-tensors without specifying idtype and device, because in this case the returned graph shares its storage with the input node-tensors.
DGL internally maintains multiple copies of the graph structure in different sparse formats and chooses the most efficient one depending on the computation invoked. If memory usage becomes an issue in the case of large graphs, use dgl.DGLGraph.formats() to restrict the allowed formats.
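For instance (a sketch; 'csr' is one of the allowed format names, alongside 'coo' and 'csc'):

>>> g = dgl.graph((torch.tensor([2, 3, 4]), torch.tensor([1, 2, 3])))
>>> g = g.formats(['csr'])  # new graph that only keeps the CSR format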
Examples

The following examples use the PyTorch backend.
>>> import dgl
>>> import torch
Create a small three-edge graph.
>>> # Source nodes for edges (2, 1), (3, 2), (4, 3)
>>> src_ids = torch.tensor([2, 3, 4])
>>> # Destination nodes for edges (2, 1), (3, 2), (4, 3)
>>> dst_ids = torch.tensor([1, 2, 3])
>>> g = dgl.graph((src_ids, dst_ids))
Explicitly specify the number of nodes in the graph.
>>> g = dgl.graph((src_ids, dst_ids), num_nodes=100)
Create a graph on the first GPU with data type int32.
>>> g = dgl.graph((src_ids, dst_ids), idtype=torch.int32, device='cuda:0')
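As a quick sanity check on the constructed graph (a sketch reusing the CPU tensors from above; edges are returned in edge-ID order):

>>> g = dgl.graph((src_ids, dst_ids))
>>> g.num_nodes()  # largest node ID is 4, so 5 nodes by default
5
>>> g.edges()
(tensor([2, 3, 4]), tensor([1, 2, 3]))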