dgl.DGLGraph.to
- DGLGraph.to(device, **kwargs)[source]
Move the ndata, edata, and graph structure to the target device (CPU/GPU).
If the graph is already on the specified device, the function directly returns it. Otherwise, it returns a cloned graph on the specified device.
Note that node and edge feature data are not moved to the specified device until they are accessed or materialize_data() is called.
- Parameters:
  - device (framework-specific device context object) – The context to move data to (e.g., torch.device).
  - kwargs (keyword arguments) – Keyword arguments fed to the framework copy function.
- Returns:
The graph on the specified device.
- Return type:
DGLGraph
Examples
The following example uses PyTorch backend.
>>> import dgl
>>> import torch
>>> g = dgl.graph((torch.tensor([1, 0]), torch.tensor([1, 2])))
>>> g.ndata['h'] = torch.ones(3, 1)
>>> g.edata['h'] = torch.zeros(2, 2)
>>> g1 = g.to(torch.device('cuda:0'))
>>> print(g1.device)
device(type='cuda', index=0)
>>> print(g1.ndata['h'].device)
device(type='cuda', index=0)
>>> print(g1.nodes().device)
device(type='cuda', index=0)
The original graph is still on CPU.
>>> print(g.device)
device(type='cpu')
>>> print(g.ndata['h'].device)
device(type='cpu')
>>> print(g.nodes().device)
device(type='cpu')
Heterogeneous graphs are handled the same way.