dgl.DGLHeteroGraph.local_var¶

DGLHeteroGraph.local_var()[source]

Return a heterograph object that can be used in a local function scope.

The returned graph object shares the feature data and graph structure of this graph. However, any out-of-place mutation of the feature data (e.g., assigning a new tensor to a feature name) will not be reflected in this graph, which makes the returned graph easier to use inside a function scope.

If initializers have been set on this graph, the local graph object will use the same initializers for node features and edge features.

Returns: DGLHeteroGraph, the graph object that can be used as a local variable.

Notes

Internally, the returned graph shares the same feature tensors but constructs a new dictionary structure (a.k.a. Frame), so adding or removing feature tensors from the returned graph will not be reflected in the original graph. However, in-place operations do change the shared tensor values and are therefore reflected in the original graph. This function has little overhead when the number of feature tensors in this graph is small.
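The sharing behavior above can be sketched without DGL at all. The snippet below is a minimal analogy, not DGL's actual Frame implementation: plain Python dicts stand in for the Frame and lists stand in for feature tensors, showing why key additions and out-of-place assignments stay local while in-place mutation leaks through.

```python
# Stand-in for g.edata on the original graph: a dict mapping feature
# names to tensors (here, plain lists play the role of tensors).
original_frame = {'h': [0.0, 0.0, 0.0]}

# local_var() copies the *dictionary*, not the tensors it points to.
local_frame = dict(original_frame)

# Adding a feature to the local frame does not touch the original.
local_frame['new_feat'] = [1.0, 1.0, 1.0]
assert 'new_feat' not in original_frame

# Out-of-place assignment rebinds the key in the local frame only.
local_frame['h'] = [9.0, 9.0, 9.0]
assert original_frame['h'] == [0.0, 0.0, 0.0]

# In-place mutation goes through the shared object, so the original
# frame observes the change.
shared_view = dict(original_frame)
shared_view['h'][0] = 5.0
assert original_frame['h'][0] == 5.0
```

The same reasoning applies to tensors: `g.edata['h'] = torch.ones(...)` is an out-of-place rebind and stays local, whereas an in-place op such as `g.edata['h'] += 1` would modify the shared storage.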

Examples

The following example uses PyTorch backend.

Avoid accidentally overriding existing feature data. This is quite common when implementing a NN module:

>>> def foo(g):
...     g = g.local_var()
...     g.edata['h'] = torch.ones((g.number_of_edges(), 3))
...     return g.edata['h']
>>>
>>> g = dgl.bipartite([(0, 0), (1, 0), (1, 2)], 'user', 'plays', 'game')
>>> g.edata['h'] = torch.zeros((g.number_of_edges(), 3))
>>> newh = foo(g)        # get tensor of all ones
>>> print(g.edata['h'])  # still get tensor of all zeros


Automatically garbage collect locally-defined tensors without the need to manually pop the tensors.

>>> def foo(g):
...     g = g.local_var()
...     # This 'h' feature will stay local and be GCed when the function exits
...     g.edata['h'] = torch.ones((g.number_of_edges(), 3))
...     return g.edata['h']
>>>
>>> g = dgl.bipartite([(0, 0), (1, 0), (1, 2)], 'user', 'plays', 'game')
>>> h = foo(g)
>>> print('h' in g.edata)
False