dgl.DGLHeteroGraph.local_var

DGLHeteroGraph.local_var()

Return a graph object for use in a local function scope.

The returned graph object shares the feature data and graph structure of this graph. However, any out-of-place mutation of the feature data will not be reflected in this graph, making it easier to use inside a function scope (e.g., the forward computation of a model).
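A typical use is inside the forward function of a neural network module, where the graph only serves as a temporary workspace for message passing. The layer below is a minimal sketch; the class name SumLayer and the particular message/reduce functions are illustrative choices, not part of local_var itself.

>>> import torch.nn as nn
>>> import dgl.function as fn
>>> class SumLayer(nn.Module):  # hypothetical layer for illustration
...     def forward(self, graph, feat):
...         graph = graph.local_var()  # feature writes below stay local to this call
...         graph.ndata['h'] = feat
...         graph.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h_out'))
...         return graph.ndata['h_out']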

If set, the local graph object will use the same initializers for node and edge features.
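For example, a minimal sketch, assuming that assigning a feature to only a subset of nodes fills the remaining rows using the graph's node initializer (here dgl.init.zero_initializer):

>>> import dgl
>>> import torch
>>> g = dgl.graph((torch.tensor([0, 1, 1]), torch.tensor([0, 0, 2])))
>>> g.set_n_initializer(dgl.init.zero_initializer)
>>> def bar(g):
...     g = g.local_var()
...     # write a feature for node 0 only; the other rows are filled by the
...     # zero initializer inherited from the original graph
...     g.nodes[torch.tensor([0])].data['x'] = torch.ones(1, 2)
...     return g.ndata['x']
>>> newx = bar(g)  # expected: row 0 is ones, rows 1 and 2 are zeros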

Returns

The graph object for a local variable.

Return type

DGLGraph

Notes

In-place operations are reflected in the original graph. This function also has little overhead when the number of feature tensors in this graph is small.

Examples

The following example uses PyTorch backend.

>>> import dgl
>>> import torch

Create a function for computation on graphs.

>>> def foo(g):
...     g = g.local_var()
...     g.edata['h'] = torch.ones((g.num_edges(), 3))
...     g.edata['h2'] = torch.ones((g.num_edges(), 3))
...     return g.edata['h']

local_var prevents changes made inside the function from affecting the features of the original graph.

>>> g = dgl.graph((torch.tensor([0, 1, 1]), torch.tensor([0, 0, 2])))
>>> g.edata['h'] = torch.zeros((g.num_edges(), 3))
>>> newh = foo(g)
>>> print(g.edata['h'])  # still get tensor of all zeros
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
>>> 'h2' in g.edata      # the feature added inside the function is not found
False

In-place operations will still be reflected in the original graph.

>>> def foo(g):
...     g = g.local_var()
...     # in-place operation
...     g.edata['h'] += 1
...     return g.edata['h']
>>> g = dgl.graph((torch.tensor([0, 1, 1]), torch.tensor([0, 0, 2])))
>>> g.edata['h'] = torch.zeros((g.num_edges(), 1))
>>> newh = foo(g)
>>> print(g.edata['h'])  # the original feature has changed
tensor([[1.],
        [1.],
        [1.]])

See also

local_scope()
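
local_scope() provides the same isolation as a context manager; a minimal sketch:

>>> g = dgl.graph((torch.tensor([0, 1, 1]), torch.tensor([0, 0, 2])))
>>> g.edata['h'] = torch.zeros((g.num_edges(), 3))
>>> with g.local_scope():
...     g.edata['h'] = torch.ones((g.num_edges(), 3))
>>> print(g.edata['h'])  # out-of-place writes inside the scope are discarded
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])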