Enter a local scope context for this graph.

By entering a local scope, any out-place mutation to the feature data will not be reflected in the original graph, making the graph easier to use inside a function scope.

The local scope uses the same initializers for node features and edge features as the original graph.
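The behavior can be illustrated with a minimal, backend-free sketch. Note that `TinyGraph` and its `local_scope` below are hypothetical stand-ins for illustration only, not DGL's actual implementation:

```python
# Conceptual sketch (not DGL code): a local scope can be modeled as a
# context manager that shadows the feature dict with a shallow copy, so
# writes inside the scope never touch the original mapping.
from contextlib import contextmanager

class TinyGraph:
    """Hypothetical stand-in for a graph holding node features in `ndata`."""
    def __init__(self):
        self.ndata = {}

    @contextmanager
    def local_scope(self):
        original = self.ndata
        self.ndata = dict(original)   # shallow copy: new keys stay local
        try:
            yield self
        finally:
            self.ndata = original     # restore the original features on exit

g = TinyGraph()
g.ndata['h'] = [0, 0, 0]
with g.local_scope():
    g.ndata['h'] = [1, 1, 1]      # out-place assignment, visible only locally
    g.ndata['xxx'] = [2, 2, 2]    # new local feature
print(g.ndata['h'])       # [0, 0, 0] -- original untouched
print('xxx' in g.ndata)   # False -- local feature discarded
```

Note that only out-place assignments are protected this way; in-place mutation of a shared tensor would still be visible outside the scope.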

The following examples use the PyTorch backend.

This helps avoid accidentally overwriting existing feature data, a common pitfall when implementing an NN module:

>>> def foo(g):
...     with g.local_scope():
...         g.ndata['h'] = torch.ones((g.number_of_nodes(), 3))
...         return g.ndata['h']
>>> g = ...  # some graph
>>> g.ndata['h'] = torch.zeros((g.number_of_nodes(), 3))
>>> newh = foo(g)  # a tensor of all ones
>>> print(g.ndata['h'])  # still a tensor of all zeros

It also automatically garbage-collects locally defined tensors, so there is no need to pop them manually:

>>> def foo(g):
...     with g.local_scope():
...         # This 'xxx' feature stays local and is garbage collected when the function exits
...         g.ndata['xxx'] = torch.ones((g.number_of_nodes(), 3))
...         return g.ndata['xxx']
>>> g = ...  # some graph
>>> xxx = foo(g)
>>> print('xxx' in g.ndata)  # False

See also