class dgl.graphbolt.GPUCachedFeature(fallback_feature: dgl.graphbolt.feature_store.Feature, cache_size: int)

Bases: dgl.graphbolt.feature_store.Feature

GPU cached feature wrapping a fallback feature.

Places the GPU cache on torch.cuda.current_device().

Parameters

  • fallback_feature (Feature) – The fallback feature.

  • cache_size (int) – The capacity of the GPU cache, i.e., the number of feature rows to store (see the sizing sketch below).
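
A rough way to choose cache_size is to divide a GPU memory budget by the per-row footprint of the feature. A minimal sizing sketch; the 4 GiB budget and the 256-dimensional float32 feature are illustrative assumptions, not values from the API:

>>> import torch
>>> feat_dim = 256                    # assumed feature width
>>> bytes_per_row = feat_dim * 4      # float32: 4 bytes per element
>>> budget = 4 * 1024**3              # assumed 4 GiB reserved for the cache
>>> budget // bytes_per_row           # rows that fit, i.e. cache_size
4194304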


Examples

>>> import torch
>>> from dgl import graphbolt as gb
>>> torch_feat = torch.arange(10).reshape(2, -1).to("cuda")
>>> cache_size = 5
>>> fallback_feature = gb.TorchBasedFeature(torch_feat)
>>> feature = gb.GPUCachedFeature(fallback_feature, cache_size)
>>> feature.read()
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]], device='cuda:0')
>>> feature.read(torch.tensor([0]).to("cuda"))
tensor([[0, 1, 2, 3, 4]], device='cuda:0')
>>> feature.update(torch.tensor([[1 for _ in range(5)]]).to("cuda"),
...                torch.tensor([1]).to("cuda"))
>>> feature.read(torch.tensor([0, 1]).to("cuda"))
tensor([[0, 1, 2, 3, 4],
        [1, 1, 1, 1, 1]], device='cuda:0')
>>> feature.size()
torch.Size([5])
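
In practice the fallback feature often lives in host memory rather than on the GPU, with the cache absorbing the frequently accessed rows. A sketch under that assumption; the shapes are hypothetical, and it relies on TorchBasedFeature serving CUDA-indexed reads from a pinned host tensor:

>>> host_feat = torch.randn(1000000, 128).pin_memory()
>>> fallback = gb.TorchBasedFeature(host_feat)
>>> feature = gb.GPUCachedFeature(fallback, cache_size=100000)
>>> ids = torch.randint(0, 1000000, (4096,), device="cuda")
>>> feature.read(ids).device.type    # hits from the cache, misses from host
'cuda'
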
read(ids: Optional[torch.Tensor] = None)

Read the feature by index.

The returned tensor is always in GPU memory, regardless of whether the fallback feature resides in memory or on disk.


Parameters

ids (torch.Tensor, optional) – The indices of the feature to read. If specified, only the rows at those indices are read. If None, the entire feature is returned.


Returns

The read feature.

Return type

torch.Tensor

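Both call forms can be exercised side by side; a minimal sketch, assuming ids lives on the GPU as in the class-level example:

>>> feat = gb.GPUCachedFeature(
...     gb.TorchBasedFeature(torch.arange(12.).reshape(4, 3).to("cuda")),
...     cache_size=2)
>>> feat.read().shape                          # ids=None reads everything
torch.Size([4, 3])
>>> feat.read(torch.tensor([0, 2], device="cuda")).shape
torch.Size([2, 3])
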
size()

Get the size of the feature.


Returns

The size of the feature, excluding the first (row) dimension.

Return type

torch.Size

update(value: torch.Tensor, ids: Optional[torch.Tensor] = None)

Update the feature.

Parameters

  • value (torch.Tensor) – The updated value of the feature.

  • ids (torch.Tensor, optional) – The indices of the feature to update. If specified, only the specified indices are updated: row ids[i] of the feature is set to value[i], so ids and value must have the same length. If None, the entire feature is updated. See the sketch below.
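
A minimal sketch of the per-row semantics above, with hypothetical values; row ids[i] receives value[i]:

>>> feat = gb.GPUCachedFeature(
...     gb.TorchBasedFeature(torch.zeros(3, 2).to("cuda")), cache_size=3)
>>> feat.update(torch.ones(2, 2).to("cuda"), torch.tensor([0, 2]).to("cuda"))
>>> feat.read()
tensor([[1., 1.],
        [0., 0.],
        [1., 1.]], device='cuda:0')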