class dgl.graphbolt.GPUCachedFeature(fallback_feature: Feature, max_cache_size_in_bytes: int)[source]

Bases: Feature

GPU cached feature wrapping a fallback feature.

Places the GPU cache on torch.cuda.current_device().

Parameters:

  • fallback_feature (Feature) – The fallback feature.

  • max_cache_size_in_bytes (int) – The capacity of the GPU cache in bytes.
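Because the capacity is given in bytes rather than rows, a byte budget for caching a given number of feature rows can be derived from the row width and the dtype's item size. A minimal pure-Python sketch (the helper name `cache_bytes` and the 4-byte float32 item size are illustrative assumptions, not part of the API):

```python
def cache_bytes(num_rows: int, num_cols: int, itemsize: int = 4) -> int:
    """Bytes needed to cache num_rows rows of num_cols elements each.

    itemsize defaults to 4 (float32); pass tensor.element_size() for
    other dtypes.
    """
    return num_rows * num_cols * itemsize

# e.g. a budget for caching 1000 rows of a 128-dim float32 feature:
budget = cache_bytes(1000, 128)
```

The resulting value would be passed as max_cache_size_in_bytes when constructing the feature.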


>>> import torch
>>> from dgl import graphbolt as gb
>>> torch_feat = torch.arange(10).reshape(2, -1).to("cuda")
>>> cache_size = 5
>>> fallback_feature = gb.TorchBasedFeature(torch_feat)
>>> feature = gb.GPUCachedFeature(fallback_feature, cache_size)
>>> feature.read()
tensor([[0, 1, 2, 3, 4],
        [5, 6, 7, 8, 9]], device='cuda:0')
>>> feature.read(torch.tensor([0]).to("cuda"))
tensor([[0, 1, 2, 3, 4]], device='cuda:0')
>>> feature.update(torch.tensor([[1 for _ in range(5)]]).to("cuda"),
...                torch.tensor([1]).to("cuda"))
>>> feature.read(torch.tensor([0, 1]).to("cuda"))
tensor([[0, 1, 2, 3, 4],
        [1, 1, 1, 1, 1]], device='cuda:0')
>>> feature.size()
torch.Size([5])
read(ids: Tensor | None = None)[source]

Read the feature by index.

The returned tensor is always in GPU memory, no matter whether the fallback feature is in memory or on disk.


Parameters:

  ids (torch.Tensor, optional) – The indices of the feature to read. If specified, only the specified indices of the feature are read. If None, the entire feature is returned.

Returns:

  The read feature.

Return type:

  torch.Tensor

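The read path can be pictured as a cache-with-fallback lookup: rows already resident in the GPU cache are served directly, and misses are fetched from the fallback feature (and inserted into the cache while capacity remains). A pure-Python sketch of that behavior; the class name, the row-count capacity, and the no-eviction policy are illustrative assumptions, not the real CUDA implementation:

```python
class CachedReadSketch:
    """Illustrative row cache over a fallback feature (a list of rows)."""

    def __init__(self, fallback, max_rows):
        self.fallback = fallback   # stands in for the fallback Feature
        self.max_rows = max_rows   # stands in for the byte capacity
        self.cache = {}            # row id -> cached row

    def read(self, ids=None):
        if ids is None:
            return list(self.fallback)  # full read: return everything
        out = []
        for i in ids:
            if i in self.cache:                      # hit: serve from cache
                out.append(self.cache[i])
            else:                                    # miss: go to fallback
                if len(self.cache) < self.max_rows:  # cache while room remains
                    self.cache[i] = self.fallback[i]
                out.append(self.fallback[i])
        return out

feat = CachedReadSketch([[0, 1], [2, 3], [4, 5]], max_rows=2)
feat.read([0, 2])  # both rows miss and enter the cache
feat.read([0])     # row 0 is now served from the cache
```

Whatever the access pattern, the result always matches what the fallback feature would return; the cache only changes where the rows are read from.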

size()[source]

Get the size of the feature.

Returns:

  The size of the feature.

Return type:

  torch.Size

update(value: Tensor, ids: Tensor | None = None)[source]

Update the feature.

Parameters:

  • value (torch.Tensor) – The updated value of the feature.

  • ids (torch.Tensor, optional) – The indices of the feature to update. If specified, only the specified indices of the feature will be updated: row ids[i] of the feature is updated to value[i], so ids and value must have the same length. If None, the entire feature will be updated.
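An update over a cached feature behaves like a write-through: the fallback storage receives the new values, and any cached copies of the touched rows must be refreshed so later reads do not return stale data. A pure-Python sketch of that invariant (the class name and list-of-rows storage are illustrative assumptions, not the real implementation):

```python
class WriteThroughSketch:
    """Illustrative write-through update over a fallback list of rows."""

    def __init__(self, fallback):
        self.fallback = list(fallback)
        self.cache = {}  # row id -> cached row

    def update(self, value, ids=None):
        if ids is None:
            self.fallback = list(value)  # replace the entire feature
            self.cache.clear()           # drop all now-stale cached rows
        else:
            # ids and value must have the same length
            assert len(value) == len(ids)
            for row, i in zip(value, ids):
                self.fallback[i] = row   # write through to the fallback
                if i in self.cache:
                    self.cache[i] = row  # refresh the stale cached copy

feat = WriteThroughSketch([[0, 1], [2, 3]])
feat.cache[1] = [2, 3]          # pretend row 1 was cached by a read
feat.update([[9, 9]], ids=[1])  # row 1 and its cached copy become [9, 9]
```

The key point the sketch illustrates: after update, cache and fallback agree, so subsequent reads return the new values regardless of whether they hit the cache.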