OnDiskDataset for Homogeneous Graph


This tutorial shows how to create an OnDiskDataset for a homogeneous graph that can be used in the GraphBolt framework.

By the end of this tutorial, you will be able to:

  • organize graph structure data.

  • organize feature data.

  • organize training/validation/test sets for specific tasks.

To create an OnDiskDataset object, you need to organize all the data, including graph structure, feature data, and tasks, into a directory. The directory should contain a metadata.yaml file that describes the metadata of the dataset.
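
For reference, the directory assembled in this tutorial ends up looking roughly like the sketch below (the file names are the ones chosen in later steps, and the preprocessed contents GraphBolt generates on first load are omitted):

ondisk_dataset_homograph/
    metadata.yaml
    edges.csv
    node-feat-0.npy
    node-feat-1.pt
    edge-feat-0.npy
    edge-feat-1.pt
    nc-train-ids.npy
    nc-train-labels.pt
    ...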

Now let’s generate the various data step by step, organize them together, and finally instantiate OnDiskDataset.

Install DGL package

[1]:
# Install required packages.
import os
import torch
import numpy as np
os.environ['TORCH'] = torch.__version__
os.environ['DGLBACKEND'] = "pytorch"

# Install the CPU version.
device = torch.device("cpu")
!pip install --pre dgl -f https://data.dgl.ai/wheels-test/repo.html

try:
    import dgl
    import dgl.graphbolt as gb
    installed = True
except ImportError as error:
    installed = False
    print(error)
print("DGL installed!" if installed else "DGL not found!")
Looking in links: https://data.dgl.ai/wheels-test/repo.html
Requirement already satisfied: dgl in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages/dgl-2.0.0-py3.7-linux-x86_64.egg (2.0.0)
Requirement already satisfied: numpy>=1.14.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (1.21.6)
Requirement already satisfied: scipy>=1.1.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (1.7.3)
Requirement already satisfied: networkx>=2.1 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (2.6.3)
Requirement already satisfied: requests>=2.19.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (2.31.0)
Requirement already satisfied: tqdm in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (4.66.1)
Requirement already satisfied: psutil>=5.8.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (5.9.7)
Requirement already satisfied: torchdata>=0.5.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from dgl) (0.5.1)
Requirement already satisfied: charset-normalizer<4,>=2 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from requests>=2.19.0->dgl) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from requests>=2.19.0->dgl) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from requests>=2.19.0->dgl) (2.0.7)
Requirement already satisfied: certifi>=2017.4.17 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from requests>=2.19.0->dgl) (2023.11.17)
Requirement already satisfied: portalocker>=2.0.0 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torchdata>=0.5.0->dgl) (2.7.0)
Requirement already satisfied: torch==1.13.1 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torchdata>=0.5.0->dgl) (1.13.1)
Requirement already satisfied: typing-extensions in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torch==1.13.1->torchdata>=0.5.0->dgl) (4.7.1)
Requirement already satisfied: nvidia-cuda-runtime-cu11==11.7.99 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torch==1.13.1->torchdata>=0.5.0->dgl) (11.7.99)
Requirement already satisfied: nvidia-cudnn-cu11==8.5.0.96 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torch==1.13.1->torchdata>=0.5.0->dgl) (8.5.0.96)
Requirement already satisfied: nvidia-cublas-cu11==11.10.3.66 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torch==1.13.1->torchdata>=0.5.0->dgl) (11.10.3.66)
Requirement already satisfied: nvidia-cuda-nvrtc-cu11==11.7.99 in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from torch==1.13.1->torchdata>=0.5.0->dgl) (11.7.99)
Requirement already satisfied: setuptools in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from nvidia-cublas-cu11==11.10.3.66->torch==1.13.1->torchdata>=0.5.0->dgl) (68.0.0)
Requirement already satisfied: wheel in /home/ubuntu/prod-doc/readthedocs.org/user_builds/dgl/envs/2.0.x/lib/python3.7/site-packages (from nvidia-cublas-cu11==11.10.3.66->torch==1.13.1->torchdata>=0.5.0->dgl) (0.42.0)
WARNING:root:The OGB package is out of date. Your version is 1.2.4, while the latest version is 1.3.6.
DGL installed!

Data preparation

To demonstrate how to organize the various data, let’s first create a base directory.

[2]:
base_dir = './ondisk_dataset_homograph'
os.makedirs(base_dir, exist_ok=True)
print(f"Created base directory: {base_dir}")
Created base directory: ./ondisk_dataset_homograph

Generate graph structure data

For a homogeneous graph, we just need to save the edges (namely node pairs) into a CSV file.

Note: when saving to file, do not save the index or header.

[3]:
import numpy as np
import pandas as pd
num_nodes = 1000
num_edges = 10 * num_nodes
edges_path = os.path.join(base_dir, "edges.csv")
edges = np.random.randint(0, num_nodes, size=(num_edges, 2))

print(f"Part of edges: {edges[:5, :]}")

df = pd.DataFrame(edges)
df.to_csv(edges_path, index=False, header=False)

print(f"Edges are saved into {edges_path}")
Part of edges: [[511 276]
 [768 983]
 [791 649]
 [138 304]
 [986  99]]
Edges are saved into ./ondisk_dataset_homograph/edges.csv

Generate feature data for the graph

For feature data, numpy arrays and torch tensors are supported for now.

[4]:
# Generate node feature in numpy array.
node_feat_0_path = os.path.join(base_dir, "node-feat-0.npy")
node_feat_0 = np.random.rand(num_nodes, 5)
print(f"Part of node feature [feat_0]: {node_feat_0[:3, :]}")
np.save(node_feat_0_path, node_feat_0)
print(f"Node feature [feat_0] is saved to {node_feat_0_path}\n")

# Generate another node feature in torch tensor
node_feat_1_path = os.path.join(base_dir, "node-feat-1.pt")
node_feat_1 = torch.rand(num_nodes, 5)
print(f"Part of node feature [feat_1]: {node_feat_1[:3, :]}")
torch.save(node_feat_1, node_feat_1_path)
print(f"Node feature [feat_1] is saved to {node_feat_1_path}\n")

# Generate edge feature in numpy array.
edge_feat_0_path = os.path.join(base_dir, "edge-feat-0.npy")
edge_feat_0 = np.random.rand(num_edges, 5)
print(f"Part of edge feature [feat_0]: {edge_feat_0[:3, :]}")
np.save(edge_feat_0_path, edge_feat_0)
print(f"Edge feature [feat_0] is saved to {edge_feat_0_path}\n")

# Generate another edge feature in torch tensor
edge_feat_1_path = os.path.join(base_dir, "edge-feat-1.pt")
edge_feat_1 = torch.rand(num_edges, 5)
print(f"Part of edge feature [feat_1]: {edge_feat_1[:3, :]}")
torch.save(edge_feat_1, edge_feat_1_path)
print(f"Edge feature [feat_1] is saved to {edge_feat_1_path}\n")

Part of node feature [feat_0]: [[0.34264952 0.54184146 0.62416396 0.34306115 0.63196642]
 [0.97958698 0.91290976 0.4286828  0.07011558 0.28538392]
 [0.37035088 0.66730786 0.96222583 0.60748833 0.44128676]]
Node feature [feat_0] is saved to ./ondisk_dataset_homograph/node-feat-0.npy

Part of node feature [feat_1]: tensor([[0.8437, 0.5860, 0.1824, 0.9904, 0.6559],
        [0.8956, 0.9627, 0.9531, 0.6877, 0.7792],
        [0.7870, 0.1097, 0.6645, 0.9247, 0.9039]])
Node feature [feat_1] is saved to ./ondisk_dataset_homograph/node-feat-1.pt

Part of edge feature [feat_0]: [[0.59179833 0.86057781 0.63702875 0.19703896 0.43919325]
 [0.46155835 0.11792761 0.15045723 0.02588569 0.33156533]
 [0.09723602 0.35894941 0.63994803 0.319833   0.7706895 ]]
Edge feature [feat_0] is saved to ./ondisk_dataset_homograph/edge-feat-0.npy

Part of edge feature [feat_1]: tensor([[0.1005, 0.6618, 0.0024, 0.1507, 0.6464],
        [0.0847, 0.5830, 0.0130, 0.1212, 0.7445],
        [0.0751, 0.6894, 0.5892, 0.7233, 0.0931]])
Edge feature [feat_1] is saved to ./ondisk_dataset_homograph/edge-feat-1.pt

Generate tasks

OnDiskDataset supports multiple tasks. For each task, we need to prepare its own training/validation/test sets, as such sets usually vary across tasks. In this tutorial, let’s create a Node Classification task and a Link Prediction task.

Node Classification Task

For the node classification task, we need node IDs and corresponding labels for each of the training/validation/test sets. Like feature data, numpy arrays and torch tensors are supported for these sets.

[5]:
num_trains = int(num_nodes * 0.6)
num_vals = int(num_nodes * 0.2)
num_tests = num_nodes - num_trains - num_vals

ids = np.arange(num_nodes)
np.random.shuffle(ids)

nc_train_ids_path = os.path.join(base_dir, "nc-train-ids.npy")
nc_train_ids = ids[:num_trains]
print(f"Part of train ids for node classification: {nc_train_ids[:3]}")
np.save(nc_train_ids_path, nc_train_ids)
print(f"NC train ids are saved to {nc_train_ids_path}\n")

nc_train_labels_path = os.path.join(base_dir, "nc-train-labels.pt")
nc_train_labels = torch.randint(0, 10, (num_trains,))
print(f"Part of train labels for node classification: {nc_train_labels[:3]}")
torch.save(nc_train_labels, nc_train_labels_path)
print(f"NC train labels are saved to {nc_train_labels_path}\n")

nc_val_ids_path = os.path.join(base_dir, "nc-val-ids.npy")
nc_val_ids = ids[num_trains:num_trains+num_vals]
print(f"Part of val ids for node classification: {nc_val_ids[:3]}")
np.save(nc_val_ids_path, nc_val_ids)
print(f"NC val ids are saved to {nc_val_ids_path}\n")

nc_val_labels_path = os.path.join(base_dir, "nc-val-labels.pt")
nc_val_labels = torch.randint(0, 10, (num_vals,))
print(f"Part of val labels for node classification: {nc_val_labels[:3]}")
torch.save(nc_val_labels, nc_val_labels_path)
print(f"NC val labels are saved to {nc_val_labels_path}\n")

nc_test_ids_path = os.path.join(base_dir, "nc-test-ids.npy")
nc_test_ids = ids[-num_tests:]
print(f"Part of test ids for node classification: {nc_test_ids[:3]}")
np.save(nc_test_ids_path, nc_test_ids)
print(f"NC test ids are saved to {nc_test_ids_path}\n")

nc_test_labels_path = os.path.join(base_dir, "nc-test-labels.pt")
nc_test_labels = torch.randint(0, 10, (num_tests,))
print(f"Part of test labels for node classification: {nc_test_labels[:3]}")
torch.save(nc_test_labels, nc_test_labels_path)
print(f"NC test labels are saved to {nc_test_labels_path}\n")
Part of train ids for node classification: [204 236 219]
NC train ids are saved to ./ondisk_dataset_homograph/nc-train-ids.npy

Part of train labels for node classification: tensor([5, 3, 1])
NC train labels are saved to ./ondisk_dataset_homograph/nc-train-labels.pt

Part of val ids for node classification: [862 836 682]
NC val ids are saved to ./ondisk_dataset_homograph/nc-val-ids.npy

Part of val labels for node classification: tensor([6, 4, 1])
NC val labels are saved to ./ondisk_dataset_homograph/nc-val-labels.pt

Part of test ids for node classification: [483 834 198]
NC test ids are saved to ./ondisk_dataset_homograph/nc-test-ids.npy

Part of test labels for node classification: tensor([9, 0, 3])
NC test labels are saved to ./ondisk_dataset_homograph/nc-test-labels.pt
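
Link Prediction Task

For the link prediction task, we need node pairs for each training/validation/test set; the validation and test sets additionally carry negative destination nodes for evaluation. Like feature data, numpy arrays and torch tensors are supported for these sets. The cell below is a minimal sketch that generates sets matching the metadata.yaml entries used later in this tutorial; the split ratios mirror the node classification task, and 10 negative destinations per node pair is an arbitrary choice.

[6]:
num_trains = int(num_edges * 0.6)
num_vals = int(num_edges * 0.2)
num_tests = num_edges - num_trains - num_vals

lp_train_node_pairs_path = os.path.join(base_dir, "lp-train-node-pairs.npy")
lp_train_node_pairs = edges[:num_trains, :]
print(f"Part of train node pairs for link prediction: {lp_train_node_pairs[:3]}")
np.save(lp_train_node_pairs_path, lp_train_node_pairs)
print(f"LP train node pairs are saved to {lp_train_node_pairs_path}\n")

lp_val_node_pairs_path = os.path.join(base_dir, "lp-val-node-pairs.npy")
lp_val_node_pairs = edges[num_trains:num_trains + num_vals, :]
print(f"Part of val node pairs for link prediction: {lp_val_node_pairs[:3]}")
np.save(lp_val_node_pairs_path, lp_val_node_pairs)
print(f"LP val node pairs are saved to {lp_val_node_pairs_path}\n")

# 10 negative destinations per validation pair (an assumed, tunable number).
lp_val_neg_dsts_path = os.path.join(base_dir, "lp-val-neg-dsts.pt")
lp_val_neg_dsts = torch.randint(0, num_nodes, (num_vals, 10))
print(f"Part of val negative dsts for link prediction: {lp_val_neg_dsts[:3]}")
torch.save(lp_val_neg_dsts, lp_val_neg_dsts_path)
print(f"LP val negative dsts are saved to {lp_val_neg_dsts_path}\n")

lp_test_node_pairs_path = os.path.join(base_dir, "lp-test-node-pairs.npy")
lp_test_node_pairs = edges[-num_tests:, :]
print(f"Part of test node pairs for link prediction: {lp_test_node_pairs[:3]}")
np.save(lp_test_node_pairs_path, lp_test_node_pairs)
print(f"LP test node pairs are saved to {lp_test_node_pairs_path}\n")

# Same assumed number of negatives for the test set.
lp_test_neg_dsts_path = os.path.join(base_dir, "lp-test-neg-dsts.pt")
lp_test_neg_dsts = torch.randint(0, num_nodes, (num_tests, 10))
print(f"Part of test negative dsts for link prediction: {lp_test_neg_dsts[:3]}")
torch.save(lp_test_neg_dsts, lp_test_neg_dsts_path)
print(f"LP test negative dsts are saved to {lp_test_neg_dsts_path}\n")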

Organize Data into YAML File

Now we need to create a metadata.yaml file which contains the paths and data types of the graph structure, feature data, and training/validation/test sets.

Notes:

  • All paths should be relative to metadata.yaml.

  • The in_memory field is optional and not specified in the example below. It indicates whether to load the data into memory or mmap it; the default is True (see the snippet after these notes).

Please refer to the YAML specification for more details.
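
For example, a feature entry that memory-maps its file instead of loading it into memory could look like the following sketch (shown for a numpy-format feature; placement of the field per the YAML specification):

feature_data:
  - domain: node
    name: feat_0
    format: numpy
    in_memory: false
    path: node-feat-0.npy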

[7]:
yaml_content = f"""
    dataset_name: homogeneous_graph_nc_lp
    graph:
      nodes:
        - num: {num_nodes}
      edges:
        - format: csv
          path: {os.path.basename(edges_path)}
    feature_data:
      - domain: node
        name: feat_0
        format: numpy
        path: {os.path.basename(node_feat_0_path)}
      - domain: node
        name: feat_1
        format: torch
        path: {os.path.basename(node_feat_1_path)}
      - domain: edge
        name: feat_0
        format: numpy
        path: {os.path.basename(edge_feat_0_path)}
      - domain: edge
        name: feat_1
        format: torch
        path: {os.path.basename(edge_feat_1_path)}
    tasks:
      - name: node_classification
        num_classes: 10
        train_set:
          - data:
              - name: seed_nodes
                format: numpy
                path: {os.path.basename(nc_train_ids_path)}
              - name: labels
                format: torch
                path: {os.path.basename(nc_train_labels_path)}
        validation_set:
          - data:
              - name: seed_nodes
                format: numpy
                path: {os.path.basename(nc_val_ids_path)}
              - name: labels
                format: torch
                path: {os.path.basename(nc_val_labels_path)}
        test_set:
          - data:
              - name: seed_nodes
                format: numpy
                path: {os.path.basename(nc_test_ids_path)}
              - name: labels
                format: torch
                path: {os.path.basename(nc_test_labels_path)}
      - name: link_prediction
        num_classes: 10
        train_set:
          - data:
              - name: node_pairs
                format: numpy
                path: {os.path.basename(lp_train_node_pairs_path)}
        validation_set:
          - data:
              - name: node_pairs
                format: numpy
                path: {os.path.basename(lp_val_node_pairs_path)}
              - name: negative_dsts
                format: torch
                path: {os.path.basename(lp_val_neg_dsts_path)}
        test_set:
          - data:
              - name: node_pairs
                format: numpy
                path: {os.path.basename(lp_test_node_pairs_path)}
              - name: negative_dsts
                format: torch
                path: {os.path.basename(lp_test_neg_dsts_path)}
"""
metadata_path = os.path.join(base_dir, "metadata.yaml")
with open(metadata_path, "w") as f:
  f.write(yaml_content)

Instantiate OnDiskDataset

Now we’re ready to load the dataset via dgl.graphbolt.OnDiskDataset. When instantiating, we just pass in the base directory where the metadata.yaml file lies.

During the first instantiation, GraphBolt preprocesses the raw data, for example constructing a FusedCSCSamplingGraph from the edges. All data, including the graph, feature data, and training/validation/test sets, are put into the preprocessed directory after preprocessing. Any subsequent dataset loading will skip the preprocessing stage.

After preprocessing, load() must be called explicitly in order to load the graph, feature data, and tasks.

[8]:
dataset = gb.OnDiskDataset(base_dir).load()
graph = dataset.graph
print(f"Loaded graph: {graph}\n")

feature = dataset.feature
print(f"Loaded feature store: {feature}\n")

tasks = dataset.tasks
nc_task = tasks[0]
print(f"Loaded node classification task: {nc_task}\n")
lp_task = tasks[1]
print(f"Loaded link prediction task: {lp_task}\n")
Start to preprocess the on-disk dataset.
Finish preprocessing the on-disk dataset.
Loaded graph: FusedCSCSamplingGraph(csc_indptr=tensor([    0,     7,    24,  ...,  9984,  9992, 10000]),
                      indices=tensor([141, 231,  30,  ..., 433, 854, 447]),
                      num_nodes=1000, num_edges=10000, node_attributes={}, edge_attributes={})

Loaded feature store: TorchBasedFeatureStore{(<OnDiskFeatureDataDomain.NODE: 'node'>, None, 'feat_0'): TorchBasedFeature(feature=tensor([[0.3426, 0.5418, 0.6242, 0.3431, 0.6320],
                                                        [0.9796, 0.9129, 0.4287, 0.0701, 0.2854],
                                                        [0.3704, 0.6673, 0.9622, 0.6075, 0.4413],
                                                        ...,
                                                        [0.0552, 0.0194, 0.2035, 0.1091, 0.9077],
                                                        [0.1454, 0.7640, 0.2041, 0.4569, 0.1337],
                                                        [0.1896, 0.4865, 0.2411, 0.2699, 0.5579]], dtype=torch.float64),
                                        metadata={},
                      ), (<OnDiskFeatureDataDomain.NODE: 'node'>, None, 'feat_1'): TorchBasedFeature(feature=tensor([[0.8437, 0.5860, 0.1824, 0.9904, 0.6559],
                                                        [0.8956, 0.9627, 0.9531, 0.6877, 0.7792],
                                                        [0.7870, 0.1097, 0.6645, 0.9247, 0.9039],
                                                        ...,
                                                        [0.3470, 0.1221, 0.0923, 0.7490, 0.7659],
                                                        [0.7873, 0.1553, 0.0451, 0.4700, 0.3233],
                                                        [0.0792, 0.3056, 0.2461, 0.9549, 0.4493]]),
                                        metadata={},
                      ), (<OnDiskFeatureDataDomain.EDGE: 'edge'>, None, 'feat_0'): TorchBasedFeature(feature=tensor([[0.5918, 0.8606, 0.6370, 0.1970, 0.4392],
                                                        [0.4616, 0.1179, 0.1505, 0.0259, 0.3316],
                                                        [0.0972, 0.3589, 0.6399, 0.3198, 0.7707],
                                                        ...,
                                                        [0.8286, 0.0435, 0.1113, 0.5498, 0.5045],
                                                        [0.1844, 0.2182, 0.1859, 0.0660, 0.3687],
                                                        [0.4950, 0.5105, 0.2440, 0.9353, 0.6116]], dtype=torch.float64),
                                        metadata={},
                      ), (<OnDiskFeatureDataDomain.EDGE: 'edge'>, None, 'feat_1'): TorchBasedFeature(feature=tensor([[0.1005, 0.6618, 0.0024, 0.1507, 0.6464],
                                                        [0.0847, 0.5830, 0.0130, 0.1212, 0.7445],
                                                        [0.0751, 0.6894, 0.5892, 0.7233, 0.0931],
                                                        ...,
                                                        [0.8091, 0.3235, 0.3115, 0.4452, 0.3816],
                                                        [0.9405, 0.9265, 0.9447, 0.5083, 0.1025],
                                                        [0.3032, 0.4758, 0.3354, 0.7372, 0.2092]]),
                                        metadata={},
                      )}

Loaded node classification task: OnDiskTask(validation_set=ItemSet(items=(tensor([862, 836, 682, 977, 338, 309, 793,  74, 344, 621, 790, 337,  14, 590,
                                                765, 215,  85, 896, 407, 360, 850, 493, 324, 430, 603, 687, 106, 140,
                                                868, 642, 202, 432, 116,  33, 465, 880,  18, 989, 542, 350,  51, 269,
                                                518, 442,  78, 721, 781, 297, 898, 217, 943, 391, 646,  21, 547, 814,
                                                563, 805, 947, 150, 602, 494, 697, 453, 267, 777, 822, 544,  20,  44,
                                                794, 888, 477, 690, 813, 625, 331, 760, 713, 312, 618, 613, 720, 158,
                                                247,  11, 644,   9, 901, 294, 624, 553, 741, 187, 102, 492, 964, 948,
                                                551, 389, 849, 371,  89, 357,  63, 633, 415, 280, 899, 851, 366, 568,
                                                218, 866, 488,   7, 472, 258, 110,  54, 374,  91, 316, 356,  22, 804,
                                                524, 968, 883,  62, 658, 506, 138, 886, 193, 727, 681, 677, 911, 534,
                                                448, 627, 776, 417, 736,  58, 744, 594, 361,  97, 922, 473, 872, 952,
                                                860, 503, 652, 293, 255,  26, 398, 412, 499, 129, 955, 678, 747,  67,
                                                685, 229,  75, 377, 873, 751, 702, 457, 935, 485, 513, 153, 798, 117,
                                                495, 177, 787, 464, 811, 160, 230, 208, 287, 757, 560, 200, 118, 754,
                                                587, 303, 703, 671]), tensor([6, 4, 1, 5, 6, 3, 1, 1, 3, 4, 5, 4, 9, 9, 6, 8, 3, 9, 9, 0, 0, 2, 4, 8,
                                                2, 1, 8, 6, 1, 6, 2, 9, 5, 7, 5, 2, 4, 7, 8, 1, 8, 3, 1, 8, 5, 3, 3, 7,
                                                2, 6, 6, 3, 0, 7, 8, 4, 2, 9, 7, 8, 6, 1, 5, 9, 5, 9, 2, 9, 6, 1, 2, 7,
                                                3, 0, 2, 2, 8, 0, 6, 2, 7, 6, 4, 6, 2, 3, 2, 1, 0, 5, 3, 9, 9, 0, 4, 8,
                                                3, 1, 3, 0, 4, 2, 8, 3, 9, 2, 3, 9, 2, 0, 4, 7, 2, 8, 2, 1, 4, 6, 2, 7,
                                                2, 6, 7, 3, 6, 1, 6, 8, 8, 1, 0, 6, 7, 1, 2, 6, 3, 6, 7, 7, 1, 3, 2, 8,
                                                5, 9, 9, 6, 7, 9, 2, 0, 4, 8, 6, 6, 8, 8, 9, 6, 6, 5, 9, 4, 8, 9, 2, 8,
                                                2, 4, 5, 8, 5, 5, 2, 7, 3, 0, 8, 5, 3, 1, 0, 8, 7, 9, 6, 9, 7, 3, 7, 1,
                                                7, 7, 1, 6, 3, 7, 3, 3])),
                                  names=('seed_nodes', 'labels'),
                          ),
           train_set=ItemSet(items=(tensor([204, 236, 219,  98, 406, 526, 598, 620, 567, 470, 446, 928, 124,  61,
                                           322, 895, 890,  60,  87, 969, 249, 700, 340, 242, 254, 966, 423, 186,
                                           718, 731, 114, 369,  81, 232, 452, 233,  47, 932, 636, 788, 226,  93,
                                            73,  13, 833, 940, 458, 265, 801, 241, 212, 910,  35, 197, 614, 758,
                                           918,  23, 729, 176, 970, 283, 146, 469, 779, 698, 130, 684,  52,  55,
                                           259,  28, 752,  48, 816, 509, 120, 723, 404, 311, 844, 656, 454, 144,
                                           649,  77, 163, 735, 600, 379, 818, 519, 597, 916,  56, 157, 183, 405,
                                            43, 508, 632, 462, 795, 271, 427, 562, 864, 640, 455, 829, 769, 953,
                                           132, 561, 529, 168, 497, 739, 817, 441, 923, 876, 843, 222, 546, 882,
                                           859, 571, 328, 650, 288, 641, 301, 675, 264, 974, 808, 306, 596,  59,
                                           290, 855, 239, 732, 610, 365,  38, 893, 881, 879, 579, 206, 174, 353,
                                           592, 576, 722, 557, 209, 782, 166, 956, 907, 631, 565, 141, 165, 929,
                                           307, 651, 990, 522, 586, 468,  34, 942, 156, 701, 897, 178, 370, 383,
                                           832, 985, 915, 314, 507,  49, 604, 763, 355, 575,  36, 480, 332, 960,
                                           387, 399, 121, 629,  46, 559, 128, 145, 523, 386, 749, 616, 841, 543,
                                           167, 645, 775, 638, 745, 402, 995, 767, 591, 549, 505, 712, 517, 694,
                                           837, 707, 831, 706, 201, 376, 552, 540, 123, 122, 963, 589, 936, 125,
                                           321, 411, 431, 256, 716, 574, 724, 951, 998, 525, 858, 996, 315, 979,
                                           601, 535, 941, 426, 743, 225, 270, 555, 821, 730,  70, 665, 865, 326,
                                            40, 528, 930, 190,  83, 358, 663, 711, 725, 819,  92, 450, 474, 191,
                                           291, 172, 512, 884, 894, 175,  69, 189, 537,  45, 538,  39, 352, 224,
                                           982, 857,   2, 693,  64, 734, 924, 847, 203, 115, 420, 373, 927, 746,
                                            86, 978, 902,   6, 380, 133, 637, 714, 104, 173, 339,  95, 653, 939,
                                           216, 152,  72, 108, 986, 162, 971, 210, 484, 279, 710, 802, 300, 648,
                                           182, 959, 699, 113, 657, 696, 275,  16, 227, 588, 390, 845, 962,  50,
                                           634, 878, 251,  12,  96, 606, 482, 199, 409, 983, 372, 585, 838, 988,
                                           921,  94,  29, 308, 750, 292, 262, 323, 504, 733, 478, 134, 799, 319,
                                           667, 257, 421, 639, 756, 260, 184, 335, 501, 510, 531, 599, 330,  80,
                                           852, 196,  32, 891, 396,  15, 234, 487, 419, 511, 437, 240, 302, 807,
                                           954, 572, 449, 892,  84, 434,  10, 305, 408,  68, 846, 839, 211, 276,
                                           875, 142, 364, 686, 628, 143, 246, 570, 164, 976, 869, 388, 824, 958,
                                           382, 949, 660, 900, 155, 471, 313, 797, 683, 298, 147, 676, 429,  99,
                                           498,  27, 874, 381, 654, 418, 987, 917, 490, 347, 250, 235, 447, 806,
                                           655, 400, 520, 205, 748, 444, 583, 223, 566, 461, 237, 674, 912, 137,
                                           887, 695, 608, 341, 785, 789, 159, 828, 367, 349, 877, 278,  37, 285,
                                           773, 318, 659, 425, 545, 914, 994, 445, 282, 325, 392, 753, 617, 435,
                                           863, 783, 378, 359, 809, 245, 467, 299,  82, 626, 213, 139, 486, 502,
                                           514, 842, 905, 919,  88, 532, 107, 973, 320, 904, 304, 112, 428, 980,
                                           101, 244, 577, 440, 840, 856, 738, 100, 835, 823,  76, 238, 933, 704,
                                           334, 170, 533, 281,   3, 268, 778, 111, 661, 619, 786, 870, 726, 647,
                                           263, 903, 463, 161, 266, 272, 228, 395, 666, 715, 286, 615, 908, 351,
                                           774, 991, 792, 384, 885, 742,  42, 997, 705, 481, 149, 479, 348, 992,
                                           491, 672, 248,   4, 569, 853, 595, 456, 126, 277, 243, 416]), tensor([5, 3, 1, 9, 1, 2, 5, 5, 1, 6, 7, 3, 7, 4, 1, 7, 7, 3, 2, 5, 6, 8, 8, 4,
                                           5, 9, 9, 9, 3, 3, 7, 6, 8, 8, 9, 6, 2, 4, 6, 7, 6, 7, 5, 4, 6, 7, 1, 3,
                                           7, 7, 3, 1, 1, 4, 7, 3, 3, 4, 8, 1, 5, 2, 0, 5, 6, 1, 7, 2, 6, 6, 1, 9,
                                           5, 8, 6, 1, 8, 0, 3, 3, 6, 0, 3, 5, 9, 4, 9, 8, 1, 6, 0, 0, 6, 8, 3, 1,
                                           9, 8, 3, 9, 7, 3, 1, 1, 6, 7, 8, 4, 6, 8, 2, 6, 9, 1, 2, 5, 3, 4, 8, 2,
                                           9, 7, 6, 7, 7, 3, 0, 5, 1, 4, 4, 2, 6, 9, 7, 4, 9, 0, 9, 5, 0, 0, 7, 6,
                                           4, 3, 3, 2, 9, 8, 6, 0, 8, 0, 6, 4, 7, 9, 6, 2, 5, 1, 3, 3, 6, 5, 8, 4,
                                           4, 1, 1, 2, 5, 5, 2, 7, 6, 5, 4, 6, 4, 3, 8, 9, 3, 5, 9, 5, 2, 0, 9, 1,
                                           7, 9, 4, 0, 6, 8, 5, 0, 8, 5, 0, 7, 4, 7, 7, 6, 0, 1, 9, 6, 4, 7, 6, 2,
                                           9, 1, 7, 6, 9, 8, 4, 1, 6, 3, 9, 0, 2, 1, 4, 0, 5, 2, 9, 5, 9, 8, 6, 2,
                                           1, 3, 3, 1, 9, 4, 7, 7, 4, 9, 4, 5, 1, 8, 2, 8, 4, 8, 2, 9, 1, 5, 1, 9,
                                           4, 7, 4, 1, 9, 5, 3, 1, 4, 7, 8, 1, 8, 6, 1, 5, 4, 6, 0, 0, 3, 8, 2, 7,
                                           7, 0, 7, 6, 6, 9, 3, 2, 7, 6, 4, 6, 9, 1, 9, 3, 0, 6, 2, 2, 6, 4, 7, 5,
                                           3, 8, 1, 6, 5, 2, 6, 7, 7, 1, 2, 1, 5, 6, 9, 8, 5, 6, 3, 0, 8, 2, 2, 9,
                                           8, 5, 6, 2, 9, 0, 4, 5, 0, 7, 7, 6, 0, 4, 3, 3, 6, 9, 2, 5, 5, 4, 6, 5,
                                           1, 7, 4, 5, 2, 3, 4, 0, 6, 5, 9, 2, 7, 1, 6, 4, 0, 2, 1, 5, 1, 9, 2, 5,
                                           3, 2, 2, 9, 0, 6, 2, 7, 4, 7, 6, 8, 0, 3, 8, 3, 1, 4, 8, 2, 1, 2, 9, 5,
                                           4, 5, 2, 6, 1, 8, 2, 2, 6, 9, 6, 0, 6, 9, 0, 8, 0, 9, 5, 0, 3, 4, 0, 8,
                                           7, 9, 7, 7, 8, 8, 2, 6, 1, 9, 8, 4, 1, 9, 7, 2, 4, 7, 6, 7, 7, 7, 3, 2,
                                           2, 1, 7, 5, 3, 5, 5, 6, 3, 1, 7, 7, 2, 5, 2, 7, 3, 9, 7, 3, 8, 0, 1, 5,
                                           3, 4, 2, 9, 0, 1, 1, 1, 5, 0, 4, 8, 9, 3, 4, 1, 9, 4, 7, 3, 3, 3, 3, 0,
                                           9, 7, 9, 8, 9, 8, 2, 7, 7, 0, 1, 1, 3, 7, 1, 2, 6, 5, 6, 1, 5, 1, 0, 7,
                                           3, 8, 1, 5, 6, 1, 0, 2, 0, 2, 3, 2, 7, 1, 6, 3, 9, 2, 7, 6, 0, 3, 4, 0,
                                           2, 4, 6, 1, 2, 1, 7, 4, 8, 3, 1, 5, 5, 5, 2, 1, 0, 7, 0, 4, 7, 8, 6, 6,
                                           6, 9, 4, 8, 9, 5, 2, 4, 9, 2, 9, 3, 2, 4, 1, 4, 2, 1, 8, 5, 8, 9, 8, 6])),
                             names=('seed_nodes', 'labels'),
                     ),
           test_set=ItemSet(items=(tensor([483, 834, 198, 410, 185, 768, 393, 668, 820, 169, 861, 609, 708, 909,
                                          496, 195, 737, 558, 363, 296, 342, 622,   1, 582, 385, 466, 719, 605,
                                          680, 607, 759, 521, 780, 127,  57, 171, 906, 764, 635, 284, 489, 261,
                                          920, 131, 192, 527, 181, 827, 925, 439, 274, 194, 109, 135, 289, 938,
                                          119, 766, 460, 151, 761, 515, 500, 728,  31, 688,  19, 317, 926, 375,
                                          689, 691, 424,   0, 548, 810, 459, 220, 327,  65, 800, 578, 972, 221,
                                          945, 397, 771, 812,  79,  41, 762, 554, 993, 252, 103, 148, 871, 981,
                                          436,  53, 310, 105, 961, 815, 179, 422, 975, 612, 581, 475, 536, 950,
                                          679, 611, 944, 580, 673, 136, 438, 717, 670, 669,  24, 564, 403, 692,
                                            8, 516,  66, 934, 967, 433, 784, 214, 273, 343, 253, 593,   5, 539,
                                          541, 573, 207, 414, 803, 336, 556, 867, 333, 368, 826, 443, 848, 630,
                                           25,  30, 476, 931, 295, 854, 937, 451, 329, 999, 946, 345, 984, 413,
                                          755, 965, 825, 709, 740, 394, 231, 913, 664, 188, 830,  90, 154,  17,
                                          772, 550, 346, 354, 530,  71, 180, 791, 889, 643, 401, 770, 662, 584,
                                          796, 623, 362, 957]), tensor([9, 0, 3, 6, 8, 2, 3, 4, 8, 7, 7, 0, 7, 9, 9, 7, 1, 5, 9, 1, 5, 1, 2, 3,
                                          4, 8, 6, 8, 0, 2, 2, 8, 0, 8, 4, 6, 6, 9, 1, 4, 8, 6, 0, 5, 8, 7, 9, 2,
                                          8, 4, 7, 4, 2, 8, 8, 4, 6, 6, 8, 2, 2, 8, 9, 0, 2, 1, 6, 5, 9, 4, 0, 6,
                                          2, 8, 5, 6, 5, 7, 4, 5, 4, 8, 3, 7, 8, 9, 2, 2, 3, 0, 0, 8, 9, 6, 9, 8,
                                          2, 3, 3, 2, 5, 9, 2, 1, 5, 3, 2, 6, 7, 1, 0, 1, 9, 7, 3, 3, 1, 4, 7, 3,
                                          8, 2, 4, 6, 1, 6, 8, 9, 7, 9, 9, 0, 9, 5, 8, 6, 4, 9, 3, 0, 3, 3, 9, 5,
                                          5, 0, 5, 6, 8, 0, 9, 1, 0, 5, 9, 0, 6, 1, 0, 8, 2, 1, 2, 9, 8, 9, 3, 0,
                                          0, 3, 3, 6, 9, 0, 0, 7, 1, 2, 0, 5, 4, 9, 8, 3, 0, 1, 9, 8, 0, 5, 7, 5,
                                          5, 0, 2, 5, 0, 5, 7, 1])),
                            names=('seed_nodes', 'labels'),
                    ),
           metadata={'name': 'node_classification', 'num_classes': 10},
)

Loaded link prediction task: OnDiskTask(validation_set=ItemSet(items=(tensor([[172, 566],
                                                [146,  84],
                                                [513,  89],
                                                ...,
                                                [910,  69],
                                                [387, 420],
                                                [226, 561]]), tensor([[291,  63, 592,  ..., 566,  88, 340],
                                                [536, 763,  30,  ..., 750, 926, 918],
                                                [625, 312, 200,  ..., 305, 576, 486],
                                                ...,
                                                [604, 741, 863,  ..., 979, 832, 650],
                                                [598, 288, 783,  ..., 871, 619, 180],
                                                [441, 128, 836,  ..., 422,  37, 577]])),
                                  names=('node_pairs', 'negative_dsts'),
                          ),
           train_set=ItemSet(items=(tensor([[511, 276],
                                           [768, 983],
                                           [791, 649],
                                           ...,
                                           [ 93, 655],
                                           [403, 296],
                                           [512, 344]]),),
                             names=('node_pairs',),
                     ),
           test_set=ItemSet(items=(tensor([[307, 851], ...]), tensor([[287,  42, 462,  ..., 537, 423, 215],
                                          [539, 399, 762,  ...,  56, 886, 340],
                                          [691, 379, 358,  ..., 833, 341, 861],
                                          ...,
                                          [723, 664, 193,  ..., 452, 389, 731],
                                          [487, 372, 163,  ..., 451, 908, 793],
                                          [528, 337,  26,  ..., 830, 265, 404]])),
                            names=('node_pairs', 'negative_dsts'),
                    ),
           metadata={'name': 'link_prediction', 'num_classes': 10},
)
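
With the dataset loaded, the sets are ready to feed a GraphBolt data pipeline. As a quick sanity check, here is a minimal sketch that batches the node classification train set with gb.ItemSampler (the batch size is arbitrary; the seed_nodes and labels fields follow the names declared in metadata.yaml):

[9]:
# Batch the train set; each yielded MiniBatch groups 16 items together.
item_sampler = gb.ItemSampler(nc_task.train_set, batch_size=16, shuffle=True)
minibatch = next(iter(item_sampler))
print(minibatch.seed_nodes)
print(minibatch.labels)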