Single Machine Multi-GPU Minibatch Node Classification

In this tutorial, you will learn how to use multiple GPUs to train a graph neural network (GNN) for node classification.

(Time estimate: 8 minutes)

This tutorial assumes that you have read the Training GNN with Neighbor Sampling for Node Classification tutorial. It also assumes that you know the basics of multi-GPU training of general models with DistributedDataParallel.

Note

See this tutorial from PyTorch for general multi-GPU training with DistributedDataParallel. Also, see the first section of the multi-GPU graph classification tutorial for an overview of using DistributedDataParallel with DGL.

Loading Dataset

OGB has already prepared the data as a DGLGraph object. The following code is copy-pasted from the Training GNN with Neighbor Sampling for Node Classification tutorial.

import dgl
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from dgl.nn import SAGEConv
from ogb.nodeproppred import DglNodePropPredDataset
import tqdm
import sklearn.metrics

dataset = DglNodePropPredDataset('ogbn-arxiv')

graph, node_labels = dataset[0]
# Add reverse edges since ogbn-arxiv is unidirectional.
graph = dgl.add_reverse_edges(graph)
graph.ndata['label'] = node_labels[:, 0]

node_features = graph.ndata['feat']
num_features = node_features.shape[1]
num_classes = (node_labels.max() + 1).item()

idx_split = dataset.get_idx_split()
train_nids = idx_split['train']
valid_nids = idx_split['valid']
test_nids = idx_split['test']    # Test node IDs; not used in this tutorial.

Out:

Downloading https://snap.stanford.edu/ogb/data/nodeproppred/arxiv.zip

  0%|          | 0/81 [00:00<?, ?it/s]
Downloaded 0.08 GB: 100%|##########| 81/81 [00:19<00:00,  4.26it/s]
Extracting dataset/arxiv.zip
Loading necessary files...
This might take a while.
Processing graphs...

  0%|          | 0/1 [00:00<?, ?it/s]
100%|##########| 1/1 [00:00<00:00, 7839.82it/s]
Converting graphs into DGL objects...

  0%|          | 0/1 [00:00<?, ?it/s]
100%|##########| 1/1 [00:00<00:00, 86.28it/s]
Saving...
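
At this point it can help to sanity-check what was loaded. The following is a minimal sketch (it only uses the variables defined above) that prints the graph summary and the split sizes; the exact numbers depend on the OGB release you downloaded.

# Quick sanity check (sketch): inspect the loaded graph and the data split.
print(graph)                          # node/edge counts and feature schemes
print('num_features:', num_features, 'num_classes:', num_classes)
print('train/valid/test sizes:', len(train_nids), len(valid_nids), len(test_nids))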

Defining Model

The model is again identical to the one in the Training GNN with Neighbor Sampling for Node Classification tutorial.

class Model(nn.Module):
    def __init__(self, in_feats, h_feats, num_classes):
        super(Model, self).__init__()
        self.conv1 = SAGEConv(in_feats, h_feats, aggregator_type='mean')
        self.conv2 = SAGEConv(h_feats, num_classes, aggregator_type='mean')
        self.h_feats = h_feats

    def forward(self, mfgs, x):
        h_dst = x[:mfgs[0].num_dst_nodes()]
        h = self.conv1(mfgs[0], (x, h_dst))
        h = F.relu(h)
        h_dst = h[:mfgs[1].num_dst_nodes()]
        h = self.conv2(mfgs[1], (h, h_dst))
        return h
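
Before adding the multi-GPU machinery, you can verify the model's input and output shapes with a minimal single-process sketch. It reuses the sampler and dataloader classes from the previous tutorial (and the graph, train_nids, num_features, and num_classes defined above) to run one forward pass on a single sampled minibatch.

# Single-process sketch: sample one minibatch and run a forward pass.
# The output should have one row per output (seed) node and one column per class.
sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
dataloader = dgl.dataloading.NodeDataLoader(
    graph, train_nids, sampler,
    batch_size=1024, shuffle=True, drop_last=False, num_workers=0)
model = Model(num_features, 128, num_classes)
input_nodes, output_nodes, mfgs = next(iter(dataloader))
predictions = model(mfgs, mfgs[0].srcdata['feat'])
print(predictions.shape)    # (number of output nodes, num_classes)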

Defining Training Procedure

The training procedure is slightly different from what you saw previously, in that you will need to:

  • Initialize a distributed training context with torch.distributed.

  • Wrap your model with torch.nn.parallel.DistributedDataParallel.

  • Add a use_ddp=True argument to the DGL dataloader you wish to run together with DDP.

You will also need to wrap the training loop inside a function so that you can spawn subprocesses to run it.

def run(proc_id, devices):
    # Initialize distributed training context.
    dev_id = devices[proc_id]
    dist_init_method = 'tcp://{master_ip}:{master_port}'.format(master_ip='127.0.0.1', master_port='12345')
    if torch.cuda.device_count() < 1:
        device = torch.device('cpu')
        torch.distributed.init_process_group(
            backend='gloo', init_method=dist_init_method, world_size=len(devices), rank=proc_id)
    else:
        torch.cuda.set_device(dev_id)
        device = torch.device('cuda:' + str(dev_id))
        torch.distributed.init_process_group(
            backend='nccl', init_method=dist_init_method, world_size=len(devices), rank=proc_id)

    # Define training and validation dataloader, copied from the previous tutorial
    # but with one line of difference: use_ddp to enable distributed data parallel
    # data loading.
    sampler = dgl.dataloading.MultiLayerNeighborSampler([4, 4])
    train_dataloader = dgl.dataloading.NodeDataLoader(
        # The following arguments are specific to NodeDataLoader.
        graph,              # The graph
        train_nids,         # The node IDs to iterate over in minibatches
        sampler,            # The neighbor sampler
        device=device,      # Put the sampled MFGs on CPU or GPU
        use_ddp=True,       # Make it work with distributed data parallel
        # The following arguments are inherited from PyTorch DataLoader.
        batch_size=1024,    # Per-device batch size.
                            # The effective batch size is this number times the number of GPUs.
        shuffle=True,       # Whether to shuffle the nodes for every epoch
        drop_last=False,    # Whether to drop the last incomplete batch
        num_workers=0       # Number of sampler processes
    )
    valid_dataloader = dgl.dataloading.NodeDataLoader(
        graph, valid_nids, sampler,
        device=device,
        use_ddp=False,
        batch_size=1024,
        shuffle=False,
        drop_last=False,
        num_workers=0,
    )

    model = Model(num_features, 128, num_classes).to(device)
    # Wrap the model with distributed data parallel module.
    if device == torch.device('cpu'):
        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=None, output_device=None)
    else:
        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device], output_device=device)

    # Define optimizer
    opt = torch.optim.Adam(model.parameters())

    best_accuracy = 0
    best_model_path = './model.pt'

    # Copied from previous tutorial with changes highlighted.
    for epoch in range(10):
        train_dataloader.set_epoch(epoch)    # <--- necessary for dataloader with DDP.
        model.train()

        with tqdm.tqdm(train_dataloader) as tq:
            for step, (input_nodes, output_nodes, mfgs) in enumerate(tq):
                # feature copy from CPU to GPU takes place here
                inputs = mfgs[0].srcdata['feat']
                labels = mfgs[-1].dstdata['label']

                predictions = model(mfgs, inputs)

                loss = F.cross_entropy(predictions, labels)
                opt.zero_grad()
                loss.backward()
                opt.step()

                accuracy = sklearn.metrics.accuracy_score(labels.cpu().numpy(), predictions.argmax(1).detach().cpu().numpy())

                tq.set_postfix({'loss': '%.03f' % loss.item(), 'acc': '%.03f' % accuracy}, refresh=False)

        model.eval()

        # Evaluate on only the first GPU.
        if proc_id == 0:
            predictions = []
            labels = []
            with tqdm.tqdm(valid_dataloader) as tq, torch.no_grad():
                for input_nodes, output_nodes, mfgs in tq:
                    inputs = mfgs[0].srcdata['feat']
                    labels.append(mfgs[-1].dstdata['label'].cpu().numpy())
                    predictions.append(model(mfgs, inputs).argmax(1).cpu().numpy())
                predictions = np.concatenate(predictions)
                labels = np.concatenate(labels)
                accuracy = sklearn.metrics.accuracy_score(labels, predictions)
                print('Epoch {} Validation Accuracy {}'.format(epoch, accuracy))
                if best_accuracy < accuracy:
                    best_accuracy = accuracy
                    torch.save(model.state_dict(), best_model_path)

        # Note that this tutorial does not train the whole model to the end.
        break

Spawning Trainer Processes

A typical scenario for multi-GPU training with DDP is to replicate the model once per GPU, and spawn one trainer process per GPU.

PyTorch tutorials recommend using multiprocessing.spawn to spawn multiple processes. However, this is undesirable for training node classification or link prediction models on a single large graph, especially on Linux. The reason is that a single large graph itself may take a lot of memory, and mp.spawn duplicates all objects in the program, including the large graph. Consequently, the large graph will be duplicated as many times as the number of GPUs.

To alleviate the problem, we recommend using multiprocessing.Process, which forks from the main process and lets the trainer processes share the same graph object via copy-on-write. This can greatly reduce memory consumption.

Normally, DGL maintains only one sparse matrix representation (usually COO) for each graph and, for efficiency, creates additional formats on demand when certain APIs are called. For instance, calling in_degrees will create a CSC representation of the graph, and calling out_degrees will create a CSR representation. A consequence is that if a graph is shared with trainer processes via copy-on-write before its CSC/CSR has been created, each trainer will build its own CSC/CSR replica once in_degrees or out_degrees is called. To avoid this, create all sparse matrix representations beforehand with the create_formats_ method:

graph.create_formats_()
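
If you want to confirm that the formats were indeed materialized, DGLGraph.formats() reports which ones exist; a quick sketch:

# Quick check (sketch): formats() lists which sparse formats have been created.
# After create_formats_(), 'coo', 'csr', and 'csc' should all be materialized,
# so no trainer process has to build its own replica later.
print(graph.formats())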

Then you can spawn the subprocesses to train with multiple GPUs.

Note

You will need to use dgl.multiprocessing instead of the Python multiprocessing package. dgl.multiprocessing is identical to Python’s built-in multiprocessing except that it handles the subtleties between forking and multithreading in Python.

# Say you have four GPUs.
num_gpus = 4
import dgl.multiprocessing as mp
devices = list(range(num_gpus))
procs = []
for proc_id in range(num_gpus):
    p = mp.Process(target=run, args=(proc_id, devices))
    p.start()
    procs.append(p)
for p in procs:
    p.join()

# Thumbnail credits: Stanford CS224W Notes
# sphinx_gallery_thumbnail_path = '_static/blitz_1_introduction.png'

Total running time of the script: ( 0 minutes 24.065 seconds)
