Graph Convolutional Network

Author: Qi Huang, Minjie Wang, Yu Gai, Quan Gan, Zheng Zhang

Warning

This tutorial aims at gaining insights into the paper, with code as a means of explanation. The implementation is therefore NOT optimized for running efficiency. For a recommended implementation, please refer to the official examples.

This is a gentle introduction to using DGL to implement Graph Convolutional Networks (Kipf & Welling, Semi-Supervised Classification with Graph Convolutional Networks). We explain what is under the hood of the GraphConv module. The reader is expected to learn how to define a new GNN layer using DGL’s message passing APIs.

Model Overview

GCN from the perspective of message passing

We describe a layer of the graph convolutional neural network from a message passing perspective; the math can be found here. It boils down to the following two steps, for each node \(u\):

1) Aggregate neighbors’ representations \(h_{v}\) to produce an intermediate representation \(\hat{h}_u\).
2) Transform the aggregated representation \(\hat{h}_{u}\) with a linear projection followed by a non-linearity: \(h_{u} = f(W \hat{h}_u)\).

We will implement step 1 with DGL message passing, and step 2 with PyTorch’s nn.Module.

GCN implementation with DGL

We first define the message and reduce functions as usual. Since the aggregation on a node \(u\) only involves summing over the neighbors’ representations \(h_v\), we can simply use DGL’s builtin functions:

import dgl
import dgl.function as fn
import torch as th
import torch.nn as nn
import torch.nn.functional as F
from dgl import DGLGraph

# The message function copies the source node feature 'h' into the message 'm';
# the reduce function sums the incoming messages 'm' into the new node feature 'h'.
gcn_msg = fn.copy_u(u='h', out='m')
gcn_reduce = fn.sum(msg='m', out='h')

We then proceed to define the GCNLayer module. A GCNLayer essentially performs message passing on all the nodes and then applies a fully-connected layer.

Note

This shows how to implement a GCN from scratch. DGL provides a more efficient builtin GCN layer module.
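
For reference, the builtin module can be used roughly as follows (a minimal sketch only; note that unlike the simplified layer implemented below, GraphConv also applies the symmetric normalization discussed at the end of this tutorial):

from dgl.nn import GraphConv

# DGL's builtin GCN layer: a drop-in nn.Module that performs the same
# message passing plus normalization and the linear projection.
conv = GraphConv(in_feats=1433, out_feats=16)
# h = conv(g, features)  # `g` and `features` are loaded later in this tutorial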

class GCNLayer(nn.Module):
    def __init__(self, in_feats, out_feats):
        super(GCNLayer, self).__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, g, feature):
        # Creating a local scope so that all the stored ndata and edata
        # (such as the `'h'` ndata below) are automatically popped out
        # when the scope exits.
        with g.local_scope():
            g.ndata['h'] = feature
            g.update_all(gcn_msg, gcn_reduce)
            h = g.ndata['h']
            return self.linear(h)

The forward function is essentially the same as in any other commonly seen NN model in PyTorch. We can initialize the GCN like any nn.Module. For example, let’s define a simple neural network consisting of two GCN layers. Suppose we are training a classifier on the Cora dataset (the input feature size is 1433 and the number of classes is 7). The last GCN layer computes node embeddings, so in general it does not apply an activation.

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer1 = GCNLayer(1433, 16)
        self.layer2 = GCNLayer(16, 7)

    def forward(self, g, features):
        x = F.relu(self.layer1(g, features))
        x = self.layer2(g, x)
        return x

net = Net()
print(net)

Out:

Net(
  (layer1): GCNLayer(
    (linear): Linear(in_features=1433, out_features=16, bias=True)
  )
  (layer2): GCNLayer(
    (linear): Linear(in_features=16, out_features=7, bias=True)
  )
)

We load the Cora dataset using DGL’s built-in data module.

from dgl.data import CoraGraphDataset
def load_cora_data():
    dataset = CoraGraphDataset()
    g = dataset[0]
    features = g.ndata['feat']
    labels = g.ndata['label']
    train_mask = g.ndata['train_mask']
    test_mask = g.ndata['test_mask']
    return g, features, labels, train_mask, test_mask

Once the model is trained, we can use the following method to evaluate its performance on the test dataset:

def evaluate(model, g, features, labels, mask):
    model.eval()
    with th.no_grad():
        logits = model(g, features)
        logits = logits[mask]
        labels = labels[mask]
        _, indices = th.max(logits, dim=1)
        correct = th.sum(indices == labels)
        return correct.item() * 1.0 / len(labels)

We then train the network as follows:

import time
import numpy as np
g, features, labels, train_mask, test_mask = load_cora_data()
# Add edges between each node and itself to preserve old node representations
g.add_edges(g.nodes(), g.nodes())
optimizer = th.optim.Adam(net.parameters(), lr=1e-2)
dur = []
for epoch in range(50):
    if epoch >=3:
        t0 = time.time()

    net.train()
    logits = net(g, features)
    logp = F.log_softmax(logits, 1)
    loss = F.nll_loss(logp[train_mask], labels[train_mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch >=3:
        dur.append(time.time() - t0)

    acc = evaluate(net, g, features, labels, test_mask)
    print("Epoch {:05d} | Loss {:.4f} | Test Acc {:.4f} | Time(s) {:.4f}".format(
            epoch, loss.item(), acc, np.mean(dur)))

Out:

  NumNodes: 2708
  NumEdges: 10556
  NumFeats: 1433
  NumClasses: 7
  NumTrainingSamples: 140
  NumValidationSamples: 500
  NumTestSamples: 1000
Done loading data from cached files.
Epoch 00000 | Loss 1.9516 | Test Acc 0.1890 | Time(s) nan
Epoch 00001 | Loss 1.7861 | Test Acc 0.2730 | Time(s) nan
Epoch 00002 | Loss 1.6461 | Test Acc 0.3760 | Time(s) nan
Epoch 00003 | Loss 1.5241 | Test Acc 0.4500 | Time(s) 0.0225
Epoch 00004 | Loss 1.4203 | Test Acc 0.5150 | Time(s) 0.0197
Epoch 00005 | Loss 1.3213 | Test Acc 0.5670 | Time(s) 0.0187
Epoch 00006 | Loss 1.2219 | Test Acc 0.6070 | Time(s) 0.0194
Epoch 00007 | Loss 1.1260 | Test Acc 0.6460 | Time(s) 0.0189
Epoch 00008 | Loss 1.0361 | Test Acc 0.6780 | Time(s) 0.0196
Epoch 00009 | Loss 0.9520 | Test Acc 0.7000 | Time(s) 0.0192
Epoch 00010 | Loss 0.8747 | Test Acc 0.7140 | Time(s) 0.0194
Epoch 00011 | Loss 0.8041 | Test Acc 0.7200 | Time(s) 0.0191
Epoch 00012 | Loss 0.7368 | Test Acc 0.7290 | Time(s) 0.0196
Epoch 00013 | Loss 0.6756 | Test Acc 0.7340 | Time(s) 0.0193
Epoch 00014 | Loss 0.6216 | Test Acc 0.7350 | Time(s) 0.0197
Epoch 00015 | Loss 0.5719 | Test Acc 0.7370 | Time(s) 0.0195
Epoch 00016 | Loss 0.5250 | Test Acc 0.7430 | Time(s) 0.0192
Epoch 00017 | Loss 0.4818 | Test Acc 0.7470 | Time(s) 0.0194
Epoch 00018 | Loss 0.4433 | Test Acc 0.7480 | Time(s) 0.0195
Epoch 00019 | Loss 0.4081 | Test Acc 0.7530 | Time(s) 0.0195
Epoch 00020 | Loss 0.3748 | Test Acc 0.7480 | Time(s) 0.0198
Epoch 00021 | Loss 0.3445 | Test Acc 0.7510 | Time(s) 0.0196
Epoch 00022 | Loss 0.3173 | Test Acc 0.7550 | Time(s) 0.0194
Epoch 00023 | Loss 0.2920 | Test Acc 0.7570 | Time(s) 0.0196
Epoch 00024 | Loss 0.2684 | Test Acc 0.7580 | Time(s) 0.0195
Epoch 00025 | Loss 0.2468 | Test Acc 0.7580 | Time(s) 0.0196
Epoch 00026 | Loss 0.2273 | Test Acc 0.7580 | Time(s) 0.0198
Epoch 00027 | Loss 0.2094 | Test Acc 0.7580 | Time(s) 0.0197
Epoch 00028 | Loss 0.1927 | Test Acc 0.7580 | Time(s) 0.0197
Epoch 00029 | Loss 0.1774 | Test Acc 0.7560 | Time(s) 0.0196
Epoch 00030 | Loss 0.1635 | Test Acc 0.7530 | Time(s) 0.0195
Epoch 00031 | Loss 0.1508 | Test Acc 0.7490 | Time(s) 0.0194
Epoch 00032 | Loss 0.1391 | Test Acc 0.7480 | Time(s) 0.0193
Epoch 00033 | Loss 0.1283 | Test Acc 0.7510 | Time(s) 0.0195
Epoch 00034 | Loss 0.1184 | Test Acc 0.7510 | Time(s) 0.0194
Epoch 00035 | Loss 0.1092 | Test Acc 0.7540 | Time(s) 0.0195
Epoch 00036 | Loss 0.1009 | Test Acc 0.7550 | Time(s) 0.0194
Epoch 00037 | Loss 0.0933 | Test Acc 0.7540 | Time(s) 0.0194
Epoch 00038 | Loss 0.0863 | Test Acc 0.7560 | Time(s) 0.0195
Epoch 00039 | Loss 0.0799 | Test Acc 0.7560 | Time(s) 0.0194
Epoch 00040 | Loss 0.0739 | Test Acc 0.7540 | Time(s) 0.0195
Epoch 00041 | Loss 0.0685 | Test Acc 0.7550 | Time(s) 0.0194
Epoch 00042 | Loss 0.0635 | Test Acc 0.7540 | Time(s) 0.0195
Epoch 00043 | Loss 0.0590 | Test Acc 0.7550 | Time(s) 0.0195
Epoch 00044 | Loss 0.0547 | Test Acc 0.7540 | Time(s) 0.0195
Epoch 00045 | Loss 0.0508 | Test Acc 0.7530 | Time(s) 0.0196
Epoch 00046 | Loss 0.0472 | Test Acc 0.7510 | Time(s) 0.0195
Epoch 00047 | Loss 0.0439 | Test Acc 0.7500 | Time(s) 0.0196
Epoch 00048 | Loss 0.0408 | Test Acc 0.7500 | Time(s) 0.0196
Epoch 00049 | Loss 0.0380 | Test Acc 0.7520 | Time(s) 0.0196

GCN in one formula

Mathematically, the GCN model follows this formula:

\(H^{(l+1)} = \sigma(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}H^{(l)}W^{(l)})\)

Here, \(H^{(l)}\) denotes the matrix of node representations at the \(l^{th}\) layer in the network, \(\sigma\) is the non-linearity, and \(W^{(l)}\) is the weight matrix for this layer. \(\tilde{A} = A + I\) is the adjacency matrix with self-loops added, and \(\tilde{D}\) is its degree matrix; the superscript ~ refers to the variant where we add an edge between each node and itself to preserve its old representation in graph convolutions. The shape of the input \(H^{(0)}\) is \(N \times D\), where \(N\) is the number of nodes and \(D\) is the number of input features. We can chain up multiple such layers to produce a node-level representation output with shape \(N \times F\), where \(F\) is the dimension of the output node feature vector.

The equation can be implemented efficiently using sparse matrix multiplication kernels (as in Kipf’s pygcn code). The DGL implementation above in fact already uses this trick, thanks to the builtin functions.
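
To make the connection concrete, here is a minimal sketch (not the official implementation) of one layer written as an explicit sparse matrix multiplication in PyTorch, assuming adj is the normalized adjacency \(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\) stored as a torch sparse tensor and linear is an nn.Linear:

# One GCN layer as a sparse matmul: H' = sigma(adj @ H @ W).
def gcn_layer_spmm(adj, h, linear):
    return F.relu(linear(th.sparse.mm(adj, h)))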

Note that the tutorial code implements a simplified version of GCN where we replace \(\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\) with \(\tilde{A}\). For a full implementation, see our example here.
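
As a rough illustration, the full normalization can also be folded into the same message-passing skeleton used above, by scaling node features with \(\tilde{D}^{-\frac{1}{2}}\) before and after update_all. The sketch below is ours, not the official example; the class name NormGCNLayer and the degree clamping are illustrative choices, and it assumes self-loops have already been added as in the training code:

class NormGCNLayer(nn.Module):
    def __init__(self, in_feats, out_feats):
        super(NormGCNLayer, self).__init__()
        self.linear = nn.Linear(in_feats, out_feats)

    def forward(self, g, feature):
        with g.local_scope():
            # D~^{-1/2}, computed from in-degrees (self-loops included)
            deg = g.in_degrees().float().clamp(min=1)
            norm = th.pow(deg, -0.5).unsqueeze(1)
            g.ndata['h'] = feature * norm      # scale source features
            g.update_all(gcn_msg, gcn_reduce)  # sum over neighbors (A~)
            h = g.ndata['h'] * norm            # scale aggregated features
            return self.linear(h)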
