#### Note

To create a practice binary-community dataset from Cora, first extract\n all two-class pairs from Cora's original seven classes. For each pair,\n treat each class as one community, and find the largest subgraph that\n contains at least one cross-community edge as the training example. As\n a result, there are 21 training samples in total in this small dataset.
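As a quick sanity check (a sketch, not part of the dataset code), the count of 21 comes from the number of unordered two-class pairs among seven classes:

```python
from itertools import combinations

# All unordered pairs of Cora's seven classes; each pair yields
# one binary-community training sample.
pairs = list(combinations(range(7), 2))
print(len(pairs))  # 21
```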

\n\nWith the following code, you can visualize one of the training samples and its community structure.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import dgl\nimport networkx as nx\nimport matplotlib.pyplot as plt\n\ntrain_set = dgl.data.CoraBinary()\nG1, pmpd1, label1 = train_set[1]\nnx_G1 = G1.to_networkx()\n\ndef visualize(labels, g):\n pos = nx.spring_layout(g, seed=1)\n plt.figure(figsize=(8, 8))\n plt.axis('off')\n nx.draw_networkx(g, pos=pos, node_size=50, cmap=plt.get_cmap('coolwarm'),\n node_color=labels, edge_color='k',\n arrows=False, width=0.5, style='dotted', with_labels=False)\nvisualize(label1, nx_G1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To learn more, go to the original research paper to see how to generalize\nto the multiple-community case.\n\nCommunity detection in a supervised setting\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe community detection problem can be tackled with both supervised and\nunsupervised approaches. You can formulate\ncommunity detection in a supervised setting as follows:\n\n- Each training example consists of $(G, Z)$, where $G$ is a\n directed graph $(V, E)$. For each node $v$ in $V$, we\n assign a ground truth community label $z_v \\in \\{0,1\\}$.\n- The parameterized model $f(G, \\theta)$ predicts a label set\n $\\tilde{Z} = f(G)$ for nodes $V$.\n- For each example $(G, Z)$, the model learns to minimize a specially\n designed loss function (the equivariant loss) $L_{equivariant}(\\tilde{Z}, Z)$.\n\n#### Note

In this supervised setting, the model naturally predicts a label for\n each community. However, community assignment should be equivariant to\n label permutations. To achieve this, in each forward process, we take\n the minimum among losses calculated from all possible permutations of\n labels.\n\n Mathematically, this means\n $L_{equivariant} = \\underset{\\pi \\in S_c}{\\min} -\\log(\\hat{\\pi}, \\pi)$,\n where $S_c$ is the set of all permutations of labels,\n $\\hat{\\pi}$ is the set of predicted labels, and\n $-\\log(\\hat{\\pi}, \\pi)$ denotes the negative log likelihood.\n\n For instance, for a sample graph with nodes $\\{1,2,3,4\\}$ and\n community assignment $\\{A, A, A, B\\}$, with each node's label\n $l \\in \\{0,1\\}$, the set of all possible permutations is\n $S_c = \\{\\{0,0,0,1\\}, \\{1,1,1,0\\}\\}$.
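The permutation-minimum idea can be sketched in plain Python (the probabilities below are made up for illustration; the actual training loop later applies cross-entropy to model logits):

```python
import math

# Toy predicted probabilities p(label = 1) for four nodes, and the
# ground-truth assignment {A, A, A, B} encoded as {0, 0, 0, 1}.
probs = [0.1, 0.2, 0.15, 0.9]
label = [0, 0, 0, 1]

def nll(probs, labels):
    # Mean negative log-likelihood of one particular labeling.
    return -sum(math.log(p if l == 1 else 1.0 - p)
                for p, l in zip(probs, labels)) / len(labels)

# Equivariant loss: the minimum over both permutations of the two labels.
loss = min(nll(probs, label), nll(probs, [1 - l for l in label]))
```

Because flipping every label leaves the set of permutations unchanged, the loss is the same no matter which of the two encodings the ground truth uses.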

\n\nLine graph neural network key ideas\n------------------------------------\nA key innovation in this topic is the use of a line graph.\nUnlike models in previous tutorials, message passing happens not only on the\noriginal graph, e.g. the binary community subgraph from Cora, but also on the\nline graph associated with the original graph.\n\nWhat is a line graph?\n~~~~~~~~~~~~~~~~~~~~~\nIn graph theory, a line graph is a graph representation that encodes the\nedge adjacency structure of the original graph.\n\nSpecifically, a line graph $L(G)$ turns an edge of the original graph `G`\ninto a node. This is illustrated with the graph below (taken from the\nresearch paper).\n\n.. figure:: https://i.imgur.com/4WO5jEm.png\n :alt: lg\n :align: center\n\nHere, $e_{A} := (i \\rightarrow j)$ and $e_{B} := (j \\rightarrow k)$\nare two edges in the original graph $G$. In the line graph $G_L$,\nthey correspond to nodes $v^{l}_{A}, v^{l}_{B}$.\n\nThe next natural question is, how to connect nodes in the line graph? How to\nconnect two edges? Here, we use the following connection rule:\n\nTwo nodes $v^{l}_{A}$, $v^{l}_{B}$ in `lg` are connected if\nthe corresponding two edges $e_{A}, e_{B}$ in `g` share one and only \none node:\n$e_{A}$'s destination node is $e_{B}$'s source node\n($j$).\n\n#### Note

Mathematically, this definition corresponds to a notion called the non-backtracking\n operator:\n $B_{(i \\rightarrow j), (\\hat{i} \\rightarrow \\hat{j})}$\n $= \\begin{cases}\n 1 \\text{ if } j = \\hat{i}, \\hat{j} \\neq i\\\\\n 0 \\text{ otherwise} \\end{cases}$\n where an edge is formed in the line graph if $B_{node1, node2} = 1$.
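This rule can be checked on a toy edge list in a few lines of plain Python (a sketch independent of DGL; the node and edge numbering here are made up):

```python
# Toy directed graph: edges as (src, dst) pairs.
edges = [(0, 1), (1, 2), (2, 0), (1, 0)]

# Non-backtracking rule: connect line-graph node a to node b when
# edge a's destination is edge b's source, and edge b does not
# immediately return to edge a's source.
lg_edges = [(a, b)
            for a, (i, j) in enumerate(edges)
            for b, (i2, j2) in enumerate(edges)
            if j == i2 and j2 != i]
print(lg_edges)  # [(0, 1), (1, 2), (2, 0)]
```

Note that edge 3 ($1 \rightarrow 0$) and edge 0 ($0 \rightarrow 1$) share a node but backtrack, so they stay disconnected in the line graph.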

\n\n\nOne layer in LGNN, algorithm structure\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nLGNN chains together a series of line graph neural network layers. The graph\nrepresentation $x$ and its line graph companion $y$ evolve with\nthe dataflow as follows.\n\n.. figure:: https://i.imgur.com/bZGGIGp.png\n :alt: alg\n :align: center\n\nAt the $k$-th layer, the $i$-th neuron of the $l$-th\nchannel updates its embedding $x^{(k+1)}_{i,l}$ with:\n\n\\begin{align}\\begin{split}\n x^{(k+1)}_{i,l} ={}&\\rho[x^{(k)}_{i}\\theta^{(k)}_{1,l}\n +(Dx^{(k)})_{i}\\theta^{(k)}_{2,l} \\\\\n &+\\sum^{J-1}_{j=0}(A^{2^{j}}x^{(k)})_{i}\\theta^{(k)}_{3+j,l}\\\\\n &+[\\{\\text{Pm},\\text{Pd}\\}y^{(k)}]_{i}\\theta^{(k)}_{3+J,l}] \\\\\n &+\\text{skip-connection}\n \\qquad i \\in V, l = 1,2,3, ... b_{k+1}/2\n \\end{split}\\end{align}\n\nThen, the line-graph representation $y^{(k+1)}_{i^{'},l^{'}}$ updates with:\n\n\\begin{align}\\begin{split}\n y^{(k+1)}_{i^{'},l^{'}} = {}&\\rho[y^{(k)}_{i^{'}}\\gamma^{(k)}_{1,l^{'}}+\n (D_{L(G)}y^{(k)})_{i^{'}}\\gamma^{(k)}_{2,l^{'}}\\\\\n &+\\sum^{J-1}_{j=0}(A_{L(G)}^{2^{j}}y^{(k)})_{i^{'}}\\gamma^{(k)}_{3+j,l^{'}}\\\\\n &+[\\{\\text{Pm},\\text{Pd}\\}^{T}x^{(k+1)}]_{i^{'}}\\gamma^{(k)}_{3+J,l^{'}}]\\\\\n &+\\text{skip-connection}\n \\qquad i^{'} \\in V_{l}, l^{'} = 1,2,3, ... b^{'}_{k+1}/2\n \\end{split}\\end{align}\n\nwhere $\\text{skip-connection}$ refers to performing the same operation without the non-linearity\n$\\rho$, and with linear projections $\\theta_{\\{\\frac{b_{k+1}}{2} + 1, ..., b_{k+1}-1, b_{k+1}\\}}$\nand $\\gamma_{\\{\\frac{b_{k+1}}{2} + 1, ..., b_{k+1}-1, b_{k+1}\\}}$.\n\nImplement LGNN in DGL\n---------------------\nEven though the equations in the previous section might seem intimidating,\nit helps to understand the following information before you implement the LGNN.\n\nThe two equations are symmetric and can be implemented as two instances\nof the same class with different parameters.\nThe first equation operates on the graph representation $x$,\nwhereas the second operates on the line-graph\nrepresentation $y$. Let us denote this abstraction as $f$. Then\nthe first is $f(x, y; \\theta_x)$, and the second\nis $f(y, x; \\theta_y)$. That is, they are parameterized to compute\nrepresentations of the original graph and its\ncompanion line graph, respectively.\n\nEach equation consists of four terms. Take the first one as an example, which follows.\n\n - $x^{(k)}\\theta^{(k)}_{1,l}$, a linear projection of the previous\n layer's output $x^{(k)}$, denoted as $\\text{prev}(x)$.\n - $(Dx^{(k)})\\theta^{(k)}_{2,l}$, a linear projection of the degree\n operator applied to $x^{(k)}$, denoted as $\\text{deg}(x)$.\n - $\\sum^{J-1}_{j=0}(A^{2^{j}}x^{(k)})\\theta^{(k)}_{3+j,l}$,\n a summation of the $2^{j}$ adjacency operators applied to $x^{(k)}$,\n denoted as $\\text{radius}(x)$.\n - $[\\{Pm,Pd\\}y^{(k)}]\\theta^{(k)}_{3+J,l}$, fusing the other\n graph's embedding information using the incidence matrices\n $\\{Pm, Pd\\}$, followed by a linear projection,\n denoted as $\\text{fuse}(y)$.\n\nEach of the terms is computed again with different\nparameters, and without the nonlinearity, to form the skip connection after the sum.\nTherefore, $f$ could be written as:\n\n .. math::\n \\begin{split}\n f(x^{(k)},y^{(k)}) = {}\\rho[&\\text{prev}(x^{(k)}) + \\text{deg}(x^{(k)}) +\\text{radius}(x^{(k)})\n +\\text{fuse}(y^{(k)})]\\\\\n +&\\text{prev}(x^{(k)}) + \\text{deg}(x^{(k)}) +\\text{radius}(x^{(k)}) +\\text{fuse}(y^{(k)})\n \\end{split}\n\nThe two equations are chained up in the following order:\n\n .. math::\n \\begin{split}\n x^{(k+1)} = {}& f(x^{(k)}, y^{(k)})\\\\\n y^{(k+1)} = {}& f(y^{(k)}, x^{(k+1)})\n \\end{split}\n\nKeep in mind the listed observations in this overview and proceed to implementation.\nAn important point is that you use different strategies for the noted terms.\n\n#### Note

You can understand $\\{Pm, Pd\\}$ more thoroughly with this explanation.\n Roughly speaking, there is a relationship between how $g$ and\n $lg$ (the line graph) work together with loopy belief propagation.\n Here, you implement $\\{Pm, Pd\\}$ as a SciPy COO sparse matrix in the dataset,\n and stack them as tensors when batching. Another batching solution is to\n treat $\\{Pm, Pd\\}$ as the adjacency matrix of a bipartite graph, which maps\n the line graph's features to the graph's, and vice versa.
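For concreteness, here is one way such an incidence matrix could be assembled as a SciPy COO matrix for a toy graph. This is only a sketch: the exact entry values distinguishing $Pm$ from $Pd$ follow the paper's convention, and every entry is simply set to 1 here.

```python
import scipy.sparse as sp

# Toy directed graph with 3 nodes; edges as (src, dst) pairs.
edges = [(0, 1), (1, 2), (2, 0)]
n_nodes = 3

# |V| x |E| incidence matrix: each column holds exactly two non-zero
# entries, one at the edge's source row and one at its destination row.
rows = [i for e in edges for i in e]
cols = [c for c in range(len(edges)) for _ in range(2)]
vals = [1.0] * (2 * len(edges))
pm_pd = sp.coo_matrix((vals, (rows, cols)), shape=(n_nodes, len(edges)))
```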

\n\nImplementing $\\text{prev}$ and $\\text{deg}$ as tensor operations\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nLinear projection and the degree operation are both simply matrix\nmultiplication. Write them as PyTorch tensor operations.\n\nIn ``__init__``, you define the projection variables.\n\n::\n\n self.linear_prev = nn.Linear(in_feats, out_feats)\n self.linear_deg = nn.Linear(in_feats, out_feats)\n\n\nIn ``forward()``, $\\text{prev}$ and $\\text{deg}$ are the same\nas any other PyTorch tensor operations.\n\n::\n\n prev_proj = self.linear_prev(feat_a)\n deg_proj = self.linear_deg(deg * feat_a)\n\nImplementing $\\text{radius}$ as message passing in DGL\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nAs discussed in the GCN tutorial, you can formulate one adjacency operator as\none step of message passing. As a generalization, the $2^j$ adjacency\noperator can be formulated as performing $2^j$ steps of message\npassing. Therefore, the summation is equivalent to summing nodes'\nrepresentations after $2^j, j=0, 1, 2..$ steps of message passing, i.e.\ngathering information from the $2^{j}$ neighborhood of each node.\n\nIn ``__init__``, define the projection variables used for each of the\n$2^j$ steps of message passing.\n\n::\n\n self.linear_radius = nn.ModuleList(\n [nn.Linear(in_feats, out_feats) for i in range(radius)])\n\nIn ``forward()``, use the following function ``aggregate_radius()`` to\ngather data from multiple hops. This can be seen in the following code.\nNote that ``update_all`` is called multiple times.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Return a list containing features gathered from multiple radii.\nimport dgl.function as fn\ndef aggregate_radius(radius, g, z):\n # initialize a list to collect message passing results\n z_list = []\n g.ndata['z'] = z\n # pull messages from the 1-hop neighborhood\n g.update_all(fn.copy_src(src='z', out='m'), fn.sum(msg='m', out='z'))\n z_list.append(g.ndata['z'])\n for i in range(radius - 1):\n for j in range(2 ** i):\n # pull messages from the 2^j-hop neighborhood\n g.update_all(fn.copy_src(src='z', out='m'), fn.sum(msg='m', out='z'))\n z_list.append(g.ndata['z'])\n return z_list"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Implementing $\\text{fuse}$ as sparse matrix multiplication\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n$\\{Pm, Pd\\}$ is a sparse matrix with only two non-zero entries in\neach column. Therefore, you construct it as a sparse matrix in the dataset,\nand implement $\\text{fuse}$ as a sparse matrix multiplication.\n\nIn ``forward()``:\n\n::\n\n fuse = self.linear_fuse(th.mm(pm_pd, feat_b))\n\nCompleting $f(x, y)$\n~~~~~~~~~~~~~~~~~~~~~~~~~~\nFinally, the following shows how to sum all the terms together, then pass the\nresult through the skip connection and batch normalization.\n\n::\n\n result = prev_proj + deg_proj + radius_proj + fuse\n\nPass the result through the skip connection.\n\n::\n\n result = th.cat([result[:, :n], F.relu(result[:, n:])], 1)\n\nThen pass the result to batch normalization.\n\n::\n\n result = self.bn(result) # batch normalization\n\n\nHere is the complete code for one LGNN layer's abstraction $f(x,y)$.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import torch as th\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass LGNNCore(nn.Module):\n def __init__(self, in_feats, out_feats, radius):\n super(LGNNCore, self).__init__()\n self.out_feats = out_feats\n self.radius = radius\n\n self.linear_prev = nn.Linear(in_feats, out_feats)\n self.linear_deg = nn.Linear(in_feats, out_feats)\n self.linear_radius = nn.ModuleList(\n [nn.Linear(in_feats, out_feats) for i in range(radius)])\n self.linear_fuse = nn.Linear(in_feats, out_feats)\n self.bn = nn.BatchNorm1d(out_feats)\n\n def forward(self, g, feat_a, feat_b, deg, pm_pd):\n # term \"prev\"\n prev_proj = self.linear_prev(feat_a)\n # term \"deg\"\n deg_proj = self.linear_deg(deg * feat_a)\n\n # term \"radius\"\n # aggregate 2^j-hop features\n hop2j_list = aggregate_radius(self.radius, g, feat_a)\n # apply linear transformation\n hop2j_list = [linear(x) for linear, x in zip(self.linear_radius, hop2j_list)]\n radius_proj = sum(hop2j_list)\n\n # term \"fuse\"\n fuse = self.linear_fuse(th.mm(pm_pd, feat_b))\n\n # sum them together\n result = prev_proj + deg_proj + radius_proj + fuse\n\n # skip connection and batch norm\n n = self.out_feats // 2\n result = th.cat([result[:, :n], F.relu(result[:, n:])], 1)\n result = self.bn(result)\n\n return result"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Chain up LGNN abstractions as an LGNN layer\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nTo implement:\n\n\\begin{align}\\begin{split}\n x^{(k+1)} = {}& f(x^{(k)}, y^{(k)})\\\\\n y^{(k+1)} = {}& f(y^{(k)}, x^{(k+1)})\n \\end{split}\\end{align}\n\nChain up two ``LGNNCore`` instances with different parameters in the forward\npass, as in the following example.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class LGNNLayer(nn.Module):\n def __init__(self, in_feats, out_feats, radius):\n super(LGNNLayer, self).__init__()\n self.g_layer = LGNNCore(in_feats, out_feats, radius)\n self.lg_layer = LGNNCore(in_feats, out_feats, radius)\n\n def forward(self, g, lg, x, lg_x, deg_g, deg_lg, pm_pd):\n next_x = self.g_layer(g, x, lg_x, deg_g, pm_pd)\n pm_pd_y = th.transpose(pm_pd, 0, 1)\n next_lg_x = self.lg_layer(lg, lg_x, x, deg_lg, pm_pd_y)\n return next_x, next_lg_x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Chain up LGNN layers\n~~~~~~~~~~~~~~~~~~~~\nDefine an LGNN with three hidden layers, as in the following example.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"class LGNN(nn.Module):\n def __init__(self, radius):\n super(LGNN, self).__init__()\n self.layer1 = LGNNLayer(1, 16, radius) # input is scalar feature\n self.layer2 = LGNNLayer(16, 16, radius) # hidden size is 16\n self.layer3 = LGNNLayer(16, 16, radius)\n self.linear = nn.Linear(16, 2) # predict two classes\n\n def forward(self, g, lg, pm_pd):\n # compute the degrees\n deg_g = g.in_degrees().float().unsqueeze(1)\n deg_lg = lg.in_degrees().float().unsqueeze(1)\n # use degree as the input feature\n x, lg_x = deg_g, deg_lg\n x, lg_x = self.layer1(g, lg, x, lg_x, deg_g, deg_lg, pm_pd)\n x, lg_x = self.layer2(g, lg, x, lg_x, deg_g, deg_lg, pm_pd)\n x, lg_x = self.layer3(g, lg, x, lg_x, deg_g, deg_lg, pm_pd)\n return self.linear(x)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Training and inference\n-----------------------\nFirst load the data.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from torch.utils.data import DataLoader\ntraining_loader = DataLoader(train_set,\n batch_size=1,\n collate_fn=train_set.collate_fn,\n drop_last=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, define the main training loop. Note that each training sample contains\nthree objects: a :class:`~dgl.DGLGraph`, a SciPy sparse matrix ``pmpd``, and a label\narray in ``numpy.ndarray``. Generate the line graph by using this command:\n\n::\n\n lg = g.line_graph(backtracking=False)\n\nNote that ``backtracking=False`` is required to correctly simulate the non-backtracking\noperator. You also define a utility function to convert the SciPy sparse matrix to a\ntorch sparse tensor.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Create the model\nmodel = LGNN(radius=3)\n# Define the optimizer\noptimizer = th.optim.Adam(model.parameters(), lr=1e-2)\n\n# A utility function to convert a scipy.sparse.coo_matrix to a torch sparse float tensor\ndef sparse2th(mat):\n value = mat.data\n indices = th.LongTensor([mat.row, mat.col])\n tensor = th.sparse.FloatTensor(indices, th.from_numpy(value).float(), mat.shape)\n return tensor\n\n# Train for 20 epochs\nfor i in range(20):\n all_loss = []\n all_acc = []\n for [g, pmpd, label] in training_loader:\n # Generate the line graph.\n lg = g.line_graph(backtracking=False)\n # Create torch tensors\n pmpd = sparse2th(pmpd)\n label = th.from_numpy(label)\n \n # Forward\n z = model(g, lg, pmpd)\n\n # Calculate loss:\n # Since there are only two communities, there are only two permutations\n # of the community labels.\n loss_perm1 = F.cross_entropy(z, label)\n loss_perm2 = F.cross_entropy(z, 1 - label)\n loss = th.min(loss_perm1, loss_perm2)\n\n # Calculate accuracy:\n _, pred = th.max(z, 1)\n acc_perm1 = (pred == label).float().mean()\n acc_perm2 = (pred == 1 - label).float().mean()\n acc = th.max(acc_perm1, acc_perm2)\n all_loss.append(loss.item())\n all_acc.append(acc.item())\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n niters = len(all_loss)\n print(\"Epoch %d | loss %.4f | accuracy %.4f\" % (i,\n sum(all_loss) / niters, sum(all_acc) / niters))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Visualize training progress\n-----------------------------\nYou can visualize the network's community prediction on one training example,\ntogether with the ground truth. Start this with the following code example.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"pmpd1 = sparse2th(pmpd1)\nLG1 = G1.line_graph(backtracking=False)\nz = model(G1, LG1, pmpd1)\n_, pred = th.max(z, 1)\nvisualize(pred, nx_G1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compare this with the ground truth below. Note that the colors might be swapped for the\ntwo communities, because the model only needs to predict the partitioning correctly\nup to a permutation of the labels.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"visualize(label1, nx_G1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Here is an animation to better understand the process. (40 epochs)\n\n.. figure:: https://i.imgur.com/KDUyE1S.gif\n :alt: lgnn-anim\n\nBatching graphs for parallelism\n--------------------------------\n\nLGNN takes a collection of different graphs.\nYou might consider whether batching can be used for parallelism.\n\nBatching has been built into the data loader itself.\nIn the ``collate_fn`` for the PyTorch data loader, graphs are batched using DGL's\nbatched graph API. DGL batches graphs by merging them\ninto a large graph, with each smaller graph's adjacency matrix being a block\nalong the diagonal of the large graph's adjacency matrix. Concatenate\n:math:`\\{Pm,Pd\\}` as a block diagonal matrix in correspondence with the DGL batched\ngraph API.\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import numpy as np\nimport scipy.sparse as sp\n\ndef collate_fn(batch):\n graphs, pmpds, labels = zip(*batch)\n batched_graphs = dgl.batch(graphs)\n batched_pmpds = sp.block_diag(pmpds)\n batched_labels = np.concatenate(labels, axis=0)\n return batched_graphs, batched_pmpds, batched_labels"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can find the complete code on Github at \n`Community Detection with Graph Neural Networks (CDGNN)