:orphan:

Paper Study with DGL
====================
Graph neural networks and their variants
-----------------------------------------

* **Graph convolutional network (GCN)** `[research paper] `__ `[tutorial] <1_gnn/1_gcn.html>`__
  `[PyTorch code] `__ `[MXNet code] `__

* **Graph attention network (GAT)** `[research paper] `__ `[tutorial] <1_gnn/9_gat.html>`__
  `[PyTorch code] `__ `[MXNet code] `__: GAT extends GCN by deploying multi-head
  attention over the neighborhood of a node, which greatly enhances the capacity
  and expressiveness of the model (see the message-passing sketch after this list).

* **Relational-GCN** `[research paper] `__ `[tutorial] <1_gnn/4_rgcn.html>`__
  `[PyTorch code] `__ `[MXNet code] `__: Relational-GCN allows multiple edges
  between two entities of a graph. Edges with distinct relationships are encoded
  differently.

* **Line graph neural network (LGNN)** `[research paper] `__
  `[tutorial] <1_gnn/6_line_graph.html>`__ `[PyTorch code] `__: This network
  focuses on community detection by inspecting graph structures. It uses
  representations of both the original graph and its line-graph companion. In
  addition to demonstrating how an algorithm can harness multiple graphs, this
  implementation shows how you can judiciously mix simple tensor operations,
  sparse-matrix tensor operations, and message passing with DGL.
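As a taste of how these models map onto DGL primitives, here is a minimal,
hedged sketch of a GCN-style layer: neighbor features are gathered with DGL's
built-in message functions and then projected. The ``GCNLayer`` name is
illustrative, and the degree normalization from the paper is omitted for
brevity.

.. code-block:: python

    import dgl
    import dgl.function as fn
    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        """Minimal GCN-style layer sketch: sum neighbor features, then project.
        (Illustrative only; the paper's degree normalization is omitted.)"""
        def __init__(self, in_feats, out_feats):
            super().__init__()
            self.linear = nn.Linear(in_feats, out_feats)

        def forward(self, g, feat):
            with g.local_scope():
                g.ndata["h"] = feat
                # Message passing: copy each source node's feature along its
                # out-edges, then sum the incoming messages at each node.
                g.update_all(fn.copy_u("h", "m"), fn.sum("m", "h"))
                return self.linear(g.ndata["h"])

    # Toy usage: a 4-node cycle with 5-dimensional node features.
    g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))
    out = GCNLayer(5, 8)(g, torch.randn(4, 5))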
.. only:: html

  .. image:: /tutorials/models/1_gnn/images/thumb/sphx_glr_1_gcn_thumb.png
    :alt: Graph Convolutional Network

  :ref:`sphx_glr_tutorials_models_1_gnn_1_gcn.py`
.. only:: html

  .. image:: /tutorials/models/1_gnn/images/thumb/sphx_glr_4_rgcn_thumb.png
    :alt: Relational Graph Convolutional Network

  :ref:`sphx_glr_tutorials_models_1_gnn_4_rgcn.py`
.. only:: html

  .. image:: /tutorials/models/1_gnn/images/thumb/sphx_glr_6_line_graph_thumb.png
    :alt: Line Graph Neural Network

  :ref:`sphx_glr_tutorials_models_1_gnn_6_line_graph.py`
.. only:: html

  .. image:: /tutorials/models/1_gnn/images/thumb/sphx_glr_9_gat_thumb.png
    :alt: Understand Graph Attention Network

  :ref:`sphx_glr_tutorials_models_1_gnn_9_gat.py`
Batching many small graphs
--------------------------

* **Tree-LSTM** `[paper] `__ `[tutorial] <2_small_graph/3_tree-lstm.html>`__
  `[PyTorch code] `__: Sentences have inherent structures that are thrown away
  by treating them simply as sequences. Tree-LSTM is a powerful model that
  learns the representation from prior syntactic structures such as a parse
  tree. The challenge in training is that padding sentences to the maximum
  length no longer works, because trees of different sentences have different
  sizes and topologies. DGL solves this problem by adding the trees to a bigger
  container graph and then using message passing to exploit maximum
  parallelism. Batching is the key API for this (see the sketch after this
  list).
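A hedged sketch of the batching idea, with two toy graphs standing in for
parse trees of different shapes. ``dgl.batch`` and ``dgl.unbatch`` are the
actual DGL APIs; the graph structures and feature sizes are arbitrary.

.. code-block:: python

    import dgl
    import torch

    # Two toy "trees" of different sizes and topologies; leaves point to a root.
    g1 = dgl.graph(([0, 1], [2, 2]))        # 3 nodes
    g2 = dgl.graph(([0, 1, 2], [3, 3, 3]))  # 4 nodes
    g1.ndata["x"] = torch.randn(3, 5)
    g2.ndata["x"] = torch.randn(4, 5)

    # dgl.batch merges them into one container graph, so message passing
    # runs over both trees in parallel with no padding.
    bg = dgl.batch([g1, g2])
    assert bg.batch_size == 2 and bg.num_nodes() == 7

    # dgl.unbatch recovers the individual graphs afterwards.
    g1_out, g2_out = dgl.unbatch(bg)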
.. only:: html

  .. image:: /tutorials/models/2_small_graph/images/thumb/sphx_glr_3_tree-lstm_thumb.png
    :alt: Tree-LSTM in DGL

  :ref:`sphx_glr_tutorials_models_2_small_graph_3_tree-lstm.py`
Generative models
-----------------

* **DGMG** `[paper] `__ `[tutorial] <3_generative_model/5_dgmg.html>`__
  `[PyTorch code] `__: This model belongs to the family that deals with
  structural generation. Deep Generative Models of Graphs (DGMG) uses a
  state-machine approach. It is also very challenging because, unlike
  Tree-LSTM, every sample has a dynamic, probability-driven structure that is
  not available before training. You can progressively leverage intra- and
  inter-graph parallelism to steadily improve the performance (a sketch of the
  generation loop follows).
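To make the state-machine idea concrete, here is a hedged sketch of a
DGMG-style generation loop. ``add_node``, ``add_edge``, and ``choose_dest``
are hypothetical stand-ins for the learned decision modules described in the
paper, not DGL or tutorial APIs.

.. code-block:: python

    import dgl
    import random

    def generate(add_node, add_edge, choose_dest, max_nodes=10):
        """DGMG-style state machine (sketch): alternately decide whether to
        add a node, then whether (and where) to add edges from it.
        add_node/add_edge/choose_dest are hypothetical learned modules."""
        g = dgl.graph(([], []))
        while g.num_nodes() < max_nodes and add_node(g):
            g.add_nodes(1)
            src = g.num_nodes() - 1
            while g.num_nodes() > 1 and add_edge(g):
                dst = choose_dest(g, src)  # pick an earlier node to connect to
                g.add_edges(src, dst)
        return g

    # Toy usage with random (untrained) decisions.
    g = generate(lambda g: random.random() < 0.9,
                 lambda g: random.random() < 0.3,
                 lambda g, src: random.randrange(src))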
.. only:: html

  .. image:: /tutorials/models/3_generative_model/images/thumb/sphx_glr_5_dgmg_thumb.png
    :alt: Generative Models of Graphs

  :ref:`sphx_glr_tutorials_models_3_generative_model_5_dgmg.py`
Revisit classic models from a graph perspective
-----------------------------------------------

* **Capsule** `[paper] `__ `[tutorial] <4_old_wines/2_capsule.html>`__
  `[PyTorch code] `__: This computer vision model has two key ideas: first,
  enhancing the feature representation from a scalar to a vector, called a
  *capsule*; second, replacing max-pooling with dynamic routing. The idea of
  dynamic routing is to integrate a lower-level capsule into one or several
  higher-level capsules with non-parametric message passing. The tutorial shows
  how the latter can be implemented with DGL APIs.

* **Transformer** `[paper] `__ `[tutorial] <4_old_wines/7_transformer.html>`__
  `[PyTorch code] `__ and **Universal Transformer** `[paper] `__
  `[tutorial] <4_old_wines/7_transformer.html>`__ `[PyTorch code] `__: These
  two models replace recurrent neural networks (RNNs) with several layers of
  multi-head attention to encode and discover structures among the tokens of a
  sentence. These attention mechanisms can likewise be formulated as graph
  operations with message passing (see the edge-softmax sketch after this
  list).
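As a flavor of the graph formulation, here is a hedged sketch of single-head
attention written as DGL message passing. The ``graph_attention`` name is
illustrative; ``edge_softmax`` and the ``u_dot_v``/``u_mul_e`` built-ins are
real DGL APIs, and the :math:`1/\sqrt{d}` scaling from the paper is omitted.

.. code-block:: python

    import dgl
    import dgl.function as fn
    import torch
    from dgl.nn.functional import edge_softmax

    def graph_attention(g, q, k, v):
        """Single-head attention as message passing (illustrative sketch).
        q, k, v: per-node query/key/value tensors of shape (N, d)."""
        with g.local_scope():
            g.ndata.update({"q": q, "k": k, "v": v})
            # Score each edge by the dot product of source key and
            # destination query (scaling omitted for brevity).
            g.apply_edges(fn.u_dot_v("k", "q", "score"))
            # Softmax-normalize scores over each node's incoming edges.
            g.edata["a"] = edge_softmax(g, g.edata["score"])
            # Weighted sum of source values at each destination node.
            g.update_all(fn.u_mul_e("v", "a", "m"), fn.sum("m", "out"))
            return g.ndata["out"]

    # Toy usage on a fully connected 3-node graph without self-loops.
    g = dgl.graph(([0, 0, 1, 1, 2, 2], [1, 2, 0, 2, 0, 1]))
    q, k, v = (torch.randn(3, 4) for _ in range(3))
    out = graph_attention(g, q, k, v)  # shape (3, 4)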
.. only:: html

  .. image:: /tutorials/models/4_old_wines/images/thumb/sphx_glr_2_capsule_thumb.png
    :alt: Capsule Network

  :ref:`sphx_glr_tutorials_models_4_old_wines_2_capsule.py`
.. only:: html

  .. image:: /tutorials/models/4_old_wines/images/thumb/sphx_glr_7_transformer_thumb.png
    :alt: Transformer as a Graph Neural Network

  :ref:`sphx_glr_tutorials_models_4_old_wines_7_transformer.py`
.. toctree::
   :hidden:
   :includehidden:

   /tutorials/models//1_gnn/index.rst
   /tutorials/models//2_small_graph/index.rst
   /tutorials/models//3_generative_model/index.rst
   /tutorials/models//4_old_wines/index.rst

.. only:: html

   .. rst-class:: sphx-glr-signature

      `Gallery generated by Sphinx-Gallery `_