KNNGraph
class dgl.nn.pytorch.factory.KNNGraph(k)
Bases: torch.nn.modules.module.Module
Layer that transforms one point set into a graph, or a batch of point sets with the same number of points into a union of those graphs.
The KNNGraph is implemented in the following steps:
1. Compute an \(N \times N\) matrix of pairwise distances for all points.
2. Pick the \(k\) points with the smallest distances for each point as its k-nearest neighbors.
3. Construct a graph where each point is a node with incoming edges from its k-nearest neighbors.
The overall computational complexity is \(O(N^2(\log N + D))\).
If a batch of point sets is provided, the point \(j\) in point set \(i\) is mapped to graph node ID: \(i \times M + j\), where \(M\) is the number of nodes in each point set.
The predecessors of each node are the k-nearest neighbors of the corresponding point.
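These steps can be illustrated with a brute-force construction for a single point set (a sketch only, not the library's internal implementation), using torch.cdist, Tensor.topk, and dgl.graph; the function name knn_graph_sketch is hypothetical:

import torch
import dgl

def knn_graph_sketch(x, k):
    # x: float tensor of shape (M, D) holding a single point set.
    # Step 1: M x M matrix of pairwise Euclidean distances.
    dist = torch.cdist(x, x)                       # (M, M)
    # Step 2: for each point, take the k closest points (the point itself is
    # included, since its distance to itself is 0).
    knn_idx = dist.topk(k, largest=False).indices  # (M, k)
    # Step 3: add an edge from each neighbor to the point, so the k-nearest
    # neighbors become the predecessors of the node.
    dst = torch.arange(x.shape[0]).repeat_interleave(k)
    src = knn_idx.reshape(-1)
    return dgl.graph((src, dst), num_nodes=x.shape[0])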
- Parameters
k (int) – The number of neighbors.
Notes
The nearest neighbors found for a node include the node itself.
Examples
The following example uses the PyTorch backend.
>>> import torch
>>> from dgl.nn.pytorch.factory import KNNGraph
>>>
>>> kg = KNNGraph(2)
>>> x = torch.tensor([[0,1], [1,2], [1,3], [100, 101], [101, 102], [50, 50]])
>>> g = kg(x)
>>> print(g.edges())
(tensor([0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 4, 5]), tensor([0, 0, 1, 2, 1, 2, 5, 3, 4, 3, 4, 5]))
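The same module also accepts a batch of point sets. A hedged sketch continuing the session above (the tensor values are illustrative; node IDs follow the \(i \times M + j\) mapping described earlier):

>>> # Two point sets with 4 points each; the result is one graph whose
>>> # 2 * 4 = 8 nodes follow the i * M + j mapping.
>>> x_batch = torch.tensor([[[0., 1.], [1., 2.], [1., 3.], [2., 2.]],
...                         [[100., 101.], [101., 102.], [50., 50.], [60., 60.]]])
>>> g_batch = kg(x_batch)
>>> g_batch.num_nodes()
8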
forward(x, algorithm='bruteforce-blas', dist='euclidean')
Forward computation.
- Parameters
x (Tensor) – \((M, D)\) or \((N, M, D)\) where \(N\) means the number of point sets, \(M\) means the number of points in each point set, and \(D\) means the size of features.
algorithm (str, optional) –
Algorithm used to compute the k-nearest neighbors.
'bruteforce-blas' will first compute the distance matrix using the BLAS matrix multiplication operation provided by the backend framework, then use a topk algorithm to get the k-nearest neighbors. This method is fast when the point set is small, but it has \(O(N^2)\) memory complexity, where \(N\) is the number of points.
'bruteforce' will compute distances pair by pair and directly select the k-nearest neighbors during distance computation. This method is slower than 'bruteforce-blas' but has less memory overhead (i.e., \(O(Nk)\), where \(N\) is the number of points and \(k\) is the number of nearest neighbors per node) since it does not need to store all distances.
'bruteforce-sharemem' (CUDA only) is similar to 'bruteforce' but uses shared memory on CUDA devices as a buffer. This method is faster than 'bruteforce' when the dimension of the input points is not large. It is only available on CUDA devices.
'kd-tree' will use the kd-tree algorithm (CPU only). This method is suitable for low-dimensional data (e.g., 3D point clouds).
'nn-descent' is an approximate approach from the paper Efficient k-nearest neighbor graph construction for generic similarity measures. This method searches for nearest-neighbor candidates among "neighbors' neighbors".
(default: 'bruteforce-blas'; see the usage sketch after this section)
dist (str, optional) –
The distance metric used to compute distances between points. It can be one of the following metrics:
'euclidean': Use Euclidean distance (L2 norm), \(\sqrt{\sum_{i} (x_{i} - y_{i})^{2}}\).
'cosine': Use cosine distance.
(default: 'euclidean')
- Returns
A DGLGraph without features.
- Return type
DGLGraph
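A brief, hedged usage sketch of the optional forward arguments (the random point cloud is illustrative; 'kd-tree' runs on CPU only):

>>> import torch
>>> from dgl.nn.pytorch.factory import KNNGraph
>>>
>>> kg = KNNGraph(3)
>>> x = torch.randn(100, 3)                            # small 3-D point cloud on CPU
>>> g1 = kg(x, algorithm='kd-tree')                    # kd-tree search, CPU only
>>> g2 = kg(x, algorithm='bruteforce', dist='cosine')  # cosine distance instead of Euclidean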