dgl.segmented_knn_graph

dgl.segmented_knn_graph(x, k, segs, algorithm='bruteforce-blas', dist='euclidean')

Construct multiple graphs from multiple sets of points according to k-nearest-neighbor (KNN) and return.

Compared with dgl.knn_graph(), this allows multiple point sets with different capacities. The points from different sets are stored contiguously in the x tensor, and segs specifies the number of points in each point set. The function constructs a KNN graph for each point set, where the predecessors of each point are its k-nearest neighbors measured by the chosen distance metric. DGL then composes all KNN graphs into a single graph with multiple connected components.

Parameters
x (Tensor) – Coordinates/features of points. Must be 2D. It can be either on CPU or GPU.

k (int) – The number of nearest neighbors per node.

segs (list[int]) – Number of points in each point set. The numbers in segs must sum up to the number of rows in x.

algorithm (str, optional) – Algorithm used to compute the k-nearest neighbors.

'bruteforce-blas' first computes the distance matrix using the BLAS matrix multiplication operations provided by backend frameworks, then uses a top-k algorithm to get the k-nearest neighbors. This method is fast when the point set is small, but has \(O(N^2)\) memory complexity, where \(N\) is the number of points.

'bruteforce' computes distances pair by pair and directly selects the k-nearest neighbors during distance computation. This method is slower than 'bruteforce-blas' but has less memory overhead (i.e., \(O(Nk)\), where \(N\) is the number of points and \(k\) is the number of nearest neighbors per node), since it does not need to store all distances.

'bruteforce-sharemem' (CUDA only) is similar to 'bruteforce' but uses shared memory on CUDA devices as a buffer. This method is faster than 'bruteforce' when the dimension of the input points is not large.

'kd-tree' uses the kd-tree algorithm (CPU only). This method is suitable for low-dimensional data (e.g., 3D point clouds).

'nn-descent' is an approximate approach from the paper Efficient k-nearest neighbor graph construction for generic similarity measures. This method searches for nearest-neighbor candidates among "neighbors' neighbors".

(default: 'bruteforce-blas')
dist (str, optional) – The distance metric used to compute distance between points. It can be one of the following metrics:

'euclidean': Use Euclidean distance (L2 norm) \(\sqrt{\sum_{i} (x_{i} - y_{i})^{2}}\).

'cosine': Use cosine distance.

(default: 'euclidean')
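For reference, the two metrics can be written out in plain Python. This is an illustrative sketch only; DGL evaluates the distances with the backend framework's tensor operations.

```python
import math

def euclidean(x, y):
    # L2 norm: sqrt(sum_i (x_i - y_i)^2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine(x, y):
    # Cosine distance: 1 minus the cosine of the angle between x and y.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

print(euclidean([0.0, 0.5, 0.2], [0.1, 0.3, 0.2]))  # sqrt(0.05), about 0.2236
print(cosine([1.0, 0.0], [0.0, 1.0]))               # 1.0 for orthogonal vectors
```

Note that cosine distance depends only on the direction of the feature vectors, not their magnitude, which can matter when point features are unnormalized.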
Returns

The graph. The node IDs are in the same order as x.

Return type

DGLGraph
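To make the default algorithm option concrete, here is a minimal pure-Python sketch of the 'bruteforce-blas' strategy: materialize the full N-by-N distance matrix (the \(O(N^2)\) memory cost noted above), then take the k smallest entries per point. The point values are made up for illustration, and the real implementation uses BLAS matrix multiplications on backend tensors rather than Python loops.

```python
import math

points = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
n, k = len(points), 2

# Step 1: full n-by-n distance matrix -- this is the O(N^2) memory cost.
dist = [[math.dist(p, q) for q in points] for p in points]

# Step 2: for each point, the indices of the k smallest entries in its row.
# Each point is its own nearest neighbor (distance 0), so it appears in its
# own neighbor list, as in DGL's KNN graphs.
knn = [sorted(range(n), key=lambda j: dist[i][j])[:k] for i in range(n)]
print(knn)  # [[0, 1], [1, 0], [2, 0], [3, 1]]
```

The 'bruteforce' variant avoids step 1 entirely by keeping only a running top-k per point, trading speed for the \(O(Nk)\) memory bound described above.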
Examples

The following examples use the PyTorch backend.

>>> import dgl
>>> import torch

In the example below, the first point set has three points and the second point set has four points.

>>> # Features/coordinates of the first point set
>>> x1 = torch.tensor([[0.0, 0.5, 0.2],
...                    [0.1, 0.3, 0.2],
...                    [0.4, 0.2, 0.2]])
>>> # Features/coordinates of the second point set
>>> x2 = torch.tensor([[0.3, 0.2, 0.1],
...                    [0.5, 0.2, 0.3],
...                    [0.1, 0.1, 0.2],
...                    [0.6, 0.3, 0.3]])
>>> x = torch.cat([x1, x2], dim=0)
>>> segs = [x1.shape[0], x2.shape[0]]
>>> knn_g = dgl.segmented_knn_graph(x, 2, segs)
>>> knn_g.edges()
(tensor([0, 0, 1, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6, 6]),
 tensor([0, 1, 0, 1, 2, 2, 3, 5, 4, 6, 3, 5, 4, 6]))
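The example's edges can be cross-checked with a small pure-Python emulation under the default Euclidean metric (a sketch of the semantics, not DGL's actual implementation): run brute-force KNN inside each segment and offset node IDs by the sizes of the preceding segments, mirroring how segmented_knn_graph composes its per-set graphs.

```python
import math

def knn_edges(points, k, offset=0):
    # Brute-force KNN inside one point set: each point's k nearest
    # neighbors (itself included, at distance 0) become its predecessors.
    edges = []
    for j, q in enumerate(points):
        nearest = sorted(range(len(points)),
                         key=lambda i: math.dist(points[i], q))[:k]
        edges.extend((i + offset, j + offset) for i in nearest)
    return edges

def segmented_knn_edges(x, k, segs):
    # Per-segment KNN, with node IDs offset by the total size of the
    # preceding segments, then merged into one edge list.
    edges, offset = [], 0
    for n in segs:
        edges += knn_edges(x[offset:offset + n], k, offset)
        offset += n
    return sorted(edges)

x = [[0.0, 0.5, 0.2], [0.1, 0.3, 0.2], [0.4, 0.2, 0.2],  # first point set
     [0.3, 0.2, 0.1], [0.5, 0.2, 0.3],
     [0.1, 0.1, 0.2], [0.6, 0.3, 0.3]]                    # second point set
print(segmented_knn_edges(x, 2, [3, 4]))
```

Sorting the printed (source, destination) pairs reproduces exactly the pairs obtained by zipping the two edge tensors in the example above, and no edge crosses the boundary between node 2 and node 3, which is what makes each point set its own connected component.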