kaolin.ops.gcn

API

class kaolin.ops.gcn.GraphConv(input_dim, output_dim, self_layer=True, bias=True)

Bases: torch.nn.modules.module.Module

A simple graph convolution layer, similar to the one defined by Kipf and Welling in "Semi-Supervised Classification with Graph Convolutional Networks" (ICLR 2017).

This operation with self_layer=False is equivalent to \((A H W)\), where:

  • \(H\) is the node features with shape (batch_size, num_nodes, input_dim).

  • \(W\) is a weight matrix of shape (input_dim, output_dim).

  • \(A\) is the adjacency matrix of shape (num_nodes, num_nodes). It can include self-loops.

With normalize_adj=True, it is equivalent to \((D^{-1} A H W)\), where:

  • \(D\) is a diagonal matrix with \(D_{ii}\) equal to the sum of the i-th row of \(A\); in other words, \(D\) holds the incoming degree of each node.

With self_layer=True, it is equivalent to the above plus \((H W_{\text{self}})\), where:

  • \(W_{\text{self}}\) is a separate weight matrix that filters each node's own features.

Note that when self_layer is True, A should not include self-loops.
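
For reference, the following is a minimal sketch of the operation described above using plain torch tensors; \(H\), \(A\), \(W\) and \(W_{\text{self}}\) are stand-in tensors used only to illustrate the formulas and do not reflect kaolin's internal implementation:

>>> import torch
>>> batch_size, num_nodes, input_dim, output_dim = 2, 4, 5, 10
>>> H = torch.rand(batch_size, num_nodes, input_dim)
>>> # random adjacency with the diagonal zeroed out (no self-loops, as with self_layer=True)
>>> A = (torch.rand(num_nodes, num_nodes) > 0.5).float() * (1 - torch.eye(num_nodes))
>>> W = torch.rand(input_dim, output_dim)
>>> W_self = torch.rand(input_dim, output_dim)
>>> # D^{-1}: inverse incoming degree per node (clamped to avoid division by zero)
>>> D_inv = 1. / A.sum(dim=1, keepdim=True).clamp(min=1)
>>> # (D^{-1} A H W) + (H W_self)
>>> out = torch.matmul(D_inv * A, torch.matmul(H, W)) + torch.matmul(H, W_self)
>>> out.shape
torch.Size([2, 4, 10])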

Parameters
  • input_dim (int) – The number of features in each input node.

  • output_dim (int) – The number of features in each output node.

  • self_layer (bool) – Whether to apply a separate linear layer to each node's own features and add the result to the aggregated neighbor features (see the description above).

  • bias (bool) – Whether to add bias after the node-wise linear layer.

Example

>>> import torch
>>> from kaolin.ops.gcn import GraphConv, normalize_adj
>>> node_feat = torch.rand(1, 3, 5)
>>> i = torch.LongTensor(
...     [[0, 1, 1, 2, 2, 0], [1, 0, 2, 1, 0, 2]])
>>> v = torch.FloatTensor([1, 1, 1, 1, 1, 1])
>>> adj = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))
>>> model = GraphConv(5, 10)
>>> output = model(node_feat, adj)
>>> # pre-normalize adj
>>> adj = normalize_adj(adj)
>>> output = model(node_feat, adj, normalize_adj=False)

forward(node_feat, adj, normalize_adj=True)
Parameters
  • node_feat (torch.FloatTensor) – Shape = (batch_size, num_nodes, input_dim) The input features of each node.

  • adj (torch.sparse.FloatTensor or torch.FloatTensor) – Shape = (num_nodes, num_nodes) The adjacency matrix. adj[i, j] is non-zero if there is an incoming edge from j to i. Should not include self-loops if self_layer is True.

  • normalize_adj (bool) – If True, normalize the adjacency matrix, i.e. divide each output feature by the node's number of incoming neighbors. If normalization is not desired, or if the adjacency matrix is already pre-normalized, set this to False to improve performance.

Returns

The output features of each node. Shape = (batch_size, num_nodes, output_dim)

Return type

(torch.FloatTensor)
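
As a short sketch of the dense-adjacency path (the values here are arbitrary; a sparse adjacency works the same way, as shown in the class example above):

>>> import torch
>>> from kaolin.ops.gcn import GraphConv, normalize_adj
>>> node_feat = torch.rand(1, 3, 5)
>>> adj = torch.tensor([[0., 1., 1.],
...                     [1., 0., 1.],
...                     [1., 1., 0.]])
>>> model = GraphConv(5, 10)
>>> output = model(node_feat, adj)  # adjacency normalized internally
>>> # or pre-normalize once and reuse
>>> adj_norm = normalize_adj(adj)
>>> output = model(node_feat, adj_norm, normalize_adj=False)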

initialize()

training: bool

kaolin.ops.gcn.normalize_adj(adj)

Normalize the adjacency matrix with shape = (num_nodes, num_nodes) such that the sum of each row is 1.

This operation is slow, so it should be done only once for a graph and then reused.

This supports both sparse and dense tensors. The return type matches the input type; for example, if the input is a sparse tensor, the normalized matrix will also be a sparse tensor.

Parameters

adj (torch.sparse.FloatTensor or torch.FloatTensor) – Shape = (num_nodes, num_nodes) The adjacency matrix.

Returns

A new adjacency matrix with the same connectivity as the input, but with the sum of each row normalized to 1.

Return type

(torch.sparse.FloatTensor or torch.FloatTensor)
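
Example

A short usage sketch with a dense adjacency (the values are arbitrary; a sparse input would be returned as a sparse tensor):

>>> import torch
>>> from kaolin.ops.gcn import normalize_adj
>>> adj = torch.tensor([[0., 1., 1.],
...                     [1., 0., 0.],
...                     [1., 1., 0.]])
>>> norm = normalize_adj(adj)
>>> norm.sum(dim=1)  # every row of the normalized matrix sums to 1
tensor([1., 1., 1.])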

kaolin.ops.gcn.sparse_bmm(sparse_matrix, dense_matrix_batch)

Perform torch.bmm on an unbatched sparse matrix and a batched dense matrix.

Parameters
  • sparse_matrix (torch.sparse.FloatTensor) – Shape = (m, n)

  • dense_matrix_batch (torch.FloatTensor) – Shape = (b, n, p)

Returns

Result of the batched matrix multiplication. Shape = (b, m, p)

Return type

(torch.FloatTensor)
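
Example

A short usage sketch (the shapes and values are arbitrary; the result can be checked against an equivalent dense matmul):

>>> import torch
>>> from kaolin.ops.gcn import sparse_bmm
>>> i = torch.LongTensor([[0, 1, 2], [1, 2, 0]])
>>> v = torch.FloatTensor([1., 2., 3.])
>>> sparse_matrix = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))  # (m, n) = (3, 3)
>>> dense_matrix_batch = torch.rand(4, 3, 5)                            # (b, n, p) = (4, 3, 5)
>>> out = sparse_bmm(sparse_matrix, dense_matrix_batch)                 # (b, m, p) = (4, 3, 5)
>>> torch.allclose(out, torch.matmul(sparse_matrix.to_dense(), dense_matrix_batch))
True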