kaolin.ops.gcn
API
- class kaolin.ops.gcn.GraphConv(input_dim, output_dim, self_layer=True, bias=True)
Bases: Module
A simple graph convolution layer, similar to the one defined by Kipf et al. in "Semi-Supervised Classification with Graph Convolutional Networks", ICLR 2017.
This operation with self_layer=False is equivalent to \((A H W)\), where:
- \(H\) is the node features, of shape \((\text{batch_size}, \text{num_nodes}, \text{input_dim})\).
- \(W\) is a weight matrix, of shape \((\text{input_dim}, \text{output_dim})\).
- \(A\) is the adjacency matrix, of shape \((\text{num_nodes}, \text{num_nodes})\). It may include self-loops.
With normalize_adj=True, it is equivalent to \((D^{-1} A H W)\), where:
- \(D\) is a diagonal matrix with \(D_{ii}\) equal to the sum of the i-th row of \(A\); in other words, \(D_{ii}\) is the incoming degree of node \(i\).
With self_layer=True, it is equivalent to the above plus \((H W_{\text{self}})\), where:
- \(W_{\text{self}}\) is a separate weight matrix that filters each node's own features.
Note that when self_layer=True, \(A\) should not include self-loops.
- Parameters
input_dim (int) – The dimension of the input features.
output_dim (int) – The dimension of the output features.
self_layer (bool) – Whether to add a self-layer that separately filters each node's own features. Default: True.
bias (bool) – Whether the layers include a bias term. Default: True.
Example
>>> node_feat = torch.rand(1, 3, 5)
>>> i = torch.LongTensor(
...     [[0, 1, 1, 2, 2, 0], [1, 0, 2, 1, 0, 2]])
>>> v = torch.FloatTensor([1, 1, 1, 1, 1, 1])
>>> adj = torch.sparse.FloatTensor(i, v, torch.Size([3, 3]))
>>> model = GraphConv(5, 10)
>>> output = model(node_feat, adj)
>>> # pre-normalize adj
>>> adj = normalize_adj(adj)
>>> output = model(node_feat, adj, normalize_adj=False)
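For intuition, the formulas above can be reproduced with plain dense tensor ops. The sketch below is illustrative only: graph_conv_reference is a made-up name, not part of the kaolin API, it assumes a dense adjacency matrix, and the clamp guarding isolated nodes is an added assumption that may differ from the real layer.

>>> import torch
>>> def graph_conv_reference(node_feat, adj, weight, weight_self=None):
...     # Dense sketch of the documented math: D^{-1} A H W (+ H W_self).
...     # node_feat: (batch_size, num_nodes, input_dim); adj: (num_nodes, num_nodes), dense.
...     deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # D_ii = incoming degree of node i
...     norm_adj = adj / deg                             # D^{-1} A  (clamp guards isolated nodes)
...     out = norm_adj @ node_feat @ weight              # D^{-1} A H W, broadcast over the batch
...     if weight_self is not None:
...         out = out + node_feat @ weight_self          # + H W_self
...     return out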
- forward(node_feat, adj, normalize_adj=True)
- Parameters
node_feat (torch.FloatTensor) – The input features of each node, of shape \((\text{batch_size}, \text{num_nodes}, \text{input_dim})\).
adj (torch.sparse.FloatTensor or torch.FloatTensor) – The adjacency matrix, of shape \((\text{num_nodes}, \text{num_nodes})\). adj[i, j] is non-zero if there is an incoming edge from j to i. It should not include self-loops if self_layer is True.
normalize_adj (bool, optional) – Set this to True to apply normalization to the adjacency matrix; that is, each output feature will be divided by the number of incoming neighbors. If normalization is not desired, or if the adjacency matrix is pre-normalized, set this to False to improve performance. Default: True.
- Returns
The output features of each node, of shape \((\text{batch_size}, \text{num_nodes}, \text{output_dim})\).
- Return type
(torch.FloatTensor)
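The edge-direction convention (adj[i, j] non-zero for an edge from j to i) is easy to get backwards, so here is a small illustrative sketch of building such a sparse adjacency from a hypothetical edge list; the variable names and the toy graph are made up:

>>> import torch
>>> # hypothetical directed edges (src -> dst): dst receives features from src
>>> src = torch.tensor([0, 1, 2])
>>> dst = torch.tensor([1, 2, 0])
>>> # row = receiver i, column = sender j, matching adj[i, j] != 0 for an edge j -> i
>>> indices = torch.stack([dst, src])
>>> adj = torch.sparse_coo_tensor(indices, torch.ones(3), (3, 3))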
- initialize()
- kaolin.ops.gcn.normalize_adj(adj)
Normalize the adjacency matrix such that the sum of each row is 1.
This operation is slow, so it should be done only once for a graph and then reused.
Both sparse and dense tensors are supported, and the return type will be the same as the input type. For example, if the input is a sparse tensor, the normalized matrix will also be a sparse tensor.
- Parameters
adj (torch.sparse.FloatTensor or torch.FloatTensor) – Input adjacency matrix, of shape \((\text{num_nodes}, \text{num_nodes})\).
- Returns
A new adjacency matrix with the same connectivity as the input, but with the sum of each row normalized to 1.
- Return type
(torch.sparse.FloatTensor or torch.FloatTensor)
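Since normalization is slow, a typical pattern is to normalize once per graph and then skip the per-call normalization in forward. The sketch below is a minimal example; the toy graph and shapes are made up:

>>> import torch
>>> from kaolin.ops.gcn import GraphConv, normalize_adj
>>> i = torch.tensor([[0, 1, 2], [1, 2, 0]])
>>> adj = torch.sparse_coo_tensor(i, torch.ones(3), (3, 3))
>>> adj = normalize_adj(adj)                            # done once per graph
>>> model = GraphConv(5, 10)
>>> node_feat = torch.rand(1, 3, 5)
>>> out = model(node_feat, adj, normalize_adj=False)    # reuse the pre-normalized adj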
- kaolin.ops.gcn.sparse_bmm(sparse_matrix, dense_matrix_batch)
Perform torch.bmm on an unbatched sparse matrix and a batched dense matrix.
- Parameters
sparse_matrix (torch.sparse.FloatTensor) – Input sparse matrix, of shape \((\text{M}, \text{N})\).
dense_matrix_batch (torch.FloatTensor) – Input batched dense matrix, of shape \((\text{batch_size}, \text{N}, \text{P})\).
- Returns
Result of the batched matrix multiplication, of shape \((\text{batch_size}, \text{M}, \text{P})\).
- Return type
(torch.FloatTensor)
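For reference, the result should match a per-batch torch.sparse.mm loop; the sketch below illustrates this under that assumption, with made-up shapes and indices:

>>> import torch
>>> from kaolin.ops.gcn import sparse_bmm
>>> M, N, P, B = 4, 5, 6, 2
>>> idx = torch.tensor([[0, 1, 3], [2, 0, 4]])
>>> sparse = torch.sparse_coo_tensor(idx, torch.ones(3), (M, N))
>>> dense = torch.rand(B, N, P)
>>> out = sparse_bmm(sparse, dense)                     # (B, M, P)
>>> expected = torch.stack([torch.sparse.mm(sparse, dense[b]) for b in range(B)])
>>> assert torch.allclose(out, expected)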