kaolin.metrics.tetmesh

API

kaolin.metrics.tetmesh.amips(tet_vertices, inverse_offset_matrix)

Compute the AMIPS (Advanced MIPS) loss as devised by Fu et al. in Computing Locally Injective Mappings by Advanced MIPS, ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH 2015.

The Jacobian can be derived as \(J = (g(x) - g(x_0)) / (x - x_0)\), where \(g\) is the deformation mapping and \(x_0\) a reference vertex of the tetrahedron.

Only tetrahedrons whose Jacobian has a positive determinant are included in the calculation, since the AMIPS loss is only defined for tetrahedrons with a positive Jacobian determinant.
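
As an illustration of the relation above, here is a minimal sketch of how the per-tetrahedron Jacobian could be assembled from edge-offset matrices. The helper name tet_jacobian and the offset convention (vertex 0 as base, row-wise offsets) are assumptions for illustration and may not match the library internals exactly.

import torch

def tet_jacobian(tet_vertices, inverse_offset_matrix):
    # Edge offsets of the deformed tetrahedrons: rows are v1 - v0, v2 - v0, v3 - v0.
    # tet_vertices: (batch_size, num_tetrahedrons, 4, 3)
    offsets = tet_vertices[..., 1:, :] - tet_vertices[..., :1, :]   # (B, T, 3, 3)
    # Discrete analogue of (g(x) - g(x_0)) / (x - x_0): deformed offsets
    # composed with the inverse of the reference (undeformed) offsets.
    return torch.matmul(offsets, inverse_offset_matrix)             # (B, T, 3, 3)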

Parameters
  • tet_vertices (torch.Tensor) – Batched tetrahedrons, of shape \((\text{batch_size}, \text{num_tetrahedrons}, 4, 3)\).

  • inverse_offset_matrix (torch.Tensor) – The inverse of the offset matrix, of shape \((\text{batch_size}, \text{num_tetrahedrons}, 3, 3)\). Refer to kaolin.ops.mesh.tetmesh.inverse_vertices_offset().

Returns

AMIPS loss for each mesh, of shape \((\text{batch_size}, 1)\).

Return type

(torch.Tensor)

Example

>>> tet_vertices = torch.tensor([[[[1.7000, 2.3000, 4.4500],
...                                [3.4800, 0.2000, 5.3000],
...                                [4.9000, 9.4500, 6.4500],
...                                [6.2000, 8.5000, 7.1000]],
...                               [[-1.3750, 1.4500, 3.2500],
...                                [4.9000, 1.8000, 2.7000],
...                                [3.6000, 1.9000, 2.3000],
...                                [1.5500, 1.3500, 2.9000]]],
...                              [[[1.7000, 2.3000, 4.4500],
...                                [3.4800, 0.2000, 5.3000],
...                                [4.9000, 9.4500, 6.4500],
...                                [6.2000, 8.5000, 7.1000]],
...                               [[-1.3750, 1.4500, 3.2500],
...                                [4.9000, 1.8000, 2.7000],
...                                [3.6000, 1.9000, 2.3000],
...                                [1.5500, 1.3500, 2.9000]]]])
>>> inverse_offset_matrix = torch.tensor([[[[ -1.1561, -1.1512, -1.9049],
...                                         [1.5138,  1.0108,  3.4302],
...                                         [1.6538, 1.0346,  4.2223]],
...                                        [[ 2.9020,  -1.0995, -1.8744],
...                                         [ 1.1554,  1.1519, 1.7780],
...                                         [-0.0766, 1.6350,  1.1064]]],
...                                        [[[-0.9969,  1.4321, -0.3075],
...                                         [-1.3414,  1.5795, -1.6571],
...                                         [-0.1775, -0.4349,  1.1772]],
...                                        [[-1.1077, -1.2441,  1.8037],
...                                         [-0.5722, 0.1755, -2.4364],
...                                         [-0.5263,  1.5765,  1.5607]]]])
>>> amips(tet_vertices, inverse_offset_matrix)
tensor([[13042.3408],
        [ 2376.2517]])
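
In a typical workflow the inverse offset matrix is precomputed from the rest (undeformed) tetrahedrons with kaolin.ops.mesh.tetmesh.inverse_vertices_offset(), and amips() is then evaluated on the deformed vertices during optimization. The sketch below illustrates this; the random placeholder tensors and the exact call signature of inverse_vertices_offset() are assumptions to verify against the ops documentation.

import torch
from kaolin.metrics.tetmesh import amips
from kaolin.ops.mesh.tetmesh import inverse_vertices_offset

# Placeholder rest-pose tetrahedrons of shape (batch_size, num_tetrahedrons, 4, 3).
rest_vertices = torch.rand(1, 8, 4, 3)
# Hypothetical small deformation applied during optimization.
deformed_vertices = rest_vertices + 0.01 * torch.randn_like(rest_vertices)

# Precompute the inverse offset matrix from the rest pose, shape (1, 8, 3, 3).
inverse_offset_matrix = inverse_vertices_offset(rest_vertices)
loss = amips(deformed_vertices, inverse_offset_matrix)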

kaolin.metrics.tetmesh.equivolume(tet_vertices, tetrahedrons_mean=None, pow=4)

Compute the EquiVolume loss as devised by Gao et al. in Learning Deformable Tetrahedral Meshes for 3D Reconstruction, NeurIPS 2020. See the supplementary material for the definition of the loss function.
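
The loss penalizes the deviation of each tetrahedron's volume from a mean volume, raised to the power pow. Below is a minimal sketch of that idea in terms of tetrahedron_volume(); the absolute value and the mean reduction over tetrahedrons are assumptions and may differ in detail from the actual implementation.

import torch
from kaolin.metrics.tetmesh import tetrahedron_volume

def equivolume_sketch(tet_vertices, tetrahedrons_mean=None, pow=4):
    # Per-tetrahedron volumes, shape (batch_size, num_tetrahedrons).
    volumes = tetrahedron_volume(tet_vertices)
    if tetrahedrons_mean is None:
        # Mean volume per mesh, kept as (batch_size, 1) for broadcasting.
        tetrahedrons_mean = volumes.mean(dim=-1, keepdim=True)
    # Penalize volume deviations from the mean, emphasized by the power.
    return torch.mean(torch.abs(volumes - tetrahedrons_mean) ** pow, dim=-1)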

Parameters
  • tet_vertices (torch.Tensor) – Batched tetrahedrons, of shape \((\text{batch_size}, \text{num_tetrahedrons}, 4, 3)\).

  • tetrahedrons_mean (torch.Tensor) – Mean volume of all tetrahedrons in a grid, of shape \((\text{batch_size})\) or \((1,)\) (broadcasting). Default: the mean of the tetrahedron volumes computed from tet_vertices.

  • pow (int) – Power for the equivolume loss. Increasing the power puts more emphasis on tetrahedrons with larger volume deviations. Default: 4.

Returns

EquiVolume loss for each mesh, of shape \((\text{batch_size}, 1)\).

Return type

(torch.Tensor)

Example

>>> tet_vertices = torch.tensor([[[[0.5000, 0.5000, 0.7500],
...                                [0.4500, 0.8000, 0.6000],
...                                [0.4750, 0.4500, 0.2500],
...                                [0.5000, 0.3000, 0.3000]],
...                               [[0.4750, 0.4500, 0.2500],
...                                [0.5000, 0.9000, 0.3000],
...                                [0.4500, 0.4000, 0.9000],
...                                [0.4500, 0.4500, 0.7000]]],
...                              [[[0.7000, 0.3000, 0.4500],
...                                [0.4800, 0.2000, 0.3000],
...                                [0.9000, 0.4500, 0.4500],
...                                [0.2000, 0.5000, 0.1000]],
...                               [[0.3750, 0.4500, 0.2500],
...                                [0.9000, 0.8000, 0.7000],
...                                [0.6000, 0.9000, 0.3000],
...                                [0.5500, 0.3500, 0.9000]]]])
>>> equivolume(tet_vertices, pow=4)
tensor([[2.2961e-10],
        [7.7704e-10]])

kaolin.metrics.tetmesh.tetrahedron_volume(tet_vertices)

Compute the volume of tetrahedrons.
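
The example below returns a negative value, which suggests the volume is signed (dependent on vertex ordering). A minimal sketch of the standard formula follows, assuming vertex 0 as the base and one sixth of the determinant of the edge-offset matrix; the sign convention may be flipped relative to the library, although the magnitude agrees with the example.

import torch

def signed_tet_volume(tet_vertices):
    # Edge offsets from vertex 0: rows are v1 - v0, v2 - v0, v3 - v0.
    offsets = tet_vertices[..., 1:, :] - tet_vertices[..., :1, :]   # (B, T, 3, 3)
    # Signed volume is one sixth of the determinant of the offset matrix;
    # the sign flips with the orientation of the vertex ordering.
    return torch.det(offsets) / 6.0                                  # (B, T)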

Parameters

tet_vertices (torch.Tensor) – Batched tetrahedrons, of shape \((\text{batch_size}, \text{num_tetrahedrons}, 4, 3)\).

Returns

Volume of each tetrahedron in each mesh, of shape \((\text{batch_size}, \text{num_tetrahedrons})\).

Return type

(torch.Tensor)

Example

>>> tet_vertices = torch.tensor([[[[0.5000, 0.5000, 0.4500],
...                                [0.4500, 0.5000, 0.5000],
...                                [0.4750, 0.4500, 0.4500],
...                                [0.5000, 0.5000, 0.5000]]]])
>>> tetrahedron_volume(tet_vertices)
tensor([[-2.0833e-05]])