kaolin.render.spc¶
API¶
- kaolin.render.spc.cumprod(feats, boundaries, exclusive=False, reverse=False)¶
Cumulative product across packs of features.
This function is similar to tf.math.cumprod() with the same options, but for packed tensors. Refer to the TensorFlow docs for numerical examples of the options.
Note that the backward gradient follows the TensorFlow behaviour of replacing NaNs with zeros, which differs from the PyTorch behaviour. To be safe, add an epsilon to feats to make the behaviour consistent.
- Parameters
feats (torch.FloatTensor) – features of shape \((\text{num_rays}, \text{num_feats})\).
boundaries (torch.BoolTensor) – bools of shape \((\text{num_rays})\). Given some index array marking the pack IDs, the boundaries can be calculated with mark_pack_boundaries().
exclusive (bool) – Compute exclusive cumprod if true. Exclusive means the current index won’t be used in the calculation of the cumulative product. (Default: False)
reverse (bool) – Compute reverse cumprod if true, i.e. the cumulative product will start from the end of each pack, not from the beginning. (Default: False)
- Returns
features of shape \((\text{num_rays}, \text{num_feats})\).
- Return type
(torch.FloatTensor)
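The packed semantics can be illustrated with a minimal NumPy sketch. `packed_cumprod` is an illustrative name for this reference implementation, not the kaolin CUDA kernel:

```python
import numpy as np

def packed_cumprod(feats, boundaries, exclusive=False, reverse=False):
    """Reference semantics of a packed cumulative product (NumPy sketch)."""
    out = np.empty_like(feats)
    # Each True in boundaries starts a new pack; packs are contiguous.
    starts = np.flatnonzero(boundaries)
    ends = np.append(starts[1:], len(feats))
    for s, e in zip(starts, ends):
        pack = feats[s:e]
        if reverse:
            pack = pack[::-1]
        cp = np.cumprod(pack, axis=0)
        if exclusive:
            # Shift right and seed with the multiplicative identity.
            cp = np.concatenate([np.ones_like(cp[:1]), cp[:-1]], axis=0)
        if reverse:
            cp = cp[::-1]
        out[s:e] = cp
    return out

feats = np.array([[2.0], [3.0], [4.0], [5.0]])
boundaries = np.array([True, False, True, False])  # two packs: [2,3] and [4,5]
print(packed_cumprod(feats, boundaries))                  # [[2], [6], [4], [20]]
print(packed_cumprod(feats, boundaries, exclusive=True))  # [[1], [2], [1], [4]]
```

Note how the product never crosses a pack boundary: the second pack restarts at 4 (or at the identity 1 in the exclusive case).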
- kaolin.render.spc.cumsum(feats, boundaries, exclusive=False, reverse=False)¶
Cumulative sum across packs of features.
This function is similar to tf.math.cumsum() with the same options, but for packed tensors. Refer to the TensorFlow docs for numerical examples of the options.
- Parameters
feats (torch.FloatTensor) – features of shape \((\text{num_rays}, \text{num_feats})\).
boundaries (torch.BoolTensor) – bools of shape \((\text{num_rays})\). Given some index array marking the pack IDs, the boundaries can be calculated with mark_pack_boundaries().
exclusive (bool) – Compute exclusive cumsum if true. Exclusive means the current index won’t be used in the calculation of the cumulative sum. (Default: False)
reverse (bool) – Compute reverse cumsum if true, i.e. the cumulative sum will start from the end of each pack, not from the beginning. (Default: False)
- Returns
features of shape \((\text{num_rays}, \text{num_feats})\).
- Return type
(torch.FloatTensor)
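As with cumprod, a short NumPy sketch can make the option semantics concrete; `packed_cumsum` is an illustrative reference, not the kaolin kernel. The exclusive + reverse combination is the one used later for transmittance:

```python
import numpy as np

def packed_cumsum(feats, boundaries, exclusive=False, reverse=False):
    """Reference semantics of a packed cumulative sum (NumPy sketch)."""
    out = np.empty_like(feats)
    starts = np.flatnonzero(boundaries)
    ends = np.append(starts[1:], len(feats))
    for s, e in zip(starts, ends):
        pack = feats[s:e]
        if reverse:
            pack = pack[::-1]
        cs = np.cumsum(pack, axis=0)
        if exclusive:
            # Shift right and seed with the additive identity.
            cs = np.concatenate([np.zeros_like(cs[:1]), cs[:-1]], axis=0)
        if reverse:
            cs = cs[::-1]
        out[s:e] = cs
    return out

feats = np.array([[1.0], [2.0], [3.0]])
boundaries = np.array([True, False, False])  # a single pack
print(packed_cumsum(feats, boundaries))                                # [[1], [3], [6]]
print(packed_cumsum(feats, boundaries, exclusive=True, reverse=True))  # [[5], [3], [0]]
```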
- kaolin.render.spc.diff(feats, boundaries)¶
Find the delta between each of the features in a pack.
The deltas are given by out[i] = feats[i+1] - feats[i].
The behavior is similar to torch.diff() for non-packed tensors, but torch.diff() will reduce the number of features by 1. This function will instead populate the last diff of each pack with 0.
- Parameters
feats (torch.FloatTensor) – features of shape \((\text{num_rays}, \text{num_feats})\)
boundaries (torch.BoolTensor) – bools of shape \((\text{num_rays})\). Given some index array marking the pack IDs, the boundaries can be calculated with mark_pack_boundaries().
- Returns
diffed features of shape \((\text{num_rays}, \text{num_feats})\)
- Return type
(torch.FloatTensor)
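A hedged NumPy sketch of this behaviour (`packed_diff` is an illustrative name): deltas are taken within each pack, and the last slot of each pack is filled with 0 so the output keeps the input shape.

```python
import numpy as np

def packed_diff(feats, boundaries):
    """out[i] = feats[i+1] - feats[i] within a pack; last entry of each pack is 0."""
    out = np.zeros_like(feats)
    starts = np.flatnonzero(boundaries)
    ends = np.append(starts[1:], len(feats))
    for s, e in zip(starts, ends):
        # np.diff drops one row; writing into out[s:e-1] leaves the
        # pack's final row at its zero-initialized value.
        out[s:e - 1] = np.diff(feats[s:e], axis=0)
    return out

feats = np.array([[1.0], [4.0], [9.0], [2.0], [7.0]])
boundaries = np.array([True, False, False, True, False])
print(packed_diff(feats, boundaries))  # [[3], [5], [0], [5], [0]]
```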
- kaolin.render.spc.exponential_integration(feats, tau, boundaries, exclusive=True)¶
Exponential transmittance integration across packs using the optical thickness (tau).
Exponential transmittance is derived from the Beer-Lambert law. Typical implementations calculate exponential transmittance with cumprod(), but the exponential allows a reformulation as a cumsum(), whose gradient is more stable and faster to compute. We opt to use the cumsum() formulation.
For more details, we recommend “Monte Carlo Methods for Volumetric Light Transport” by Novak et al.
- Parameters
feats (torch.FloatTensor) – features of shape \((\text{num_rays}, \text{num_feats})\).
tau (torch.FloatTensor) – optical thickness of shape \((\text{num_rays}, 1)\).
boundaries (torch.BoolTensor) – bools of shape \((\text{num_rays})\). Given some index array marking the pack IDs, the boundaries can be calculated with mark_pack_boundaries().
exclusive (bool) – Compute exclusive exponential integration if true. (Default: True)
- Returns
(torch.FloatTensor, torch.FloatTensor)
- Integrated features of shape \((\text{num_packs}, \text{num_feats})\).
- Transmittance of shape \((\text{num_rays}, 1)\).
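The cumsum() reformulation described above can be sketched in NumPy. This is a reference of the math, not the kaolin kernel: per sample, the transmittance is the exponential of an exclusive cumsum of the optical thickness, the absorption is \(1 - e^{-\tau}\), and their product gives the integration weight.

```python
import numpy as np

def exp_integration(feats, tau, boundaries):
    """Volume-rendering weights from optical thickness via an exclusive cumsum (sketch)."""
    out_feats = []
    starts = np.flatnonzero(boundaries)
    ends = np.append(starts[1:], len(feats))
    weights = np.empty_like(tau)
    for s, e in zip(starts, ends):
        t = tau[s:e, 0]
        # Transmittance up to (not including) each sample: exp(-exclusive_cumsum(tau)).
        T = np.exp(-np.concatenate([[0.0], np.cumsum(t)[:-1]]))
        alpha = 1.0 - np.exp(-t)      # absorption at each sample
        w = (T * alpha)[:, None]      # per-sample integration weight
        weights[s:e] = w
        out_feats.append((w * feats[s:e]).sum(axis=0))
    return np.stack(out_feats), weights

feats = np.array([[1.0], [2.0]])
tau = np.array([[0.5], [0.5]])
boundaries = np.array([True, False])
integrated, weights = exp_integration(feats, tau, boundaries)
```

A useful sanity check: the weights within a pack telescope, so their sum is \(1 - e^{-\sum_i \tau_i}\).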
- kaolin.render.spc.mark_first_hit(ridx)¶
Mark the first hit in the nuggets.
Deprecated since version 0.10.0: This function is deprecated. Use mark_pack_boundaries().
The nuggets are a packed tensor containing correspondences from ray index to point index, sorted within each ray pack by depth. This will mark true for each first hit (by depth) for a pack of nuggets.
- Returns
the boolean mask marking the first hit by depth.
- Return type
first_hits (torch.BoolTensor)
- kaolin.render.spc.mark_pack_boundaries(pack_ids)¶
Mark the boundaries of pack IDs.
Pack IDs are sorted tensors which mark the ID of the pack each element belongs in.
For example, the SPC ray trace kernel will return the ray index tensor which marks the ID of the ray that each intersection belongs in. This kernel will mark the beginning of each of those packs of intersections with a boolean mask (true where the beginning is).
- Parameters
pack_ids (torch.Tensor) – pack ids of shape \((\text{num_elems})\). This can be any integral (n-bit integer) type.
- Returns
the boolean mask marking the boundaries.
- Return type
first_hits (torch.BoolTensor)
Examples
>>> pack_ids = torch.IntTensor([1,1,1,1,2,2,2]).to('cuda:0')
>>> mark_pack_boundaries(pack_ids)
tensor([ True, False, False, False,  True, False, False], device='cuda:0')
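The logic behind the kernel is simple to express on the CPU; a hedged NumPy sketch (`mark_boundaries` is an illustrative name): the first element is always a boundary, and every position where the pack ID changes starts a new pack.

```python
import numpy as np

def mark_boundaries(pack_ids):
    """First element is a boundary; afterwards, a boundary wherever the ID changes."""
    pack_ids = np.asarray(pack_ids)
    out = np.empty(len(pack_ids), dtype=bool)
    out[0] = True
    out[1:] = pack_ids[1:] != pack_ids[:-1]
    return out

print(mark_boundaries([1, 1, 1, 1, 2, 2, 2]))
# [ True False False False  True False False]  -- matches the example above
```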
- kaolin.render.spc.sum_reduce(feats, boundaries)¶
Sum the features of packs.
- Parameters
feats (torch.FloatTensor) – features of shape \((\text{num_rays}, \text{num_feats})\).
boundaries (torch.BoolTensor) – bools to mark pack boundaries of shape \((\text{num_rays})\). Given some index array marking the pack IDs, the boundaries can be calculated with mark_pack_boundaries().
- Returns
summed features of shape \((\text{num_packs}, \text{num_feats})\).
- Return type
(torch.FloatTensor)
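This is a segment sum keyed by the boundary mask; a NumPy sketch of the semantics (`sum_reduce_np` is an illustrative name, not the kaolin kernel). A cumulative sum of the boundary mask maps each element to its pack index, after which the rows can be scatter-added.

```python
import numpy as np

def sum_reduce_np(feats, boundaries):
    """Sum each pack of rows into a single row (segment sum)."""
    pack_idx = np.cumsum(boundaries) - 1      # element -> pack index (0-based)
    num_packs = pack_idx[-1] + 1
    out = np.zeros((num_packs, feats.shape[1]), dtype=feats.dtype)
    np.add.at(out, pack_idx, feats)           # unbuffered scatter-add of rows
    return out

feats = np.array([[1.0], [2.0], [3.0], [4.0]])
boundaries = np.array([True, False, True, False])  # packs [1,2] and [3,4]
print(sum_reduce_np(feats, boundaries))  # [[3.], [7.]]
```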
- kaolin.render.spc.unbatched_raytrace(octree, point_hierarchy, pyramid, exsum, origin, direction, level, return_depth=True, with_exit=False)¶
Apply ray tracing over an unbatched SPC structure.
The SPC model is always normalized between -1 and 1 on each axis.
- Parameters
octree (torch.ByteTensor) – the octree structure, of shape \((\text{num_bytes})\).
point_hierarchy (torch.ShortTensor) – the point hierarchy associated to the octree, of shape \((\text{num_points}, 3)\).
pyramid (torch.IntTensor) – the pyramid associated to the octree, of shape \((2, \text{max_level} + 2)\).
exsum (torch.IntTensor) – the prefix sum associated to the octree, of shape \((\text{num_bytes} + \text{batch_size})\).
origin (torch.FloatTensor) – the origins of the rays, of shape \((\text{num_rays}, 3)\).
direction (torch.FloatTensor) – the directions of the rays, of shape \((\text{num_rays}, 3)\).
level (int) – level to use from the octree.
return_depth (bool) – return the depth of each voxel intersection. (Default: True)
with_exit (bool) – return also the exit intersection depth. (Default: False)
- Returns
Ray index of intersections sorted by depth of shape \((\text{num_intersection})\)
Point hierarchy index of intersections sorted by depth of shape \((\text{num_intersection})\). These indices are IntTensors, but they can be used for indexing with torch.index_select().
If return_depth is true: Float tensor of shape \((\text{num_intersection}, 1)\) of entry depths to each AABB intersection. When with_exit is set, returns shape \((\text{num_intersection}, 2)\) of entry and exit depths.
- Return type
(torch.IntTensor, torch.IntTensor, (optional) torch.FloatTensor)