Dense SLAM via Point-based Fusion

Different perspectives of the RGB point cloud produced by the dense SLAM system

For my final Robot Localization and Mapping course project, I was tasked with implementing a dense Simultaneous Localization and Mapping (SLAM) system that leverages Iterative Closest Point (ICP) and point-based fusion to reconstruct a dense RGB 3D point cloud from an RGBD video feed. ICP estimates the pose transformation from one frame to the next: its loss function uses projective point correspondences to determine the magnitude and direction of each update, which allowed convergence (in this particular scenario) within 10 iterations.

With these estimated poses, point-based fusion then merges each frame's points into the existing map, represented as a weighted point cloud. Projective correspondence is again used to decide whether an incoming point should be merged with an existing map point or inserted as a new one. Combined, these two techniques allow full reconstruction of the 3D environment from a single RGBD feed, which we know, from my D3VO Implementation project, can be provided by a single monocular camera!
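To make the ICP step concrete, here is a minimal NumPy sketch of one projective point-to-plane ICP solve. It assumes vertex and normal maps rendered from the previous frame, a 3x3 pinhole intrinsic matrix `K`, and an initial pose guess; all names (`projective_icp`, `vertex_map`, and so on) are illustrative, not the names from my actual implementation.

```python
# A sketch of projective point-to-plane ICP, not a definitive implementation.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix, so skew(w) @ u == np.cross(w, u)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def projective_icp(src_pts, vertex_map, normal_map, K, T_init,
                   n_iters=10, dist_thresh=0.05):
    """Align src_pts (N, 3) to the previous frame's vertex_map /
    normal_map (H, W, 3) and return the refined 4x4 pose."""
    H, W, _ = vertex_map.shape
    T = T_init.copy()
    for _ in range(n_iters):
        # Transform source points into the target camera frame.
        p = src_pts @ T[:3, :3].T + T[:3, 3]
        # Projective data association: project each point into the image
        # and take the stored vertex/normal at that pixel as its match.
        uvw = p @ K.T
        z = np.clip(uvw[:, 2], 1e-9, None)          # guard divide-by-zero
        u = np.round(uvw[:, 0] / z).astype(int)
        v = np.round(uvw[:, 1] / z).astype(int)
        ok = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        pk = p[ok]
        q = vertex_map[v[ok], u[ok]]                # matched model points
        n = normal_map[v[ok], u[ok]]                # matched model normals
        # Prune outlier pairs that are too far apart.
        keep = np.linalg.norm(pk - q, axis=1) < dist_thresh
        pk, q, n = pk[keep], q[keep], n[keep]
        # Linearized point-to-plane residual n . (Rp + t - q), with the
        # small-angle rotation R ~ I + skew(w), gives one row [p x n, n]
        # per correspondence and right-hand side n . (q - p).
        A = np.hstack([np.cross(pk, n), n])         # (M, 6)
        b = np.einsum('ij,ij->i', n, q - pk)        # (M,)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Left-multiply the small-angle update onto the pose estimate.
        dT = np.eye(4)
        dT[:3, :3] = np.eye(3) + skew(x[:3])
        dT[:3, 3] = x[3:]
        T = dT @ T
    return T
```

The point-to-plane linearization reduces each iteration to a six-variable least-squares solve, which is why only a handful of iterations are needed when the inter-frame motion is small.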
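The fusion step can be sketched similarly. The version below, again an illustration under assumed names and placeholder thresholds rather than my exact code, keeps the map as flat arrays of world-frame positions, normals, float colors, and confidence weights. It renders an index map for projective association, merges matched points with a confidence-weighted average, and appends unmatched points as new map entries.

```python
# A sketch of the point-based fusion update; frame_* arrays hold the new
# RGBD frame in camera coordinates, with pose T_wc from the ICP step above.
import numpy as np

def fuse_frame(map_pts, map_nrm, map_rgb, map_w,
               frame_pts, frame_nrm, frame_rgb,
               K, T_wc, img_shape,
               dist_thresh=0.01, cos_thresh=np.cos(np.deg2rad(20.0))):
    """Merge one frame into the weighted point-cloud map (merged points
    are updated in place; grown copies of the arrays are returned)."""
    H, W = img_shape
    # Render an index map: project every map point into the current view
    # and keep, per pixel, the index of the nearest map point.
    T_cw = np.linalg.inv(T_wc)
    cam = map_pts @ T_cw[:3, :3].T + T_cw[:3, 3]
    uvw = cam @ K.T
    z = np.clip(uvw[:, 2], 1e-9, None)
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    ok = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    index_map = -np.ones((H, W), dtype=int)
    depth_map = np.full((H, W), np.inf)
    for i in np.flatnonzero(ok):                    # simple z-buffer
        if cam[i, 2] < depth_map[v[i], u[i]]:
            depth_map[v[i], u[i]] = cam[i, 2]
            index_map[v[i], u[i]] = i

    # Express the incoming frame in world coordinates and project it
    # (camera-frame points with positive depth are assumed).
    w_pts = frame_pts @ T_wc[:3, :3].T + T_wc[:3, 3]
    w_nrm = frame_nrm @ T_wc[:3, :3].T
    fz = np.clip(frame_pts[:, 2], 1e-9, None)
    fuvw = frame_pts @ K.T
    fu = np.round(fuvw[:, 0] / fz).astype(int)
    fv = np.round(fuvw[:, 1] / fz).astype(int)

    add_p, add_n, add_c = [], [], []
    for k in range(len(frame_pts)):
        j = -1
        if 0 <= fu[k] < W and 0 <= fv[k] < H:
            j = index_map[fv[k], fu[k]]
        merge = (j >= 0
                 and np.linalg.norm(w_pts[k] - map_pts[j]) < dist_thresh
                 and np.dot(w_nrm[k], map_nrm[j]) > cos_thresh)
        if merge:
            # Confidence-weighted running average, then bump the weight.
            w = map_w[j]
            map_pts[j] = (w * map_pts[j] + w_pts[k]) / (w + 1)
            nrm = (w * map_nrm[j] + w_nrm[k]) / (w + 1)
            map_nrm[j] = nrm / np.linalg.norm(nrm)
            map_rgb[j] = (w * map_rgb[j] + frame_rgb[k]) / (w + 1)
            map_w[j] = w + 1
        else:
            # No valid correspondence: insert a brand-new map point.
            add_p.append(w_pts[k])
            add_n.append(w_nrm[k])
            add_c.append(frame_rgb[k])
    if add_p:
        map_pts = np.vstack([map_pts, np.asarray(add_p)])
        map_nrm = np.vstack([map_nrm, np.asarray(add_n)])
        map_rgb = np.vstack([map_rgb, np.asarray(add_c)])
        map_w = np.concatenate([map_w, np.ones(len(add_p))])
    return map_pts, map_nrm, map_rgb, map_w
```

Keeping a per-point weight is what lets repeated observations of the same surface average down sensor noise instead of piling up duplicate points.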