Inverse Compositional Object Tracker
For my final Computer Vision course project, I was tasked with implementing several optical flow-based object tracking algorithms. One such algorithm, known as Matthews-Baker Inverse Compositional Alignment, attempts to align a template image (the sub-image within the bounding box of the current frame) to a target image (the next frame in the video). Lucas-Kanade Forward Additive Alignment warps the target image back onto the template and iteratively minimizes the difference between the two; the Matthews-Baker technique instead computes the incremental warp on the template image and composes its inverse with the current warp of the target. Doing so allows the Jacobian and Hessian of the template image to be precomputed once rather than recomputed at every iteration, reducing the per-iteration cost of the algorithm. The use of an affine transformation, as opposed to a pure 2D translation, allows the bounding box to change shape and size as the target object approaches or recedes from the camera.
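For concreteness, here is a minimal sketch of what that inverse compositional update can look like in Python with NumPy and SciPy. The function name `inverse_compositional`, the `rect` bounding-box convention, and the iteration and convergence parameters are illustrative choices of mine, not the exact interface from the project code.

```python
# Minimal sketch of Matthews-Baker inverse compositional alignment with an
# affine warp, assuming grayscale float images and a rectangular template
# region rect = (x1, y1, x2, y2) in the current frame.
import numpy as np
from scipy.ndimage import map_coordinates


def inverse_compositional(template, frame, rect, num_iters=100, eps=1e-3):
    x1, y1, x2, y2 = rect
    xs, ys = np.meshgrid(np.arange(x1, x2 + 1), np.arange(y1, y2 + 1))
    coords = np.stack([ys.ravel(), xs.ravel()])        # (row, col) sampling order

    # Template intensities and gradients, sampled once (precomputation step).
    T = map_coordinates(template, coords, order=1)
    Ty, Tx = np.gradient(template)                     # gradients along y then x
    Tx = map_coordinates(Tx, coords, order=1)
    Ty = map_coordinates(Ty, coords, order=1)

    # Steepest-descent images grad(T) * dW/dp and the Hessian, also precomputed.
    # Affine warp W(x; p) = [[1+p0, p2, p4], [p1, 1+p3, p5]] [x, y, 1]^T,
    # so dW/dp = [[x, 0, y, 0, 1, 0], [0, x, 0, y, 0, 1]].
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    sd = np.stack([Tx * x, Ty * x, Tx * y, Ty * y, Tx, Ty], axis=1)  # (N, 6)
    H = sd.T @ sd                                                    # (6, 6)

    M = np.eye(3)                                      # current warp, homogeneous form
    for _ in range(num_iters):
        # Warp the target frame into the template's coordinate frame.
        warped_x = M[0, 0] * x + M[0, 1] * y + M[0, 2]
        warped_y = M[1, 0] * x + M[1, 1] * y + M[1, 2]
        I = map_coordinates(frame, np.stack([warped_y, warped_x]), order=1)

        # Parameter update from the error image I(W(x; p)) - T(x).
        dp = np.linalg.solve(H, sd.T @ (I - T))

        # Inverse composition: W(x; p) <- W(x; p) o W(x; dp)^(-1).
        dM = np.array([[1 + dp[0], dp[2], dp[4]],
                       [dp[1], 1 + dp[3], dp[5]],
                       [0.0, 0.0, 1.0]])
        M = M @ np.linalg.inv(dM)

        if np.linalg.norm(dp) < eps:
            break
    return M
```

The key point the sketch illustrates is that everything involving the template (its gradients, the steepest-descent images, and the Hessian) sits outside the loop, while each iteration only re-warps the target frame, forms the error image, solves a small 6x6 linear system, and composes the inverted incremental warp with the current one.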