Two-View Stereo 3D Reconstruction

Overview
This project implements a two-view stereo reconstruction system that converts pairs of 2D images into 3D reconstructions of a scene, and aggregates results from multiple view pairs for fuller coverage.
Description
A computer vision project that implements a complete two-view stereo algorithm pipeline to reconstruct 3D point clouds from pairs of 2D images. This project demonstrates fundamental stereo vision techniques including image rectification, disparity estimation using multiple matching kernels, depth computation, and point cloud generation with post-processing.
Image Rectification computes the transformation between two camera coordinate systems, generates rectification rotation matrices to align epipolar lines, and applies perspective warping to create rectified image pairs with parallel epipolar lines.
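The core of this step is choosing a rotation whose new x-axis lies along the baseline between the two cameras. A minimal NumPy sketch of that computation (function names and the world-to-camera extrinsics convention are assumptions for illustration, not taken from the project):

```python
import numpy as np

def rectification_rotation(R_i, t_i, R_j, t_j):
    """Rotation (applied to camera i) whose new x-axis lies along the
    baseline, so epipolar lines become horizontal after warping.
    R_*, t_* are world-to-camera extrinsics (assumed convention)."""
    # Camera centers in world coordinates: c = -R^T t
    c_i = -R_i.T @ t_i
    c_j = -R_j.T @ t_j
    # Baseline direction expressed in camera-i coordinates
    b = R_i @ (c_j - c_i)
    e1 = b / np.linalg.norm(b)            # new x-axis: along the baseline
    e2 = np.cross([0.0, 0.0, 1.0], e1)    # new y-axis: orthogonal to old z
    e2 /= np.linalg.norm(e2)
    e3 = np.cross(e1, e2)                 # new z-axis completes the frame
    return np.stack([e1, e2, e3])         # rows are the new camera axes
```

Applying this rotation to both cameras, followed by a perspective warp through the shared intrinsics, yields the rectified pair with parallel epipolar lines.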
Disparity Map Computation extracts image patches using configurable patch sizes and implements three matching kernels for correspondence search: SSD (Sum of Squared Differences) which measures squared pixel differences, SAD (Sum of Absolute Differences) which measures absolute pixel differences, and ZNCC (Zero-mean Normalized Cross-Correlation) which uses normalized correlation for illumination-invariant matching. The system also performs left-right consistency checking to filter unreliable matches.
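The three kernels can be sketched as patch-scoring functions (a minimal NumPy sketch under the usual definitions; names are illustrative, not the project's API):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences; lower means a better match."""
    return np.sum((a - b) ** 2)

def sad(a, b):
    """Sum of absolute differences; lower means a better match."""
    return np.sum(np.abs(a - b))

def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation in [-1, 1]; higher is better.
    Subtracting the mean and dividing by the norm cancels affine
    illumination changes (gain and offset) between the two patches."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0) + eps)
```

SSD and SAD are cheap but sensitive to brightness differences between views; ZNCC costs more per patch but is the illumination-invariant choice.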
Depth Estimation converts disparity maps to depth maps using camera geometry, computes baseline and focal length relationships, and back-projects pixels to 3D camera coordinates.
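For rectified cameras the relationship is z = f * B / d, and back-projection inverts the pinhole model. A small sketch (assumed pinhole intrinsics K; names are illustrative):

```python
import numpy as np

def disparity_to_depth(disp, focal, baseline, eps=1e-6):
    """Depth from disparity via z = f * B / d (rectified cameras);
    eps guards against division by zero-disparity pixels."""
    return focal * baseline / np.maximum(disp, eps)

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with known depth to 3D camera coordinates
    using pinhole intrinsics K."""
    u, v, depth = np.asarray(u, float), np.asarray(v, float), np.asarray(depth, float)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, np.broadcast_to(depth, x.shape)], axis=-1)
```

The same two functions apply per-pixel over the whole disparity map, since the operations broadcast over arrays of coordinates.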
Point Cloud Reconstruction transforms 3D points from camera coordinates to world coordinates and applies post-processing filters including HSV-based background removal, depth range filtering with z-near and z-far constraints, and statistical outlier removal. This generates colored point clouds for visualization.
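The world transform and two of the filters can be sketched as follows (a brute-force NumPy sketch; the HSV background mask is omitted, the world-to-camera extrinsics convention and all names are assumptions):

```python
import numpy as np

def cam_to_world(P_cam, R, t):
    """Map camera-frame points (rows) to world frame; with world-to-camera
    extrinsics R, t (assumed convention), P_w = R^T (P_c - t)."""
    return (P_cam - t) @ R  # row form of R.T @ (p - t)

def depth_filter(pts, z_near, z_far):
    """Keep camera-frame points whose depth lies in [z_near, z_far]."""
    z = pts[:, 2]
    return (z >= z_near) & (z <= z_far)

def remove_statistical_outliers(pts, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is more
    than std_ratio standard deviations above the global mean (brute-force
    O(n^2) distances; real code would use a k-d tree)."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (dist 0)
    return knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
```

Depth filtering runs in the camera frame before the world transform, since z-near/z-far are defined along the camera's viewing axis.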
Multi-View Aggregation combines point clouds from multiple view pairs, enabling comprehensive 3D scene reconstruction from multiple viewpoints.
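Once each pair's cloud is in a shared world frame, aggregation reduces to concatenation (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def aggregate_point_clouds(clouds, colors):
    """Merge per-pair point clouds (already in a common world frame) into
    one cloud; an optional final outlier-removal pass can follow."""
    return np.concatenate(clouds, axis=0), np.concatenate(colors, axis=0)
```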