Dense stereo processing
Overview
Given a pair of calibrated (both intrinsics and extrinsics) cameras, mrcal can perform stereo processing to produce a dense stereo map. This is relatively slow, and is often overkill for what is actually needed. But sometimes it is useful, and the resulting depth images look really nice.
On a high level, mrcal stereo processing is the usual epipolar geometry technique:
- Ingest
- Two camera models, each containing the intrinsics and extrinsics (the relative pose between the two cameras)
- A pair of images captured by these two cameras
- Compute a "rectified" system: a pair of models where each corresponding row of pixels in the two cameras all represent observation rays that lie in the same epipolar plane
- Reproject the images to these rectified models to produce rectified images
- Perform "stereo matching". For each pixel in the left rectified image we try to find the corresponding pixel in the same row of the right rectified image. The difference in columns is written to a disparity image. This is the most computationally-intensive part of the process.
- Convert the disparity image to a range image using the geometry defined by the rectified system
The epipolar constraint (all pixels in the same row in both rectified images represent the same plane in space) allows for one-dimensional stereo matching, which is a massive computational win over the two-dimensional matching that would be required with another formulation.
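To make the one-dimensional search concrete, here is a toy block matcher written directly against this description. This is purely illustrative, and assumes already-rectified single-channel images given as NumPy arrays; the block size and disparity limit are arbitrary, and real matchers (such as the OpenCV SGBM matcher mentioned below) are far more sophisticated.

```python
import numpy as np

def toy_disparity_at(left, right, x, y, block_radius=2, max_disparity=64):
    """Naive 1D block match at interior pixel (x,y) of the left rectified image

    Because the images are rectified, the matching pixel in the right image
    lies in the same row y, at some column x-d with disparity d >= 0. We scan
    candidate disparities and keep the one with the lowest
    sum-of-absolute-differences cost"""
    r = block_radius
    template = left[y-r:y+r+1, x-r:x+r+1].astype(float)

    best_disparity, best_cost = 0, np.inf
    for d in range(max_disparity + 1):
        if x - d - r < 0:
            break   # the candidate block would fall off the left edge
        candidate = right[y-r:y+r+1, x-d-r:x-d+r+1].astype(float)
        cost = np.abs(template - candidate).sum()
        if cost < best_cost:
            best_disparity, best_cost = d, cost
    return best_disparity
```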
The code and documentation refer to two angles in the rectified coordinate system:
- \(\theta\): the "azimuth"; the lateral angle inside the epipolar plane. Related directly to the \(x\) pixel coordinate in the rectified images
- \(\phi\): the "elevation"; the tilt of the epipolar plane. Related directly to the \(y\) pixel coordinate in the rectified images
Rectification models
A rectified system satisfies the epipolar constraint (see above). mrcal supports two models that can have this property, selected with the rectification_model argument to mrcal.rectified_system() or with the --rectification commandline argument to mrcal-stereo.
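For instance, the rectification model could be selected explicitly as in the sketch below. This is only a sketch: model filenames are placeholders, and the field-of-view keyword values are illustrative, not required settings.

```python
import mrcal

# Hypothetical camera model files for the left and right cameras
model0 = mrcal.cameramodel('left.cameramodel')
model1 = mrcal.cameramodel('right.cameramodel')

# Build the rectified system. LENSMODEL_LATLON is the default; it is passed
# explicitly here only to show where the selection happens. The field-of-view
# values are illustrative
models_rectified = \
    mrcal.rectified_system( (model0, model1),
                            az_fov_deg          = 120,
                            el_fov_deg          = 100,
                            rectification_model = 'LENSMODEL_LATLON' )
```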
LENSMODEL_PINHOLE
: this is the traditional rectification model, used in most existing tools. This works decently well for small fields of view (as with a long lens), but fails with large fields of view (as with a wide lens). The issues stem from the uneven angular resolution across the image, which shoots out to \(\infty \frac{\mathrm{pixels}}{\mathrm{deg}}\) as \(\theta \rightarrow \pm 90^\circ\). This produces highly distorted rectified images, which affects stereo matching adversely, since areas of disparate resolution are being compared. This is supported by mrcal purely for compatibility with other tools; there's little reason to use this representation otherwise
LENSMODEL_LATLON
: this is a "transverse equirectangular projection". It is defined with even angle spacing in both directions, so \(x - x_0 = k_x \theta\) and \(y - y_0 = k_y \phi\) where \(x\) and \(y\) are pixel coordinates in the rectified images, \(x_0\) and \(y_0\) are the centers of projection of the rectified system and \(k_x\) and \(k_y\) are the angular resolution in the two directions. This is the recommended rectification model, and is the default in mrcal
Interfaces
Currently stereo processing is available via the mrcal-stereo tool. This tool implements the usual stereo processing for a single frame.
More complex usages are available via Python APIs. A sequence of images captured with a stereo pair can be processed like this (a code sketch follows the list):
- mrcal.rectified_system() to construct the rectified system defined by the stereo pair
- mrcal.rectification_maps() to construct the pixel mappings needed to transform captured images into rectified images. This is relatively slow, but it depends on the relative stereo geometry only, so this can be computed once, and applied to all the subsequent images captured by the stereo pair
- For each pair of captured images
  - mrcal.transform_image() to generate rectified images
  - stereo matching to compute disparities. mrcal does not provide its own method, and the mrcal-stereo tool uses the OpenCV SGBM stereo matcher. Any stereo matcher can be used. The result is a disparity image, where each pixel in the first rectified image is mapped to the column offset of the corresponding feature in the same row of the second rectified image
  - mrcal.stereo_range() to convert the disparities to ranges, which can then be used to produce a point cloud
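A rough sketch of this sequence in code. The filenames are placeholders, and the field-of-view and SGBM settings are illustrative values rather than a definitive recipe.

```python
import cv2
import mrcal

models = ( mrcal.cameramodel('left.cameramodel'),
           mrcal.cameramodel('right.cameramodel') )

# The rectified system and the rectification maps depend on the stereo
# geometry only, so compute them once and reuse them for every image pair
models_rectified   = mrcal.rectified_system(models,
                                            az_fov_deg = 120,
                                            el_fov_deg = 100)
rectification_maps = mrcal.rectification_maps(models, models_rectified)

# Per-image-pair processing
images = ( cv2.imread('left.jpg',  cv2.IMREAD_GRAYSCALE),
           cv2.imread('right.jpg', cv2.IMREAD_GRAYSCALE) )
images_rectified = [ mrcal.transform_image(image, m)
                     for image, m in zip(images, rectification_maps) ]

# Stereo matching. Any matcher can be used; here the OpenCV SGBM matcher,
# which reports disparities in units of 1/16 pixel
matcher = cv2.StereoSGBM_create(minDisparity   = 0,
                                numDisparities = 256,
                                blockSize      = 5)
disparity16 = matcher.compute(*images_rectified)

# Convert the disparities to ranges using the rectified geometry. The
# ranges can then be used to produce a point cloud
ranges = mrcal.stereo_range(disparity16,
                            models_rectified,
                            disparity_scale = 16)
```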
A demo of the process is shown in the tour of mrcal.