Projection uncertainty: mean-frames
A key part of the projection uncertainty propagation method is to compute a
function \(q^+\left(b^+\right)\): the reprojection of a fixed point in space as the
optimization state vector \(b\) moves away from its optimum.
mrcal has several methods of doing this, and the legacy mean-frames method is
described here. It is accessible by calling
mrcal.projection_uncertainty(method = "mean-frames"). This method is simple,
but has some issues that are resolved by newer formulations: starting with mrcal
3.0 the improved cross-reprojection uncertainty method is recommended.
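For reference, a minimal invocation sketch follows. It assumes a model file (here the opencv8.cameramodel used later on this page) that was written by the calibration and so contains the optimization_inputs needed to compute uncertainty; the what= selection is just one of the available options.

import numpy as np
import mrcal

# Load a calibrated model. The uncertainty computation needs the
# optimization_inputs, which mrcal stores in the model at calibration time
model = mrcal.cameramodel('opencv8.cameramodel')

# Query the projection uncertainty at the center of the imager
W,H = model.imagersize()
q   = np.array(((W-1.)/2., (H-1.)/2.))

# 1-sigma uncertainty in pixels, using the legacy mean-frames method
stdev = mrcal.projection_uncertainty(q, model,
                                     method = 'mean-frames',
                                     what   = 'worstdirection-stdev')
print("Uncertainty at the center: %.3f pixels" % stdev)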
Mean-frames uncertainty
The state vector
How do we operate on points in a fixed coordinate system when all the coordinate
systems we have are floating random variables? We use the most fixed thing we
have: chessboards. As with the camera housing, the chessboards themselves are
fixed in space. We have noisy camera observations of the chessboards that
implicitly produce estimates of the fixed transformation
\(T_{\mathrm{cf}} = T_{\mathrm{cr}} T_{\mathrm{rf}}\) between each camera and each observed chessboard.
Thus if we project points from a chessboard frame, we would be unaffected by the untethered reference coordinate system. So points in a chessboard frame are somewhat "fixed" for our purposes.
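Concretely, moving a point into a chessboard frame is just a composition of rigid transforms. Here is a small sketch using mrcal's transform helpers; the pose values are made up for illustration, standing in for a camera extrinsics vector and a chessboard pose from the solve.

import numpy as np
import mrcal

# Made-up rt transforms (Rodrigues rotation + translation, as used by mrcal):
# rt_ref_frame maps chessboard-frame-0 coordinates to reference coordinates;
# rt_cam_ref maps reference coordinates to camera coordinates
rt_ref_frame = np.array((0.01, -0.02, 0.005,  0.1, 0.2, 2.0))
rt_cam_ref   = np.array((0.,    0.,   0.,    -0.1, 0.,  0.))

# A point expressed in the camera coordinate system
p_cam = np.array((0.1, -0.2, 3.0))

# p_frame = T_fr T_rc p_cam
p_ref   = mrcal.transform_point_rt(mrcal.invert_rt(rt_cam_ref),   p_cam)
p_frame = mrcal.transform_point_rt(mrcal.invert_rt(rt_ref_frame), p_ref)

# p_frame is attached to the physical chessboard, so it doesn't care where
# the arbitrary reference coordinate system ends up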
To begin, let's focus on just one chessboard frame: frame 0. We want to know
the uncertainty at a pixel coordinate \(q\), so we unproject \(q\) to a point in the
camera coordinate system (at some fixed range along the observation ray) and
transform it through the reference frame into the frame-0 coordinate system:

\[ p_{\mathrm{frame}_0} = T_{\mathrm{f}_0\mathrm{r}} \, T_{\mathrm{rc}} \, \mathrm{unproject}\left(q\right) \]

We then transform and project \(p_{\mathrm{frame}_0}\) back to the imager, using the
perturbed geometry and intrinsics:

\[ q^+ = \mathrm{project}\left( T^+_{\mathrm{cr}} \, T^+_{\mathrm{rf}_0} \, p_{\mathrm{frame}_0} ;\; \mathrm{intrinsics}^+ \right) \]

This works, but it depends on a single frame: frame 0. Every chessboard pose
estimate is noisy, so using just one of them produces a noisy result; we do
better by using all of the frames, and taking the mean of the transformed points.
So to summarize, to compute the projection uncertainty at a pixel \(q\) we

- Unproject \(q\) and transform to each chessboard coordinate system to obtain \(p_{\mathrm{frame}_i}\) for each frame \(i\)
- Transform and project back to pixel coordinates, using the mean of all the \(p_{\mathrm{frame}_i}\) and taking into account uncertainties

We have

\[ q^+ = \mathrm{project}\left( T^+_{\mathrm{cr}} \, \frac{1}{N_{\mathrm{frames}}} \sum_i T^+_{\mathrm{rf}_i} \, p_{\mathrm{frame}_i} ;\; \mathrm{intrinsics}^+ \right) \]

and the projection uncertainty follows by propagating the state covariance through this relationship:

\[ \mathrm{Var}\left(q\right) = \frac{\partial q^+}{\partial b} \, \mathrm{Var}\left(b\right) \, \frac{\partial q^+}{\partial b}^{\mathrm{T}} \]
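The implementation propagates this relationship analytically through gradients, but the forward computation can be sketched directly. The function below is illustrative only: the argument names, the fixed evaluation range of 1.0, and the way the perturbed quantities are passed in are assumptions for this sketch, not mrcal's internal API.

import numpy as np
import mrcal

def reproject_mean_frames(q,
                          lensmodel,    intrinsics,        # unperturbed intrinsics
                          rt_cam_ref,   rt_ref_frames,     # unperturbed extrinsics, frame poses
                          lensmodel_p,  intrinsics_p,      # perturbed intrinsics
                          rt_cam_ref_p, rt_ref_frames_p):  # perturbed extrinsics, frame poses
    r'''Forward mean-frames computation of q+ for a single pixel q'''

    # Unproject with the unperturbed intrinsics. Evaluate at a nominal range
    # of 1.0 along the observation ray (the real method works at a requested
    # range, or at infinity)
    p_cam = mrcal.unproject(q, lensmodel, intrinsics, normalize = True) * 1.0

    # Express the point in each chessboard frame, using unperturbed geometry
    p_ref    = mrcal.transform_point_rt(mrcal.invert_rt(rt_cam_ref), p_cam)
    p_frames = [ mrcal.transform_point_rt(mrcal.invert_rt(rt_rf), p_ref)
                 for rt_rf in rt_ref_frames ]

    # Map each point back through the PERTURBED frame poses, and average in
    # 3D point space: this is the "mean" in "mean-frames"
    p_ref_p = np.mean([ mrcal.transform_point_rt(rt_rf_p, p_f)
                        for rt_rf_p, p_f in zip(rt_ref_frames_p, p_frames) ],
                      axis = 0)

    # Back to the perturbed camera, and project with the perturbed intrinsics
    p_cam_p = mrcal.transform_point_rt(rt_cam_ref_p, p_ref_p)
    return mrcal.project(p_cam_p, lensmodel_p, intrinsics_p)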
Problems with "mean-frames" uncertainty
This "mean-frames" uncertainty method works well, but has several issues:
Aphysical transform
The computation above indirectly computes the transform that relates the unperturbed and perturbed reference coordinate systems:

\[ T_{\mathrm{r^+ r}} \approx \frac{1}{N_{\mathrm{frames}}} \sum_i T^+_{\mathrm{rf}_i} \, T_{\mathrm{f}_i\mathrm{r}} \]

Each transformation \(T^+_{\mathrm{rf}_i} T_{\mathrm{f}_i\mathrm{r}}\) in this sum is a rigid rotation and translation, but a point-wise mean of rigid transformations is not, in general, itself a rigid transformation. The implied motion of the reference coordinate system is therefore aphysical, which causes the problems described below.
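A quick numerical illustration of this (with made-up rotations, not values from any real solve): averaging even two rigid rotations yields a matrix that shrinks space, which no rigid transform does.

import numpy as np
import mrcal

# Two rigid rotations about the z axis: +0.3 and -0.3 radians
R0 = mrcal.R_from_r(np.array((0., 0.,  0.3)))
R1 = mrcal.R_from_r(np.array((0., 0., -0.3)))

# The point-wise mean of the two transforms applies the mean of the matrices
R_mean = (R0 + R1) / 2.

# A rigid rotation satisfies det(R) = 1 and R R^T = I; the mean does not
print(np.linalg.det(R_mean))    # ~0.91: this "transform" shrinks space
print(R_mean @ R_mean.T)        # not the identity matrix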
Pessimistic response to disparate observed chessboard ranges
Because of this aphysical transform, the mean-frames method produces fictitiously high uncertainties when given a mix of low-range and high-range observations. Far-away chessboard observations don't contain much information, so adding some far-away chessboards to a dataset shouldn't improve the uncertainty very much, but it certainly shouldn't make it any worse. However, with the mean-frames method, far-away observations do make the uncertainty worse. We can clearly see this in the dance study:
analyses/dancing/dance-study.py                   \
    --scan num_far_constant_Nframes_near          \
    --range 2,10                                  \
    --Ncameras 1                                  \
    --Nframes-near 100                            \
    --observed-pixel-uncertainty 2                \
    --ymax 4                                      \
    --uncertainty-at-range-sampled-max 35         \
    --Nscan-samples 4                             \
    --method mean-frames                          \
    opencv8.cameramodel
This is a one-camera calibration computed off 100 chessboard observations at 2m out, with a few observations added at a longer range of 10m. Each curve represents the projection uncertainty at the center of the image, at different distances. The purple curve is the uncertainty with no 10m chessboards at all. As we add observations at 10m, we see the uncertainty get worse.
The issue is the averaging in 3D point space. Observation noise causes the
far-off geometry to move much more than the nearby chessboards, and that far-off
motion then dominates the average. If we use the newer
cross-reprojection--rrp-Jfp
method, this issue goes away:
analyses/dancing/dance-study.py                   \
    --scan num_far_constant_Nframes_near          \
    --range 2,10                                  \
    --Ncameras 1                                  \
    --Nframes-near 100                            \
    --observed-pixel-uncertainty 2                \
    --ymax 4                                      \
    --uncertainty-at-range-sampled-max 35         \
    --Nscan-samples 4                             \
    --method cross-reprojection--rrp-Jfp          \
    opencv8.cameramodel
As expected, the low-range uncertainty is unaffected by the 10m observations, but the far-range uncertainty is improved.
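To see why the far-off geometry dominates the 3D-point-space average described above, consider the same small chessboard-pose error applied to a near point and a far point; the 0.01 rad error below is a made-up value for illustration.

import numpy as np
import mrcal

# A small rotational error in a chessboard-pose estimate
dR = mrcal.R_from_r(np.array((0.01, 0., 0.)))

p_near = np.array((0., 0.,  2.))   # a chessboard point at 2m
p_far  = np.array((0., 0., 10.))   # a chessboard point at 10m

print(np.linalg.norm(dR @ p_near - p_near))   # ~0.02m of motion
print(np.linalg.norm(dR @ p_far  - p_far))    # ~0.10m: this dominates the mean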
Chessboards are a hard requirement
The "mean-frames" method has a hard requirement on chessboards being used in the
solve. In fact, the assumption of stationary cameras observing a moving
chessboard is baked into the formulation, so any other case (moving cameras or
calibrating off discrete points, for instance) is not supported. The newer
cross-reprojection--rrp-Jfp
method lifts this restriction.