mrcal-triangulate - Triangulate a feature in a pair of images to report a range
$ mrcal-triangulate
    --range-estimate 870
    left.cameramodel right.cameramodel
    left.jpg right.jpg
    1234 2234
## Feature [1234., 2234.] in the left image corresponds to [2917.9 1772.6] at 870.0m
## Feature match found at [2916.51891391 1771.86593517]
## q1 - q1_perfect_at_range = [-1.43873946 -0.76196924]
## Range: 677.699 m (error: -192.301 m)
## Reprojection error between triangulated point and q0: [5.62522473e-10 3.59364094e-09]pixels
## Observed-pixel sensitivity: 103.641m/pixel (q1). Worst direction: [1. 0.]. Linearized correction: 1.855 pixels
## Calibration yaw sensitivity: -3760.702m/deg. Linearized correction: -0.051 degrees of yaw
## Calibration pitch sensitivity: 0.059m/deg.
## Calibration translation sensitivity: 319.484m/m. Worst direction: [1 0 0].
## Linearized correction: 0.602 meters of translation
## Optimized yaw correction = -0.03983 degrees
## Optimized pitch correction = 0.03255 degrees
## Optimized relative yaw (1 <- 0): -1.40137 degrees
Given a pair of images, a pair of camera models and a feature coordinate in the first image, this tool finds the corresponding feature in the second image and reports the range to the observed point. This is similar to the stereo processing of a single pixel, but reports much more diagnostic information than stereo tools do.
This is useful for evaluating ranging results.
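The same computation is available from the mrcal Python API. Here is a minimal sketch of that path, assuming the mrcal.cameramodel and mrcal.triangulate() interfaces from the mrcal Python API reference, and taking the matched pixel from the sample output above instead of running mrcal.match_feature():

    import numpy as np
    import mrcal

    # Both the intrinsics and the extrinsics of each model are used
    models = ( mrcal.cameramodel('left.cameramodel'),
               mrcal.cameramodel('right.cameramodel') )

    # Pixel observations of the feature in the two cameras. mrcal-triangulate
    # finds q1 itself by template-matching; here q1 is taken from the
    # "Feature match found at ..." line in the sample output above
    q = np.array(( (1234.  , 2234.  ),      # q0: feature in the left image
                   (2916.52, 1771.87) ))    # q1: matched feature in the right image

    # Triangulated point, in camera-0 coordinates. The Lee-Civera Mid2 method
    # is used by default
    p = mrcal.triangulate(q, models)

    # The reported range is the distance from camera 0 to this point
    print(f"Range: {np.linalg.norm(p):.3f} m")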
models Camera models for the images. Both intrinsics and
extrinsics are used
images_and_features The images and/or feature pixel coordinates to use for
the triangulation. This is either IMAGE0 IMAGE1
FEATURE0X FEATURE0Y or FEATURE0X FEATURE0Y FEATURE1X
FEATURE1Y. If images are given, the given pixel is a
feature in image0, and we search for the corresponding
feature in image1.
-h, --help show this help message and exit
--make-corrected-model1
If given, we assume the --range-estimate is correct,
and write to standard output a rotated camera1 model
that produces this range
--template-size TEMPLATE_SIZE TEMPLATE_SIZE
The size of the template used for feature matching, in
pixel coordinates of the second image. Two arguments
are required: width height. This is passed directly to
mrcal.match_feature(). We default to 13x13
--search-radius SEARCH_RADIUS
How far the feature-matching routine should search, in
pixel coordinates of the second image. This should be
larger if the range estimate is poor, especially at
near ranges. This is passed directly to
mrcal.match_feature(). We default to 20 pixels
--range-estimate RANGE_ESTIMATE
Initial estimate of the range of the observed feature.
This is used for the initial guess in the feature-
matching. If omitted, I pick 50m, completely
arbitrarily
--plane-n PLANE_N PLANE_N PLANE_N
If given, assume that we're looking at a plane in
space, and take this into account when matching the
template. The normal vector to this plane is given
here, in camera-0 coordinates. The normal does not
need to be normalized; any scaling is compensated by
plane_d. The plane is all points p such that
inner(p, plane_n) = plane_d
--plane-d PLANE_D If given, assume that we're looking at a plane in
space, and take this into account when matching the
template. The distance along the normal to the plane,
in camera-0 coordinates, is given here. The plane is
all points p such that inner(p, plane_n) = plane_d. A
sketch after this options list spells out this
convention
--q-calibration-stdev Q_CALIBRATION_STDEV
The uncertainty of the point observations at
calibration time. If given, we report the
analytically-computed effects of this noise. If a
value <0 is passed in, we infer the calibration-time
noise from the optimal calibration-time residuals
--q-observation-stdev Q_OBSERVATION_STDEV
The uncertainty of the point observations at
observation time. If given, we report the
analytically-computed effects of this noise.
--q-observation-stdev-correlation Q_OBSERVATION_STDEV_CORRELATION
By default, the noise in the observation-time pixel
observations is assumed independent. This isn't
entirely realistic: observations of the same feature
in multiple cameras originate from an image
correlation operation, so they will have some amount
of correlation. If given, this argument specifies how
much correlation. This is a value in [0,1] scaling the
stdev. 0 means "independent" (the default). 1.0 means
"100% correlated".
--stabilize-coords If propagating calibration-time noise
(--q-calibration-stdev != 0), we report the
uncertainty ellipse in a stabilized coordinate system,
compensating for the camera-0 coord system motion
--method {geometric,lindstrom,leecivera-l1,leecivera-linf,leecivera-mid2,leecivera-wmid2}
The triangulation method. By default we use the "Mid2"
method from the paper by Lee and Civera
--corr-floor CORR_FLOOR
This is used to reject mrcal.match_feature() results.
The default is 0.9: accept only very good matches. A
lower threshold may still result in usable matches,
but do interactively check the feature-matcher results
by passing "--viz match"
--viz {match,uncertainty}
If given, we visualize either the feature-matcher
results ("--viz match") or the uncertainty ellipse(s)
("--viz uncertainty"). By default, this produces an
interactive gnuplot window. The feature-match
visualization shows 2 overlaid images: the larger
image being searched and the transformed template,
placed at its best-fitting location. Each individual
image can be hidden/shown by clicking on its legend in
the top-right of the plot. It's generally most useful
to show/hide the template to visually verify the
resulting alignment.
--title TITLE Title string for the --viz plot. Overrides the default
title. Exclusive with --extratitle
--extratitle EXTRATITLE
Additional string for the --viz plot to append to the
default title. Exclusive with --title
--hardcopy HARDCOPY Write the --viz output to disk, instead of an
interactive plot
--terminal TERMINAL The gnuplotlib terminal used in --viz. The default is
good almost always, so most people don't need this
option
--set SET Extra 'set' directives to gnuplotlib for --viz. Can be
given multiple times
--unset UNSET Extra 'unset' directives to gnuplotlib for --viz. Can be
given multiple times
--clahe If given, apply CLAHE equalization to the images prior
to the matching
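The plane convention used by --plane-n and --plane-d can be spelled out directly. The following is a minimal sketch only, not what mrcal-triangulate runs internally (the tool takes the plane into account when matching the template); the plane values are hypothetical, and mrcal.unproject() is assumed to behave as in the mrcal Python API reference:

    import numpy as np
    import mrcal

    model0 = mrcal.cameramodel('left.cameramodel')

    # Hypothetical plane, in camera-0 coordinates: all points p such that
    # inner(p, plane_n) = plane_d. The normal needn't be unit-length; scaling
    # it simply rescales plane_d
    plane_n = np.array(( 0., -1., 1. ))
    plane_d = 10.

    # Observation ray of the feature, in camera-0 coordinates
    q0 = np.array(( 1234., 2234. ))
    v0 = mrcal.unproject(q0, *model0.intrinsics(), normalize = True)

    # The ray is p = t*v0 with t >= 0. Substituting into inner(p, plane_n) =
    # plane_d gives t = plane_d / inner(v0, plane_n)
    t = plane_d / np.inner(v0, plane_n)
    p = t * v0
    print(f"Range along the ray to the plane: {np.linalg.norm(p):.2f} m")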
https://www.github.com/dkogan/mrcal
Dima Kogan, <dima@secretsauce.net>
Copyright (c) 2017-2021 California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0