NAME

mrcal-triangulate - Triangulate a feature in a pair of images to report a range

SYNOPSIS

  $ mrcal-triangulate
      --range-estimate 870
      left.cameramodel right.cameramodel
      left.jpg right.jpg
      1234 2234

  ## Feature [1234., 2234.] in the left image corresponds to [2917.9 1772.6] at 870.0m
  ## Feature match found at [2916.51891391 1771.86593517]
  ## q1 - q1_perfect_at_range = [-1.43873946 -0.76196924]
  ## Range: 677.699 m (error: -192.301 m)
  ## Reprojection error between intersection and q0: [5.62522473e-10 3.59364094e-09]pixels
  ## Observed-pixel sensitivity: 103.641m/pixel. Worst direction: [1. 0.]. Linearized correction: 1.855 pixels
  ## Calibration yaw sensitivity: -3760.702m/deg. Linearized correction: -0.051 degrees of yaw
  ## Calibration pitch sensitivity: 0.059m/deg.
  ## Calibration translation sensitivity: 319.484m/m. Worst direction: [1 0 0].
  ## Linearized correction: 0.602 meters of translation
  ## Optimized yaw correction   = -0.03983 degrees
  ## Optimized pitch correction = 0.03255 degrees
  ## Optimized relative yaw (1 <- 0): -1.40137 degrees

DESCRIPTION

Given a pair of images, a pair of camera models, and a feature coordinate in the first image, this tool finds the corresponding feature in the second image and reports the range to the observed point. This is similar to the stereo processing of a single pixel, but reports much more diagnostic information than stereo tools do.

This is useful to evaluate ranging results.
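
The underlying geometry is a single two-camera triangulation. Below is a minimal sketch of how such a triangulation could be computed with the mrcal Python API; the filenames and pixel coordinates are illustrative, and mrcal.triangulate_leecivera_mid2() is just one of several triangulation routines mrcal provides, not necessarily the one this tool uses internally:

  import numpy as np
  import mrcal

  # Hypothetical model filenames; any mrcal .cameramodel files work
  model0 = mrcal.cameramodel('left.cameramodel')
  model1 = mrcal.cameramodel('right.cameramodel')

  # Pixel observations of the same feature in the two images
  q0 = np.array((1234., 2234.))
  q1 = np.array((2916.5, 1771.9))

  # Unproject each pixel to a unit observation vector in its own camera frame
  v0 = mrcal.unproject(q0, *model0.intrinsics(), normalize = True)
  v1 = mrcal.unproject(q1, *model1.intrinsics(), normalize = True)

  # The camera-1 pose in camera-0 coordinates
  Rt01 = mrcal.compose_Rt(model0.extrinsics_Rt_fromref(),
                          model1.extrinsics_Rt_toref())

  # Triangulate. p is the feature position in camera-0 coordinates
  p = mrcal.triangulate_leecivera_mid2(v0,
                                       mrcal.rotate_point_R(Rt01[:3,:], v1),
                                       Rt01[3,:])
  print(f"Range: {np.linalg.norm(p):.3f} m")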

OPTIONS

POSITIONAL ARGUMENTS

  models                Camera models for the images. Both intrinsics and
                        extrinsics are used
  images                The images to use for the triangulation
  features              Feature coordinate in the first image optionally
                        followed by the corresponding feature position in the
                        second image. The first 2 arguments are the pixel
                        coordinates of the feature in the first image. If no
                        more arguments are given I seek a matching feature in
                        the second image. If 2 more arguments are given, I use
                        these extra arguments as the corresponding feature
                        coordinates in the second image

OPTIONAL ARGUMENTS

  -h, --help            show this help message and exit
  --make-corrected-model1
                        If given, we assume the --range-estimate is correct,
                        and write to standard output a rotated camera-1 model
                        that produces this range
  --templatesize TEMPLATESIZE TEMPLATESIZE
                        The size of the template used for feature matching.
                        Two arguments are required: width height. This is
                        passed directly to mrcal.match_feature()
  --searchradius SEARCHRADIUS
                        How far the feature-matching routine should search.
                        This should be larger if the range estimate is poor,
                        especially at near ranges. This is passed directly to
                        mrcal.match_feature()
  --range-estimate RANGE_ESTIMATE
                        Initial estimate of the range of the observed feature.
                        This is used for the initial guess in the feature-
                        matching. If omitted, I pick 50m, completely
                        arbitrarily. This initial-guess geometry is sketched
                        after this option list
  --plane-n PLANE_N PLANE_N PLANE_N
                        If given, assume that we're looking at a plane in
                        space, and take this into account when matching the
                        template. The normal vector to this plane is given
                        here, in camera-0 coordinates. The normal does not
                        need to be normalized; any scaling is compensated in
                        plane_d. The plane is all points p such that
                        inner(p, plane_n) = plane_d
  --plane-d PLANE_D     If given, assume that we're looking at a plane in
                        space, and take this into account when matching the
                        template. The distance along the normal to the plane,
                        in from-camera coordinates, is given here. The plane
                        is all points p such that inner(p, plane_n) = plane_d.
                        The ray-plane intersection is sketched after this
                        option list
  --corr-floor CORR_FLOOR
                        This is used to reject mrcal.match_feature() results.
                        The default is 0.9: accept only very good matches. A
                        lower threshold may still result in usable matches,
                        but do interactively check the feature-matcher results
                        by passing --viz
  --viz                 If given, we visualize the feature-matcher results.
                        This produces an interactive gnuplot window with 2
                        overlaid images: the larger image being searched and
                        the transformed template, placed at its best-fitting
                        location. Each individual image can be hidden/shown by
                        clicking on its legend in the top-right of the plot.
                        It's generally most useful to show/hide the template
                        to visually verify the resulting alignment.
  --extratitle EXTRATITLE
                        Extra title string for the --viz plot
  --hardcopy HARDCOPY   Write the --viz output to disk, instead of an
                        interactive plot
  --terminal TERMINAL   The gnuplotlib terminal used in --viz. The default is
                        good almost always, so most people don't need this
                        option
  --set SET             Extra 'set' directives to gnuplotlib for --viz. Can be
                        given multiple times
  --unset UNSET         Extra 'unset' directives to gnuplotlib for --viz. Can
                        be given multiple times
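
The initial guess seeded by --range-estimate amounts to placing the observed feature at the estimated range along its observation ray, then projecting that 3D point into the second camera. A minimal sketch of this computation with the mrcal Python API follows; the filenames and values are illustrative:

  import numpy as np
  import mrcal

  model0 = mrcal.cameramodel('left.cameramodel')   # hypothetical filenames
  model1 = mrcal.cameramodel('right.cameramodel')

  q0             = np.array((1234., 2234.))  # feature pixel in the first image
  range_estimate = 870.                       # --range-estimate, in meters

  # Place the feature at the estimated range along its observation ray
  p_cam0 = range_estimate * mrcal.unproject(q0, *model0.intrinsics(),
                                            normalize = True)

  # Map that point into the second camera to seed the feature search
  Rt10 = mrcal.compose_Rt(model1.extrinsics_Rt_fromref(),
                          model0.extrinsics_Rt_toref())
  q1_estimate = mrcal.project(mrcal.transform_point_Rt(Rt10, p_cam0),
                              *model1.intrinsics())

This is the computation behind the "corresponds to ... at 870.0m" line in the SYNOPSIS above.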
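
The plane selected with --plane-n and --plane-d is easy to evaluate directly: a point k*v0 along an observation ray v0 lies on the plane when inner(k*v0, plane_n) = plane_d, so the ray-plane intersection sits at k = plane_d / inner(v0, plane_n). A tiny sketch with made-up values:

  import numpy as np

  plane_n = np.array((0., -1., 0.))  # hypothetical normal, camera-0 coordinates
  plane_d = 2.                       # hypothetical distance along the normal

  # An observation ray direction in camera-0 coordinates, e.g. from
  # mrcal.unproject(). The ray is the set of points k*v0 for k > 0
  v0 = np.array((0.1, -0.2, 1.))

  k = plane_d / np.inner(v0, plane_n)
  p = k * v0         # the ray-plane intersection, in camera-0 coordinates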

REPOSITORY

https://www.github.com/dkogan/mrcal

AUTHOR

Dima Kogan, <dima@secretsauce.net>

LICENSE AND COPYRIGHT

Copyright (c) 2017-2020 California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0