# Iterative Intrinsics + Distortion
Zhang's method assumes distortion-free observations. When applied to images from a camera with significant distortion, the estimated intrinsics are biased because the distorted pixels violate the linear homography model. The iterative intrinsics solver addresses this by alternating between intrinsics estimation and distortion correction.
## Problem Statement
Given: views of a planar calibration board, with observed (distorted) pixel coordinates.
Find: Camera intrinsics and distortion coefficients jointly.
Assumptions:
- The observations are distorted (raw detector output)
- Distortion is moderate enough that Zhang's method on distorted pixels gives a usable initial estimate
- 1-3 iterations of alternating refinement suffice
## Why Alternation Works
The joint estimation of K and distortion is a non-convex problem. However, it decomposes naturally:
- Given K: distortion estimation is a linear problem (see Distortion Fit)
- Given distortion: undistorting the pixels and re-estimating K is a linear problem (Zhang's method)
Each step solves a convex subproblem. The alternation converges because:
- The initial Zhang estimate (ignoring distortion) is typically within the basin where the alternation contracts
- Each step reduces the residual between the model and observations
- The coupling between K and distortion is relatively weak for moderate distortion
## Algorithm
Input: Views with distorted observations
Output: Intrinsics K, distortion coefficients d
1. Compute homographies H_i from the distorted pixels via DLT.
2. Estimate an initial K from {H_i} via Zhang's method.
3. For each of the configured `iterations` passes:
   a. Estimate distortion d from the homography residuals (see Distortion Fit).
   b. Undistort all observed pixels using the current K and d.
   c. Recompute homographies H_i from the undistorted pixels.
   d. Re-estimate K from {H_i} via Zhang's method.
   e. (Optional) Enforce zero skew: set s = 0 in K.
4. Return (K, d).
## Convergence
Typically 1-2 iterations suffice:
- Iteration 0 (Zhang on distorted pixels): 10-40% intrinsics error
- Iteration 1: Distortion estimate corrects the dominant radial effect; intrinsics error drops to 5-20%
- Iteration 2: Further refinement; diminishing returns
More iterations are safe but rarely improve the estimate significantly. The default is 2 iterations.
## Configuration
```rust
pub struct IterativeIntrinsicsOptions {
    pub iterations: usize,                     // 1-3 typical (default: 2)
    pub distortion_opts: DistortionFitOptions, // Controls fix_k3, fix_tangential, iters
    pub zero_skew: bool,                       // Force skew = 0 (default: true)
}
```
## The Undistortion Step
Step 3b converts distorted pixels back to "ideal" pixels by inverting the distortion model:
- Convert the distorted pixel to normalized coordinates (assuming zero skew): x_d = (u_d - c_x) / f_x, y_d = (v_d - c_y) / f_y
- Undistort using the current distortion estimate: (x_u, y_u) = undistort(x_d, y_d) (iterative fixed-point, see Brown-Conrady Distortion)
- Convert back to pixel coordinates: u = f_x * x_u + c_x, v = f_y * y_u + c_y
Note that K is used both for normalization and for converting back: the undistorted pixels are in the same coordinate system as the original distorted pixels, just without the distortion effect.
## Accuracy Expectations
| Stage | Typical intrinsics error |
|---|---|
| Zhang on distorted pixels | 10-40% |
| After 1 iteration | 5-20% |
| After 2 iterations | 5-15% |
| After non-linear refinement | <2% |
The iterative linear estimate is not meant to be highly accurate. Its purpose is to provide a starting point good enough for the non-linear optimizer to converge.
## API
```rust
use vision_calibration::linear::iterative_intrinsics::*;
use vision_calibration::linear::DistortionFitOptions;

let opts = IterativeIntrinsicsOptions {
    iterations: 2,
    distortion_opts: DistortionFitOptions {
        fix_k3: true,
        fix_tangential: false,
        iters: 8,
    },
    zero_skew: true,
};

let camera = estimate_intrinsics_iterative(&dataset, opts)?;
// camera is PinholeCamera = Camera<f64, Pinhole, BrownConrady5, IdentitySensor, FxFyCxCySkew>
// camera.k:    FxFyCxCySkew (intrinsics)
// camera.dist: BrownConrady5 (distortion)
```