# Laser Plane Initialization
In laser triangulation, a laser line is projected onto the scene and a camera observes the line. The laser plane — defined by a normal vector and distance from the camera origin — must be calibrated. This chapter describes the linear initialization of the laser plane from observations of the laser line on a calibration target.
## Problem Statement
Given:
- Camera intrinsics and distortion
- Per-view camera poses (target to camera)
- Per-view laser pixel observations (pixels where the laser line intersects the target)
Find: Laser plane $(\mathbf{n}, d)$ in the camera frame, where $\mathbf{n}$ is the unit normal and $d$ is the signed distance from the camera origin:

$$\mathbf{n} \cdot \mathbf{p} + d = 0$$

for all 3D points $\mathbf{p}$ on the laser plane (in camera coordinates).
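The plane equation above can be expressed as a signed point-to-plane residual, which is zero for points on the plane and measures signed distance off it (since $\mathbf{n}$ is unit-norm). A minimal sketch, with an illustrative helper name not taken from the library:

```rust
/// Signed residual n . p + d of a point against a plane (n, d).
/// Zero for points on the plane; the sign follows the normal direction.
/// (Illustrative helper, not the crate's API.)
fn plane_residual(n: [f64; 3], d: f64, p: [f64; 3]) -> f64 {
    n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d
}

fn main() {
    // Plane z = 2 in camera coordinates: n = (0, 0, 1), d = -2.
    let (n, d) = ([0.0, 0.0, 1.0], -2.0);
    assert_eq!(plane_residual(n, d, [7.0, -3.0, 2.0]), 0.0); // on the plane
    assert_eq!(plane_residual(n, d, [0.0, 0.0, 3.0]), 1.0);  // one unit off
}
```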
## Algorithm
### Step 1: Back-Project Laser Pixels to 3D
For each laser pixel in each view:
- Undistort the pixel $(u, v)$ to get normalized coordinates $(x_n, y_n)$
- Form a ray in the camera frame: $\mathbf{r} = (x_n, y_n, 1)^\top$
- Intersect with the target plane: The target at $z = 0$ in target coordinates, transformed to the camera frame by the view's pose, defines a plane. Solve for the depth $\lambda$ that places the ray on this plane.
- Compute the 3D point: $\mathbf{p} = \lambda \mathbf{r}$ in camera coordinates
This gives a set of 3D points on the laser plane.
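The back-projection step can be sketched as follows. This is a minimal, dependency-free illustration assuming the pixel has already been undistorted, a pinhole model `(fx, fy, cx, cy)`, and a camera-from-target pose given as a row-major rotation matrix `r` and translation `t`; all names are illustrative, not the crate's API:

```rust
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

/// Back-project an (already undistorted) laser pixel to the 3D point where
/// its viewing ray meets the target plane (z = 0 in target coordinates),
/// expressed in the camera frame.
fn back_project(
    (u, v): (f64, f64),
    (fx, fy, cx, cy): (f64, f64, f64, f64),
    r: [[f64; 3]; 3], // camera-from-target rotation (row-major)
    t: [f64; 3],      // camera-from-target translation
) -> [f64; 3] {
    // Normalized image coordinates give the ray direction in the camera frame.
    let ray = [(u - cx) / fx, (v - cy) / fy, 1.0];
    // Target plane normal in the camera frame: R * (0, 0, 1), i.e. the third
    // column of R. The plane passes through t (the target origin in camera
    // coordinates), so m . p = m . t for points p on it.
    let m = [r[0][2], r[1][2], r[2][2]];
    // Substitute p = lambda * ray and solve for the depth lambda.
    let lambda = dot(m, t) / dot(m, ray);
    [lambda * ray[0], lambda * ray[1], lambda * ray[2]]
}

fn main() {
    // Identity rotation, target 1 m in front of the camera: the plane is z = 1.
    let r = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]];
    let t = [0.0, 0.0, 1.0];
    let p = back_project((320.0, 240.0), (500.0, 500.0, 320.0, 240.0), r, t);
    // The principal ray hits the plane at (0, 0, 1).
    assert!((p[2] - 1.0).abs() < 1e-12);
}
```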
### Step 2: Fit a Plane via SVD
Given $N$ 3D points $\mathbf{p}_i$ on the laser plane:

- Centroid: $\bar{\mathbf{p}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{p}_i$
- Center the points: $\mathbf{q}_i = \mathbf{p}_i - \bar{\mathbf{p}}$
- Covariance matrix: $C = \sum_{i=1}^{N} \mathbf{q}_i \mathbf{q}_i^\top$
- SVD: $C = U \Sigma V^\top$
- Normal: $\mathbf{n}$ is the eigenvector corresponding to the smallest singular value (last column of $V$)
- Distance: $d = -\mathbf{n}^\top \bar{\mathbf{p}}$
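The fit can be sketched without a linear-algebra dependency. Instead of a full SVD, the sketch below finds the smallest singular direction of the 3x3 covariance $C$ by power iteration on the shifted matrix $\mathrm{tr}(C)\,I - C$; for a symmetric positive semi-definite $C$ this converges to the same direction as the last column of $V$. All names are illustrative:

```rust
/// Fit a plane (unit normal, signed distance d with n . p + d = 0) to a
/// point set. Power iteration on trace(C)*I - C stands in for the SVD.
fn fit_plane(points: &[[f64; 3]]) -> ([f64; 3], f64) {
    let n = points.len() as f64;

    // Centroid of the 3D points.
    let mut c = [0.0f64; 3];
    for p in points {
        for k in 0..3 { c[k] += p[k] / n; }
    }

    // Covariance matrix of the centered points.
    let mut cov = [[0.0f64; 3]; 3];
    for p in points {
        let q = [p[0] - c[0], p[1] - c[1], p[2] - c[2]];
        for i in 0..3 {
            for j in 0..3 { cov[i][j] += q[i] * q[j]; }
        }
    }

    // Shift so the smallest eigenvalue of cov becomes the largest of s.
    let tr = cov[0][0] + cov[1][1] + cov[2][2];
    let mut s = [[0.0f64; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            s[i][j] = if i == j { tr - cov[i][j] } else { -cov[i][j] };
        }
    }

    // Power iteration (assumes the point set is not near-collinear).
    let mut v = [0.6, 0.48, 0.64]; // arbitrary unit start vector
    for _ in 0..500 {
        let w = [
            s[0][0] * v[0] + s[0][1] * v[1] + s[0][2] * v[2],
            s[1][0] * v[0] + s[1][1] * v[1] + s[1][2] * v[2],
            s[2][0] * v[0] + s[2][1] * v[1] + s[2][2] * v[2],
        ];
        let norm = (w[0] * w[0] + w[1] * w[1] + w[2] * w[2]).sqrt();
        v = [w[0] / norm, w[1] / norm, w[2] / norm];
    }

    // Signed distance: n . p + d = 0 for points p on the plane.
    let d = -(v[0] * c[0] + v[1] * c[1] + v[2] * c[2]);
    (v, d)
}

fn main() {
    // Noise-free points on the plane z = 2.
    let pts = [
        [0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0],
        [2.0, 0.0, 2.0], [1.0, 1.0, 2.0],
    ];
    let (normal, d) = fit_plane(&pts);
    // The residual n . p + d vanishes for any point on the plane,
    // regardless of the (arbitrary) sign of the recovered normal.
    let r = normal[0] * 5.0 + normal[1] * (-3.0) + normal[2] * 2.0 + d;
    assert!(r.abs() < 1e-9);
}
```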
## Degeneracy Detection
If all 3D points are nearly collinear (e.g., because all views have the same target pose), the plane is underdetermined. With singular values $\sigma_1 \ge \sigma_2 \ge \sigma_3$, the algorithm checks the ratio of the two smallest:

$$\frac{\sigma_3}{\sigma_2}$$

If this ratio is too close to 1 (the points are too close to collinear, so there is no clearly smallest direction), the plane fit fails with an error.
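The check can be sketched as follows; the helper names and the threshold value are illustrative assumptions, not the crate's actual API. The singular values of the symmetric 3x3 covariance are its eigenvalues, computed here in closed form:

```rust
use std::f64::consts::PI;

/// Eigenvalues of a symmetric 3x3 matrix, sorted descending
/// (standard closed-form method via the characteristic polynomial).
fn sym_eigenvalues(a: [[f64; 3]; 3]) -> [f64; 3] {
    let p1 = a[0][1].powi(2) + a[0][2].powi(2) + a[1][2].powi(2);
    if p1 == 0.0 {
        // Already diagonal.
        let mut e = [a[0][0], a[1][1], a[2][2]];
        e.sort_by(|x, y| y.partial_cmp(x).unwrap());
        return e;
    }
    let q = (a[0][0] + a[1][1] + a[2][2]) / 3.0;
    let p2 = (a[0][0] - q).powi(2) + (a[1][1] - q).powi(2)
        + (a[2][2] - q).powi(2) + 2.0 * p1;
    let p = (p2 / 6.0).sqrt();
    let mut b = a; // B = (A - q I) / p
    for i in 0..3 { b[i][i] -= q; }
    for i in 0..3 { for j in 0..3 { b[i][j] /= p; } }
    let det_b = b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
        - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
        + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]);
    let phi = (det_b / 2.0).clamp(-1.0, 1.0).acos() / 3.0;
    let e1 = q + 2.0 * p * phi.cos();
    let e3 = q + 2.0 * p * (phi + 2.0 * PI / 3.0).cos();
    [e1, 3.0 * q - e1 - e3, e3]
}

/// True if the point set is too close to collinear for a stable plane fit:
/// sigma3 is not clearly smaller than sigma2, so the normal direction is
/// underdetermined. The threshold is an illustrative choice.
fn is_degenerate(points: &[[f64; 3]], threshold: f64) -> bool {
    let n = points.len() as f64;
    let mut c = [0.0f64; 3];
    for p in points { for k in 0..3 { c[k] += p[k] / n; } }
    let mut cov = [[0.0f64; 3]; 3];
    for p in points {
        let q = [p[0] - c[0], p[1] - c[1], p[2] - c[2]];
        for i in 0..3 { for j in 0..3 { cov[i][j] += q[i] * q[j]; } }
    }
    let [_, s2, s3] = sym_eigenvalues(cov);
    s2 <= 0.0 || s3 / s2 > threshold
}

fn main() {
    // Well-spread planar points: sigma2 >> sigma3, plane well-determined.
    let planar = [[0.0, 0.0, 2.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [2.0, 2.0, 2.0]];
    assert!(!is_degenerate(&planar, 0.5));

    // Points along a line with small isotropic off-axis noise:
    // sigma2 and sigma3 are comparable, so the fit is degenerate.
    let e = 0.1;
    let s5 = 5.0f64.sqrt();
    let near_line = [
        [0.0, e, -e / s5],
        [1.0, -e, 3.0 * e / s5],
        [2.0, -e, -3.0 * e / s5],
        [3.0, e, e / s5],
    ];
    assert!(is_degenerate(&near_line, 0.5));
}
```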
## Plane Representation for Optimization
In the non-linear optimization, the plane is parameterized as:
- Normal: A unit vector (2 degrees of freedom on the unit sphere)
- Distance: A scalar (1 degree of freedom)
The unit-sphere manifold $S^2$ is used for the normal to maintain the unit-norm constraint during optimization (see Manifold Optimization).
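The 2-DoF update on the sphere can be sketched as follows. This illustrates one common retraction, normalizing after a step in the tangent plane; a real manifold implementation may use the exponential map instead, and all names here are illustrative:

```rust
fn normalize(v: [f64; 3]) -> [f64; 3] {
    let norm = (v[0] * v[0] + v[1] * v[1] + v[2] * v[2]).sqrt();
    [v[0] / norm, v[1] / norm, v[2] / norm]
}

/// An orthonormal basis (b1, b2) of the tangent plane at unit vector n,
/// built from cross products with a non-parallel reference axis.
fn tangent_basis(n: [f64; 3]) -> ([f64; 3], [f64; 3]) {
    let a = if n[0].abs() < 0.9 { [1.0, 0.0, 0.0] } else { [0.0, 1.0, 0.0] };
    let b1 = normalize([
        n[1] * a[2] - n[2] * a[1],
        n[2] * a[0] - n[0] * a[2],
        n[0] * a[1] - n[1] * a[0],
    ]);
    let b2 = [
        n[1] * b1[2] - n[2] * b1[1],
        n[2] * b1[0] - n[0] * b1[2],
        n[0] * b1[1] - n[1] * b1[0],
    ];
    (b1, b2)
}

/// Map a 2-vector tangent step back onto the sphere:
/// n' = normalize(n + B(n) * delta).
fn retract(n: [f64; 3], delta: [f64; 2]) -> [f64; 3] {
    let (b1, b2) = tangent_basis(n);
    normalize([
        n[0] + delta[0] * b1[0] + delta[1] * b2[0],
        n[1] + delta[0] * b1[1] + delta[1] * b2[1],
        n[2] + delta[0] * b1[2] + delta[1] * b2[2],
    ])
}

fn main() {
    let n = normalize([0.1, -0.2, 1.0]);
    let n2 = retract(n, [0.03, -0.01]);
    // The updated normal stays exactly on the unit sphere.
    let norm2 = n2[0] * n2[0] + n2[1] * n2[1] + n2[2] * n2[2];
    assert!((norm2 - 1.0).abs() < 1e-12);
}
```

Parameterizing the update in the 2-D tangent plane keeps the optimizer's state minimal (no redundant degree of freedom), which avoids a rank-deficient Jacobian in the normal's direction.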
## Accuracy
The linear plane initialization accuracy depends on:
- View diversity: Different target orientations provide points at different locations on the laser plane, improving the fit
- Camera calibration accuracy: Errors in intrinsics, distortion, or poses propagate to the 3D point estimates
- Number of laser pixels: More pixels (from more views or more detected pixels per view) improve the SVD fit
Typical accuracy: 1-5° error in the normal direction and 5-20% error in the distance. These estimates are subsequently refined in the Laserline Device Calibration optimization.
## API
```rust
#![allow(unused)]
fn main() {
    use vision_calibration::linear::laserline::{LaserlinePlaneSolver, LaserlineView};

    // Each view carries laser pixels and the camera-to-target pose
    let views: Vec<LaserlineView> = poses
        .iter()
        .zip(laser_pixels_per_view.iter())
        .map(|(pose, pixels)| LaserlineView {
            camera_se3_target: *pose,
            laser_pixels: pixels.clone(),
        })
        .collect();

    let plane = LaserlinePlaneSolver::from_views(&views, &camera)?;
    // plane.normal:   UnitVector3<f64> (unit normal in camera frame)
    // plane.distance: f64 (signed distance from camera origin)
    // plane.rmse:     f64 (fit residual)
}
```