Module 5 - Vision & Integration

Objective: use vision to improve targeting or pose estimation.

Prereqs: Command-based project; optional odometry.

Steps

  • Configure a Limelight/PhotonVision pipeline; read targets and latency (see the reading sketch after this list).
  • Integrate into aiming or pose estimation; add a no-target fallback.
  • Test with recorded targets or in sim where possible.
  • Calibrate camera pose on the robot; account for pipeline latency.
  • Add LED and pipeline control from code; log target validity for driver feedback.
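
For a Limelight, target data and latency arrive over NetworkTables. A minimal Java (WPILib) reading sketch using the documented limelight table keys tv, tx, and tl; the ~11 ms image-capture allowance follows the Limelight docs:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightReader {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** True when the pipeline reports a valid target (tv == 1). */
    public boolean hasTarget() {
        return table.getEntry("tv").getDouble(0.0) >= 1.0;
    }

    /** Horizontal offset from crosshair to target, in degrees. */
    public double getTx() {
        return table.getEntry("tx").getDouble(0.0);
    }

    /** Pipeline latency in ms, plus ~11 ms of image-capture latency. */
    public double getLatencyMs() {
        return table.getEntry("tl").getDouble(0.0) + 11.0;
    }
}
```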

Deliverables

  • Vision-assisted aim or pose update in sim or on-bot.
  • Checklist: target acquisition speed, accuracy at range, and fallback behavior.

Resources

  • Limelight docs: docs.limelightvision.io
  • Tuning: docs.limelightvision.io/en/latest/tuning.html

Instructions (numbered)

  1. Mount and calibrate camera pose; set pipeline for target type; measure latency.
  2. Read targets/latency in code; add LED/pipeline control (see the control sketch after this list).
  3. Integrate into aiming or pose; add no-target fallback and validity checks.
  4. Test with recorded targets or sim; log acquisition time/accuracy.
  5. Verify performance at different ranges/angles; tune pipeline as needed.
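
LED and pipeline switching also go through NetworkTables. A small sketch using the documented ledMode and pipeline keys (the class name is ours):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightControl {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** ledMode: 0 = pipeline default, 1 = off, 2 = blink, 3 = on. */
    public void setLeds(boolean on) {
        table.getEntry("ledMode").setNumber(on ? 3 : 1);
    }

    /** Selects the active pipeline (0-9). */
    public void setPipeline(int index) {
        table.getEntry("pipeline").setNumber(index);
    }
}
```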

Example

  • Limelight pipeline tuned for AprilTags: if the target is valid, turn to aim; if no target, revert to gyro-only drive (sketched below).
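
A minimal sketch of that aim-or-fallback pattern, reusing the LimelightReader from the Steps section. The Drive interface and kAimP gain are illustrative placeholders, not a real API; tune the gain on your robot:

```java
public class AimOrFallback {
    /** Hypothetical drive interface; your subsystem's methods will differ. */
    interface Drive {
        void arcadeDrive(double forward, double turn);
        void holdHeadingWithGyro();
    }

    private static final double kAimP = 0.03; // proportional gain (assumed)

    public static void execute(LimelightReader limelight, Drive drive) {
        if (limelight.hasTarget()) {
            // Turn toward the target's horizontal offset.
            drive.arcadeDrive(0.0, -kAimP * limelight.getTx());
        } else {
            // No target: hold heading with the gyro instead of going dead.
            drive.holdHeadingWithGyro();
        }
    }
}
```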

Best practices

  • Calibrate pose carefully; wrong offsets ruin results.
  • Log target validity and latency; surface them on the driver dashboard (latency-compensation sketch after this list).
  • Provide a fallback so behavior never goes dead when targets drop.
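
One way to compensate for latency is to backdate each vision measurement before fusing it; WPILib's pose estimators support this through addVisionMeasurement. A sketch assuming a SwerveDrivePoseEstimator constructed elsewhere and a vision pose obtained from your camera's pose output:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.wpilibj.Timer;

public class VisionPoseUpdater {
    private final SwerveDrivePoseEstimator poseEstimator; // built elsewhere
    private final LimelightReader limelight;

    public VisionPoseUpdater(SwerveDrivePoseEstimator est, LimelightReader ll) {
        this.poseEstimator = est;
        this.limelight = ll;
    }

    /** Fuses a vision pose, backdated by the measured latency. */
    public void addMeasurement(Pose2d visionPose) {
        double latencySeconds = limelight.getLatencyMs() / 1000.0;
        double captureTime = Timer.getFPGATimestamp() - latencySeconds;
        poseEstimator.addVisionMeasurement(visionPose, captureTime);
    }
}
```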

Common mistakes

  • Ignoring pipeline latency; misaligned camera pose.
  • Overexposed or under-tuned pipeline → false positives or no targets.
  • No fallback when the target is lost.

Spec notes / data to log

  • Camera pose offsets; pipeline settings; latency values.
  • Data: target validity, latency, pose/aim outputs, fallback triggers.

Checklist

  • Camera pose calibrated
  • Pipeline configured/tuned
  • Latency read and compensated
  • Fallback behavior implemented
  • Tests logged (range/angle)

Recommended tools

  • Limelight/PhotonVision UI, calibration target, logging/dashboard tools.

Sample log (template)

  • Date:
  • Pipeline settings:
  • Tests (range/angle):
  • Accuracy/latency results:
  • Changes planned:

Photos/diagrams

  • [Placeholder: camera pose diagram and target detection screenshot]

Numeric Example

Example camera configuration:

  • Horizontal FOV: 60°
  • Resolution: 1280 × 720
  • Measured pipeline latency: 35–45 ms
  • Target distance range: 1.0–6.0 m
  • Pose offset from robot center:
    • X offset: +0.20 m (forward)
    • Y offset: +0.10 m (to the left)
    • Z offset: +0.40 m (above floor)

Example target reading (a distance sanity-check sketch follows this list):

  • Detected target:
    • Distance: 3.2 m
    • Horizontal offset: -6°
    • Vertical offset: +4°
  • Robot pose correction applied using vision every 0.1 s (10 Hz)
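
As a sanity check on the reading above, distance can be estimated from the vertical offset: d = (h_target - h_camera) / tan(pitch + ty). The camera pitch (0°) and target height (0.62 m) below are assumed values chosen for illustration; only the 0.40 m camera height and +4° offset come from the example:

```java
public class DistanceEstimate {
    public static void main(String[] args) {
        double cameraHeightM  = 0.40; // Z offset from the example above
        double cameraPitchDeg = 0.0;  // assumed: camera mounted level
        double targetHeightM  = 0.62; // assumed target center height
        double tyDeg          = 4.0;  // vertical offset from the example

        double angle = Math.toRadians(cameraPitchDeg + tyDeg);
        double distanceM = (targetHeightM - cameraHeightM) / Math.tan(angle);
        System.out.printf("Estimated distance: %.2f m%n", distanceM); // ~3.15 m
    }
}
```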

Data to Log

For vision-based alignment and pose correction, log the following (a logging sketch follows this list):

  • Timestamp
  • Raw camera latency (ms)
  • Target detected (yes/no)
  • Target distance (m)
  • Horizontal/vertical offset (deg)
  • Vision-estimated robot pose (x, y, heading)
  • Fused robot pose (after sensor fusion)
  • Confidence score / ambiguity if available
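
A logging sketch using WPILib's DataLogManager; the /vision/ entry names are our own convention, and the same values can be mirrored to a dashboard for the driver:

```java
import edu.wpi.first.util.datalog.BooleanLogEntry;
import edu.wpi.first.util.datalog.DataLog;
import edu.wpi.first.util.datalog.DoubleLogEntry;
import edu.wpi.first.wpilibj.DataLogManager;

public class VisionLogger {
    private final BooleanLogEntry targetValid;
    private final DoubleLogEntry latencyMs;
    private final DoubleLogEntry txDegrees;

    public VisionLogger() {
        DataLog log = DataLogManager.getLog();
        targetValid = new BooleanLogEntry(log, "/vision/targetValid");
        latencyMs   = new DoubleLogEntry(log, "/vision/latencyMs");
        txDegrees   = new DoubleLogEntry(log, "/vision/txDegrees");
    }

    /** Call once per loop (e.g., from robotPeriodic) to capture vision state. */
    public void log(LimelightReader limelight) {
        targetValid.append(limelight.hasTarget());
        latencyMs.append(limelight.getLatencyMs());
        txDegrees.append(limelight.getTx());
    }
}
```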

Purpose:

  • Verify that latency is stable
  • Confirm pose corrections are reasonable (no wild jumps)
  • Debug cases where autonomous misses or drifts off-target