PhotonVision on Orange Pi 5 — Vision Processing Guide
Implement real-time vision processing using PhotonVision on an Orange Pi 5 coprocessor with dual Arducam USB cameras.
Overview
Goal: Process video from dual Arducam cameras using PhotonVision running on an Orange Pi 5, send target data (detected objects, angles, distances) to the roboRIO via NetworkTables.
Hardware:
- Orange Pi 5 (ARM-based Linux coprocessor)
- 2x Arducam 100fps Global Shutter USB Camera (1MP OV9782, M12 lens)
- USB hub (powered, for camera stability)
- Ethernet cable (Orange Pi ↔ roboRIO network)
Software:
- PhotonVision (open-source vision pipeline)
- WPILib NetworkTables (robot code integration)
Hardware Setup
1. Orange Pi 5 Preparation
Install Ubuntu Server (22.04 LTS):
- Download an Ubuntu Server 22.04 image built for the Orange Pi 5 (the stock images on ubuntu.com do not boot this board; use an Orange Pi or community Rockchip build)
- Write to microSD card (minimum 32GB recommended):
# On your computer (replace /dev/sdX with your SD card device)
sudo dd if=ubuntu-22.04-preinstalled-server-arm64+orangepi5.img of=/dev/sdX bs=4M status=progress
sudo sync
- Boot the Orange Pi 5 and update the system:
sudo apt update && sudo apt upgrade -y
sudo apt install -y git curl build-essential
2. Camera Mounting & Configuration
Physical Setup:
- Mount cameras with M12 lenses at optimal field-of-view angles
- Use USB 3.0 hub (powered) to avoid power draw issues
- Connect both cameras to hub, hub to Orange Pi USB port
- Verify camera detection:
lsusb
# Should show both Arducam devices
ls /dev/video*
# Should list /dev/video0 and /dev/video1
Camera Checks:
- Focus lenses manually (adjust the M12 lens ring)
- The OV9782's global shutter eliminates motion blur in hardware; no setting is needed
- Test supported formats and frame rates:
v4l2-ctl -d /dev/video0 --list-formats-ext
# Should show support for MJPEG at 100fps
PhotonVision Installation
1. Install PhotonVision
# SSH into Orange Pi 5
ssh ubuntu@<orange-pi-ip>
# Download the latest PhotonVision release JAR
# (PhotonVision also publishes a prebuilt Orange Pi 5 image;
#  flashing that instead of Ubuntu is the simplest supported path)
cd ~/
wget https://github.com/PhotonVision/photonvision/releases/download/v2024.3.1/photonvision-v2024.3.1-linuxarm64.jar
# (Check the releases page for the latest version)
# Install JDK (required)
sudo apt install -y openjdk-17-jdk-headless
# Run PhotonVision
java -jar photonvision-v2024.3.1-linuxarm64.jar
2. Access PhotonVision Web UI
- Open a browser on a computer on the same network: http://<orange-pi-ip>:5800
- You'll see the PhotonVision dashboard with camera feeds
Camera & Pipeline Configuration
1. Select Cameras
In PhotonVision UI:
- Go to Settings → Hardware
- Select:
  - Camera 0: /dev/video0 (first Arducam)
  - Camera 1: /dev/video1 (second Arducam)
- Set resolution: 1280×800 (balance processing speed & accuracy)
- Frame rate: 100 FPS (use the global shutter advantage)
- Apply and test feed
2. Create Detection Pipeline
Pipeline Setup:
- Click Pipelines → + New Pipeline
- Name: Target_Detection (or game-specific, e.g., Fuel_Detect)
- Select mode:
  - AprilTag — For game field targets (recommended for 2026 AGE)
  - Reflective Tape — For retroreflective tape targets
  - Neural Network — For custom object detection (advanced)
AprilTag Setup (Recommended for 2026):
- Set family: AprilTag 36h11 (FRC standard since 2024)
- Tag size: match the game manual (recent FRC fields use 6.5 in / 0.1651 m tags)
- Enable 3D pose estimation
- Test with known tags; adjust lighting if needed
Output Pipeline Data:
- Detected targets list
- Best target (highest area)
- Target position (X, Y)
- Target rotation (yaw)
- Distance to target (if 3D enabled)
3. Configure for Dual Cameras
If using both cameras:
- Create separate pipelines for each (Camera0_Pipeline, Camera1_Pipeline)
- Or use multi-camera support:
  - Settings → Multi-Target Processing
  - Merge detections from both cameras (choose the closest or highest-confidence target)
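The merge step above is plain comparison logic, and can be sketched without any PhotonVision types. This is a hypothetical helper — Detection and pickBest are illustrative names, not PhotonLib API — and the policy shown (prefer lower pose ambiguity, break near-ties by distance) is one reasonable choice among several:

```java
// Sketch of merging detections from two cameras: prefer the candidate with
// lower pose ambiguity; break near-ties by choosing the closer target.
// Detection and pickBest are illustrative names, not PhotonLib API.
public class TargetMerge {
    public record Detection(double ambiguity, double distanceMeters) {}

    public static Detection pickBest(Detection a, Detection b) {
        if (a == null) return b;
        if (b == null) return a;
        // If ambiguities differ meaningfully, trust the less ambiguous pose
        if (Math.abs(a.ambiguity() - b.ambiguity()) > 0.05) {
            return a.ambiguity() < b.ambiguity() ? a : b;
        }
        // Otherwise prefer the closer target (usually better pixel resolution)
        return a.distanceMeters() <= b.distanceMeters() ? a : b;
    }

    public static void main(String[] args) {
        Detection cam0 = new Detection(0.02, 3.1);
        Detection cam1 = new Detection(0.30, 1.4);
        System.out.println(pickBest(cam0, cam1)); // low ambiguity wins
    }
}
```

Whatever policy you pick, keep it in one place so both cameras' results flow through the same decision.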
Robot Code Integration
1. Add PhotonLib to WPILib Project
Recommended: install PhotonLib as a vendor dependency — in WPILib VS Code, run "Manage Vendor Libraries" → "Install new library (online)" with the PhotonLib vendordep JSON URL from docs.photonvision.org. This adds the correct artifacts and Maven repository for your WPILib version.
If hand-editing build.gradle instead, PhotonVision's Maven repository must also be added; the dependency looks like:
dependencies {
    implementation 'org.photonvision:photonlib-java:v2024.3.1'
}
Run: ./gradlew build to download dependencies
2. Create Vision Subsystem
File: src/main/java/frc/robot/subsystems/VisionSubsystem.java
package frc.robot.subsystems;

import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonTrackedTarget;
import edu.wpi.first.wpilibj2.command.SubsystemBase;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.DoublePublisher;

public class VisionSubsystem extends SubsystemBase {
  private final PhotonCamera camera;
  private final DoublePublisher targetXPublisher;
  private final DoublePublisher targetYPublisher;
  private final DoublePublisher distancePublisher;

  public VisionSubsystem() {
    // Connect to the PhotonVision instance on the Orange Pi.
    // The name must match the camera name set in the PhotonVision UI.
    camera = new PhotonCamera("OrangePi");

    // Create NetworkTables entries for dashboard/debugging
    var table = NetworkTableInstance.getDefault().getTable("Vision");
    targetXPublisher = table.getDoubleTopic("targetX").publish();
    targetYPublisher = table.getDoubleTopic("targetY").publish();
    distancePublisher = table.getDoubleTopic("distance").publish();
  }

  @Override
  public void periodic() {
    var result = camera.getLatestResult();
    if (result.hasTargets()) {
      // Get best target (highest area by default)
      PhotonTrackedTarget target = result.getBestTarget();
      double targetX = target.getYaw();   // Horizontal offset in degrees (range depends on lens FOV)
      double targetY = target.getPitch(); // Vertical offset in degrees
      double distance = -1; // Not available without 3D pose

      // Pose ambiguity is -1 when 3D pose estimation is disabled
      if (target.getPoseAmbiguity() >= 0 && target.getPoseAmbiguity() < 0.1) { // low ambiguity = high confidence
        var pose = target.getBestCameraToTarget();
        distance = pose.getTranslation().getNorm(); // Distance in meters
      }

      // Publish to NetworkTables for dashboard/debugging
      targetXPublisher.set(targetX);
      targetYPublisher.set(targetY);
      distancePublisher.set(distance);

      // Note: printing every loop (~20 ms) floods the console; throttle or remove for matches
      System.out.println(String.format(
          "Target detected: X=%.2f° Y=%.2f° Distance=%.2fm",
          targetX, targetY, distance));
    } else {
      targetXPublisher.set(0);
      targetYPublisher.set(0);
      distancePublisher.set(-1);
    }
  }

  /**
   * Horizontal offset to target (degrees).
   * Positive = target right, Negative = target left.
   */
  public double getTargetX() {
    var result = camera.getLatestResult();
    if (result.hasTargets()) {
      return result.getBestTarget().getYaw();
    }
    return 0;
  }

  /** Distance to target (meters), or -1 if no target. */
  public double getTargetDistance() {
    var result = camera.getLatestResult();
    if (result.hasTargets()) {
      var pose = result.getBestTarget().getBestCameraToTarget();
      return pose.getTranslation().getNorm();
    }
    return -1;
  }

  public boolean hasTarget() {
    return camera.getLatestResult().hasTargets();
  }
}
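When 3D pose estimation is not enabled, distance can still be estimated from camera geometry: with a known camera mounting height and pitch and a known target height, d = (h_target − h_camera) / tan(θ_camera + θ_target). A minimal self-contained sketch — all heights and angles below are made-up example values, not measurements from a real robot:

```java
// Estimate horizontal distance to a target from its reported pitch angle.
// Assumes a known camera height/pitch and target height; the numbers in
// main() are illustrative examples only.
public class DistanceFromPitch {
    /**
     * @param cameraHeightM  camera lens height above the floor (m)
     * @param targetHeightM  target center height above the floor (m)
     * @param cameraPitchDeg upward tilt of the camera (deg)
     * @param targetPitchDeg target pitch reported by the vision pipeline (deg)
     * @return horizontal distance to the target (m)
     */
    public static double distance(double cameraHeightM, double targetHeightM,
                                  double cameraPitchDeg, double targetPitchDeg) {
        double angleRad = Math.toRadians(cameraPitchDeg + targetPitchDeg);
        return (targetHeightM - cameraHeightM) / Math.tan(angleRad);
    }

    public static void main(String[] args) {
        // Camera 0.5 m up, tilted 20°; target center at 1.32 m; pitch reads 5°
        System.out.printf("%.2f m%n", distance(0.5, 1.32, 20.0, 5.0));
    }
}
```

This only works when the target sits at a fixed, known height; for targets at arbitrary heights, 3D pose estimation is the right tool.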
3. Use Vision in Drive Command
File: src/main/java/frc/robot/commands/DriveToTargetCommand.java
package frc.robot.commands;

import edu.wpi.first.wpilibj2.command.Command;
import frc.robot.subsystems.DriveSubsystem;
import frc.robot.subsystems.VisionSubsystem;

public class DriveToTargetCommand extends Command {
  private final DriveSubsystem drive;
  private final VisionSubsystem vision;

  private static final double ALIGNMENT_TOLERANCE = 2.0; // degrees
  private static final double TURN_KP = 0.01;            // proportional gain, tune on the robot

  public DriveToTargetCommand(DriveSubsystem drive, VisionSubsystem vision) {
    this.drive = drive;
    this.vision = vision;
    addRequirements(drive);
  }

  @Override
  public void initialize() {
    System.out.println("Starting alignment to target");
  }

  @Override
  public void execute() {
    if (!vision.hasTarget()) {
      System.out.println("No target visible, stopping");
      drive.tankDrive(0, 0);
      return;
    }

    double targetX = vision.getTargetX();

    // Simple proportional control: turn toward the target.
    // With positive yaw = target right, speeding up the left side turns right;
    // if the robot turns away from the target, flip the sign of TURN_KP.
    double rotationSpeed = targetX * TURN_KP;

    // Forward speed (constant for now)
    double forwardSpeed = 0.3;

    drive.tankDrive(forwardSpeed + rotationSpeed, forwardSpeed - rotationSpeed);
  }

  @Override
  public boolean isFinished() {
    // Note: with no target this never finishes on its own; consider adding a timeout
    if (!vision.hasTarget()) return false;
    return Math.abs(vision.getTargetX()) < ALIGNMENT_TOLERANCE;
  }

  @Override
  public void end(boolean interrupted) {
    drive.tankDrive(0, 0);
  }
}
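The proportional turn logic above is easiest to tune when the gain and output limit are pulled into one small, testable unit. A self-contained sketch — TurnController is an illustrative name, and the gain and clamp values are example numbers, not tuned constants:

```java
// Proportional turn controller with output clamping, mirroring the math
// used conceptually in DriveToTargetCommand. Gains/limits are illustrative.
public class TurnController {
    private final double kP;
    private final double maxOutput;

    public TurnController(double kP, double maxOutput) {
        this.kP = kP;
        this.maxOutput = maxOutput;
    }

    /** @param errorDeg yaw error in degrees; returns a clamped motor command */
    public double calculate(double errorDeg) {
        double out = kP * errorDeg;
        // Clamp so a large error cannot command a violent spin
        return Math.max(-maxOutput, Math.min(maxOutput, out));
    }

    public static void main(String[] args) {
        TurnController ctrl = new TurnController(0.01, 0.25);
        System.out.println(ctrl.calculate(10.0)); // small error: proportional output
        System.out.println(ctrl.calculate(40.0)); // large error: clamped to 0.25
    }
}
```

Clamping matters at match speeds: without it, a target appearing at the edge of the frame produces a full-power turn that overshoots.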
4. Register in RobotContainer
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj2.command.button.JoystickButton;
import frc.robot.commands.DriveToTargetCommand;
import frc.robot.subsystems.DriveSubsystem;
import frc.robot.subsystems.VisionSubsystem;

public class RobotContainer {
  private final VisionSubsystem visionSubsystem = new VisionSubsystem();
  private final DriveSubsystem driveSubsystem = new DriveSubsystem();
  private final XboxController driverController = new XboxController(0); // driver station port 0

  public void configureButtonBindings() {
    // Press A to align to target
    new JoystickButton(driverController, XboxController.Button.kA.value)
        .onTrue(new DriveToTargetCommand(driveSubsystem, visionSubsystem));
  }
}
Network Configuration
1. Network Setup
Goal: Orange Pi and roboRIO communicate via NetworkTables
Option A: Static IP (Recommended)
# On Orange Pi, edit /etc/netplan/01-netcfg.yaml
sudo nano /etc/netplan/01-netcfg.yaml
# Add:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [10.21.80.10/24]  # 10.TE.AM.10 pattern (team 2180)
      gateway4: 10.21.80.1
      nameservers:
        addresses: [8.8.8.8]

sudo netplan apply
Option B: DHCP (Simpler for testing)
- Connect to team network
- Orange Pi gets IP via DHCP
- Find the IP: run ifconfig (or ip addr) on the Orange Pi
Verify Connectivity:
# From roboRIO or laptop
ping 10.21.80.10
# Should see responses
2. PhotonVision Network Configuration
In PhotonVision UI (Settings → Networking):
- Team Number: 2180 (PhotonVision uses this to locate the roboRIO at 10.21.80.2)
- Mode: NetworkTables client — the roboRIO runs the NetworkTables server
- Apply and restart PhotonVision
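The 10.TE.AM.x convention splits the team number into two two-digit fields. A quick sketch of the mapping — teamToPrefix is an illustrative helper, not a WPILib function:

```java
// Convert an FRC team number into its 10.TE.AM.* address prefix.
// teamToPrefix is an illustrative helper, not part of WPILib.
public class FrcAddress {
    public static String teamToPrefix(int team) {
        // TE = thousands+hundreds digits, AM = tens+units digits
        return String.format("10.%d.%d", team / 100, team % 100);
    }

    public static void main(String[] args) {
        System.out.println(teamToPrefix(2180) + ".10"); // coprocessor
        System.out.println(teamToPrefix(2180) + ".2");  // roboRIO
    }
}
```

Keeping every vision device inside this subnet means the field network and radio will route traffic to it without extra configuration.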
Calibration & Testing
1. Camera Calibration
3D pose estimation requires a one-time calibration of each camera at each resolution; PhotonVision does not calibrate automatically:
- Print the calibration board (chessboard/ChArUco) from the PhotonVision calibration page
- In PhotonVision: Cameras → Calibration → Start Calibration
- Capture snapshots with the board at varied positions, angles, and distances across the field of view
- Save the calibration
2. Test Vision Pipeline
In PhotonVision UI:
- Point camera at AprilTag or game target
- Verify detection shows:
- Green bounding box around target
- X, Y, rotation values updating
- FPS indicator (should be near 100)
In Robot Code:
- Deploy and check NetworkTables output (Glass dashboard)
- Verify Vision/targetX, Vision/targetY, Vision/distance are updating
- Test DriveToTargetCommand in manual mode
3. Latency Check
PhotonVision displays latency (typically 5–20 ms at 100 fps):
- Shown in the UI under the camera name
- Log it from robot code: System.out.println("Vision latency: " + result.getLatencyMillis() + " ms");
- Use it for predictive aiming if needed
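Latency matters because a measurement describes where the target was when the frame was captured, not where it is now. A sketch of back-dating a measurement by the reported latency — plain arithmetic with no PhotonLib types, and the timestamps in main() are made-up example values:

```java
// Back-date a vision measurement by the pipeline latency so it can be fused
// with odometry at the correct capture time. Plain-arithmetic sketch; the
// example timestamps are illustrative.
public class LatencyCompensation {
    /**
     * @param nowSeconds    current robot time in seconds (e.g., FPGA timestamp)
     * @param latencyMillis pipeline latency reported by the vision system
     * @return estimated capture time in seconds
     */
    public static double captureTime(double nowSeconds, double latencyMillis) {
        return nowSeconds - latencyMillis / 1000.0;
    }

    public static void main(String[] args) {
        double now = 105.250;    // example current timestamp (s)
        double latencyMs = 15.0; // example reported latency (ms)
        System.out.println(captureTime(now, latencyMs));
    }
}
```

Pose estimators in WPILib accept a timestamp with each vision measurement for exactly this reason; feeding them "now" instead of the capture time skews the fused pose at speed.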
Best Practices
Performance
- Use MJPEG compression on cameras (reduces bandwidth)
- Keep pipeline simple (AprilTag is faster than neural networks)
- Pin PhotonVision to the fast cores if needed (on the RK3588, cores 4–7 are the big A76 cores), e.g. prefix the launch command with: taskset -c 4-7
Reliability
- Power: Use powered USB hub (cameras draw ~500mA each)
- Heat: Add heatsink to Orange Pi; monitor temps (should stay below 70°C)
- Lighting: Use consistent field lighting; add LED ring around cameras if needed
- Network: Use wired Ethernet (WiFi unreliable during competition)
Code Quality
- Always check hasTargets() before accessing target data
- Handle the case where the target disappears mid-command (return early)
- Log detection events for debugging
- Use meaningful pipeline names in code comments
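The first two rules above can be enforced structurally by returning an Optional instead of a sentinel value, so callers cannot forget the no-target case. A sketch with illustrative names — SafeVision and latestYaw are not PhotonLib API:

```java
import java.util.Optional;

// Wrap target access in Optional so callers must handle the no-target case.
// SafeVision and latestYaw are illustrative names, not PhotonLib API.
public class SafeVision {
    private Double lastYaw; // null when no target is visible

    public void update(boolean hasTargets, double yaw) {
        lastYaw = hasTargets ? yaw : null;
    }

    /** Empty when no target is visible, forcing callers to handle it. */
    public Optional<Double> latestYaw() {
        return Optional.ofNullable(lastYaw);
    }

    public static void main(String[] args) {
        SafeVision vision = new SafeVision();
        vision.update(false, 0);
        System.out.println(vision.latestYaw().isPresent()); // false
        vision.update(true, 3.5);
        vision.latestYaw().ifPresent(yaw -> System.out.println("yaw = " + yaw));
    }
}
```

Compared with returning 0 or -1 as "no target", this makes the missing-target path explicit at every call site instead of relying on discipline.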
Troubleshooting
| Issue | Solution |
|---|---|
| No camera detected | Check USB connection; run lsusb; restart Orange Pi |
| Low FPS | Reduce resolution; enable hardware acceleration in PhotonVision settings |
| NetworkTables not updating | Verify static IP; check firewall; restart roboRIO and Orange Pi |
| Target detection inconsistent | Improve lighting; recalibrate camera; check lens focus |
| High latency (>50ms) | Reduce pipeline complexity; check Orange Pi CPU usage |
Next Steps
- Set up the Orange Pi 5 with Ubuntu 22.04 LTS
- Set up cameras and verify USB detection
- Install PhotonVision and create AprilTag pipeline
- Integrate VisionSubsystem into robot code
- Test alignment command in manual mode
- Calibrate during build season as mechanisms take shape
Resources
- PhotonVision Docs: docs.photonvision.org
- AprilTag Family: apriltag.csail.mit.edu
- Orange Pi 5 Docs: orangepi.org
- WPILib Vision: docs.wpilib.org/vision