# PARA vs ACT: OOD Generalization Study

## Experiment Overview

Compare PARA (pixel-aligned heatmap) vs ACT (CLS-token regression) on out-of-distribution generalization for LIBERO task 0 (pick up bowl, place on plate).

## Key Findings So Far

### Single-task data efficiency (zero-rotation, teleport eval)
| Demos | PARA | ACT |
|-------|------|-----|
| 5     | 0%   | 75% |
| 10    | 70%  | 75% |
| 25    | 85%  | 60% |

- ACT is better with very few demos (5-10)
- PARA scales better with more data (25+)

### Architecture ablations (50 demos)
| Model | Success |
|-------|---------|
| PARA (single agentview) | 95% |
| ACT (CLS regression) | 85% |
| Cost volume (agent + wrist) | 25% |

- Simple PARA wins; wrist cost volume hurts (adds noise)

### Key discoveries
- **Rotation prediction is harmful** for pick-and-place: 0% with rotation → 60-95% without
- **Euler angle wrapping** on the X-axis creates bimodal targets: the same physical rotation yields angles near +180° and −180°, which discretize into two different bins
- **Teleport eval** (closed-loop servo) is critical: 40% open-loop → 60-95% teleport
- **Zero-rotation dataset** (re-rendered demos with rotation zeroed) improves consistency

## OOD Datasets

### Object Position OOD (`/data/libero/ood_objpos_task0`)
- 16×16 grid of (dx, dy) shifts for bowl + plate positions
- dx: [-0.15, 0.0], dy: [-0.1, 0.1]
- Single demo trajectory retargeted via servo teleport
- Distractors removed, furniture hidden
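The shift grid above can be sketched as follows (variable names are illustrative, not the actual generator's API):

```python
import numpy as np

# 16x16 grid of (dx, dy) shifts applied to bowl + plate positions,
# matching the ranges listed above.
dx_vals = np.linspace(-0.15, 0.0, 16)   # shift along x
dy_vals = np.linspace(-0.1, 0.1, 16)    # shift along y

shifts = [(dx, dy) for dx in dx_vals for dy in dy_vals]
assert len(shifts) == 256               # one retargeted trajectory per cell
```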

### Viewpoint OOD (`/data/libero/ood_viewpoint_task0`)
- 16×16 grid of camera viewpoints on spherical cap
- theta: [0°, 20°], phi: [0°, 360°)
- Same demo trajectory, just rendered from different angles
- Original scene (with distractors)
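A minimal sketch of the spherical-cap viewpoint grid, assuming theta is the polar angle from the nominal camera axis (taken here as +z) and phi the azimuth; the real generator would rotate this into the scene's camera frame and keep the camera aimed at the table:

```python
import numpy as np

def cap_direction(theta_deg: float, phi_deg: float) -> np.ndarray:
    """Unit direction on a spherical cap around the +z axis."""
    t, p = np.radians(theta_deg), np.radians(phi_deg)
    return np.array([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)])

# 16x16 grid of viewpoints matching the ranges above
thetas = np.linspace(0.0, 20.0, 16)                  # [0, 20] deg
phis = np.linspace(0.0, 360.0, 16, endpoint=False)   # [0, 360) deg
dirs = [cap_direction(t, p) for t in thetas for p in phis]
assert len(dirs) == 256
```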

## Training Setup
- **Model**: PARA and ACT
- **Backbone**: DINOv3 ViT-S/16
- **No rotation**: `--skip_rotation` flag, `--zero_rotation` at eval
- **Eval**: teleport servo + move-then-grip
- **Loss**: EMA adaptive weighting (volume CE + gripper 2-class CE)
- **N_WINDOW**: 4, frame_stride: 3
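One plausible form of the EMA adaptive weighting is to normalize each loss term by an exponential moving average of its own magnitude, so the volume CE and gripper CE contribute comparably; this is a sketch, and the repo's `train.py` may differ in detail:

```python
class EMALossBalancer:
    """Balance loss terms by dividing each by an EMA of its magnitude.

    The EMA is tracked on detached scalar values, so the weights
    themselves carry no gradient.
    """

    def __init__(self, n_losses: int = 2, beta: float = 0.99, eps: float = 1e-8):
        self.beta = beta
        self.eps = eps
        self.ema = [None] * n_losses

    def __call__(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            val = float(loss)  # detached magnitude for the running average
            if self.ema[i] is None:
                self.ema[i] = val
            else:
                self.ema[i] = self.beta * self.ema[i] + (1 - self.beta) * val
            total = total + loss / (self.ema[i] + self.eps)
        return total

balancer = EMALossBalancer()
# e.g. total = balancer([volume_ce, gripper_ce]) inside the train loop
```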

## Current Experiment Plan

### Phase 1: Object Position Generalization
- [ ] Generate 16×16 OOD object position dataset (256 trajectories) — IN PROGRESS (~2.5 hr)
- [ ] Train on ~8×8 grid subset (64 trajectories, evenly spaced)
- [ ] Evaluate on sampled held-out grid points
- [ ] Compare PARA vs ACT generalization

### Phase 2: Viewpoint Generalization
- [x] Generate 16×16 OOD viewpoint dataset (256 viewpoints) — DONE
- [ ] Train on subset, evaluate on held-out viewpoints
- [ ] Compare PARA vs ACT

### Train/Test Split for Object Position
- **Train**: every other grid point = 8×8 = 64 demos (indices where `i % 2 == 0 and j % 2 == 0`)
- **Test**: remaining 192 grid points (interpolation + extrapolation)
- **Eval subset**: sample ~20 test points for quick evaluation
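The checkerboard-style split above can be sketched directly; the `[::stride]` eval-subset selection is one simple way to pick ~20 roughly evenly spaced held-out points, not necessarily the sampling the experiments will use:

```python
# Train on every other grid point; test on the rest (illustrative).
train_idx = [(i, j) for i in range(16) for j in range(16)
             if i % 2 == 0 and j % 2 == 0]
test_idx = [(i, j) for i in range(16) for j in range(16)
            if not (i % 2 == 0 and j % 2 == 0)]

assert len(train_idx) == 64    # 8x8 training grid
assert len(test_idx) == 192    # interpolation + extrapolation points

# Quick-eval subset: ~20 evenly spaced held-out points
eval_subset = test_idx[:: len(test_idx) // 20][:20]
assert len(eval_subset) == 20
```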

### Phase 3: Combined
- [ ] Cross-product of position × viewpoint (later)

## Files
| File | Purpose |
|------|---------|
| `generate_ood_objpos.py` | Generate OOD object position dataset |
| `generate_ood_viewpoint.py` | Generate OOD viewpoint dataset |
| `train.py` | Training with `--skip_rotation`, `--max_demos`, EMA loss |
| `eval.py` | Eval with `--teleport`, `--zero_rotation`, `--max_steps 600` |
| `model.py` | PARA model (N_WINDOW=4, 2-class gripper, 1×1 conv heads) |
| `model_act.py` | ACT baseline (sigmoid outputs, binary gripper) |
| `prerender_zero_rot.py` | Generate zero-rotation demo dataset |

## Worktree
All code lives in `/data/cameron/para_normalized_losses/libero/` (git worktree `para_normalized_losses`).
