# Our Video Model

Predicts VidTok-style continuous latents (32×32) from 8-frame 256×256 video: the first frame is encoded into DINO patch tokens, the remaining frames get learnable patch embeddings, a transformer applies self-attention over all tokens, and a conv head decodes the latents. Trained with MSE against VidTok tokenizer latents (computed with no grad).
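
A minimal PyTorch sketch of this forward pass. Everything here is illustrative: the embedding width, transformer depth, and the 8 latent channels (guessed from the 288_8chn config name) are assumptions, and the real `model.py` may lay out the VidTok latent grid differently (e.g. with temporal compression).

```python
import torch
import torch.nn as nn

class VideoLatentSketch(nn.Module):
    """Illustrative sketch only; names and dims do not come from model.py."""

    def __init__(self, num_frames=8, grid=16, dim=768, latent_ch=8, depth=6):
        super().__init__()
        # Learnable patch tokens for frames 2..T; frame 1 uses DINO features.
        self.learnable = nn.Parameter(torch.zeros(num_frames - 1, grid * grid, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        # Upsample each frame's 16x16 token grid to a 32x32 latent map.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(dim, dim // 2, kernel_size=2, stride=2),
            nn.GELU(),
            nn.Conv2d(dim // 2, latent_ch, kernel_size=3, padding=1),
        )
        self.num_frames, self.grid, self.dim = num_frames, grid, dim

    def forward(self, first_frame_feats):  # (B, 256, dim) DINO patch tokens
        B = first_frame_feats.shape[0]
        tokens = torch.cat(
            [first_frame_feats.unsqueeze(1), self.learnable.expand(B, -1, -1, -1)],
            dim=1,
        )  # (B, T, 256, dim)
        tokens = self.transformer(tokens.flatten(1, 2))  # attend across all frames
        tokens = tokens.view(B * self.num_frames, self.grid, self.grid, self.dim)
        latents = self.head(tokens.permute(0, 3, 1, 2))  # (B*T, latent_ch, 32, 32)
        return latents.view(B, self.num_frames, -1, 32, 32)
```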

## Setup

- **DINO**: expects the keygrip repo at `../keygrip`, with `dinov3/` and pretrained weights.
- **VidTok**: expects the VidTok repo at `../VidTok`, with the 288_8chn v1.1 config and checkpoint.
- **Data**: MP4s under `/data/weiduoyuan/droid_raw/1.0.1/` (e.g. `.../recordings/MP4/*.mp4`). Frames are sampled at ~4 fps (see the sketch after this list).
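
A rough sketch of the ~4 fps sampling, assuming OpenCV decoding; the actual `dataset.py` may use a different decoder and handle temporal offsets, short clips, and normalization differently:

```python
import cv2

def sample_frames(path, num_frames=8, target_fps=4.0, size=256):
    """Read num_frames frames at roughly target_fps, resized to size x size.

    Illustrative only; not the real DroidVideoDataset logic.
    """
    cap = cv2.VideoCapture(path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unknown
    stride = max(1, round(src_fps / target_fps))  # e.g. 30 fps -> every ~8th frame
    frames, idx = [], 0
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frame = cv2.resize(frame, (size, size))
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames  # list of (size, size, 3) uint8 arrays
```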

## Train

From the repo root (`vidgen`):

```bash
python our_vid_model/train.py --keygrip ../keygrip --data-root /data/weiduoyuan/droid_raw/1.0.1 --wandb-project our_vid_model
```

Options: `--batch-size`, `--lr`, `--steps`, `--vis-every` (default 100), `--vidtok`, `--vidtok-config`, `--vidtok-ckpt`.
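
For example, a short debugging run with more frequent visualizations (all values illustrative):

```bash
python our_vid_model/train.py --keygrip ../keygrip \
  --data-root /data/weiduoyuan/droid_raw/1.0.1 \
  --batch-size 4 --lr 1e-4 --steps 2000 --vis-every 50
```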

## Layout

- `model.py`: VideoLatentModel (DINO featurizer + learnable patches + transformer + conv head to 32×32 latents).
- `dino_featurizer.py`: loads DINO from keygrip and extracts 16×16 patch features from the first frame.
- `dataset.py`: DroidVideoDataset (lists MP4s, samples 8 frames at ~4 fps, resizes to 256×256).
- `train.py`: MSE on VidTok latents, wandb logging, frame visualizations every N steps (a training-step sketch follows this list).
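
A sketch of one training step as described above; `encode_video` is a hypothetical stand-in for however the VidTok tokenizer is actually invoked in `train.py`:

```python
import torch
import torch.nn.functional as F

def train_step(model, vidtok, optimizer, video):  # video: (B, T, 3, 256, 256)
    with torch.no_grad():                    # tokenizer targets carry no gradient
        target = vidtok.encode_video(video)  # hypothetical helper -> continuous latents
    pred = model(video)                      # predicted latents, same shape as target
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    optimizer.step()
    return loss.item()
```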
