Render your Hyperframes compositions to MP4, MOV, or WebM with the CLI. The rendering pipeline is frame-by-frame and seek-driven — see Deterministic Rendering for how this works under the hood.

Getting Started

1. Verify your environment

Run the diagnostics command to check for required dependencies:
Terminal
npx hyperframes doctor
Expected output:
✓ Node.js    v22.x
✓ FFmpeg      7.x
✓ FFprobe     7.x
✓ Chrome      (bundled)
✓ Docker      available
2. Preview your composition

Before rendering, preview your composition in the browser to verify it looks correct:
Terminal
npx hyperframes preview
3. Render to MP4

Run the render command from your project directory:
Terminal
npx hyperframes render --output output.mp4
Expected output:
⠋ Rendering composition "root" (30fps, standard quality)
✓ Captured 240 frames in 8.2s
✓ Encoded to output.mp4 (8.0s, 1920x1080, 4.2MB)

Rendering Modes

Local Mode (default)

Uses Puppeteer (bundled Chromium) and your system’s FFmpeg. Fast for iteration during development. Requires FFmpeg installed on your system; see Troubleshooting if FFmpeg is not found.
Terminal
npx hyperframes render --output output.mp4
Pros:
  • Fast startup, no container overhead
  • Uses your system GPU for hardware-accelerated encoding (with --gpu)
  • Best for iterative development
Cons:
  • Output may vary across platforms due to font and Chrome version differences
  • Not suitable for CI/CD pipelines that require reproducibility

When to Use Each Mode

Scenario                          Recommended Mode
Local development and iteration   Local
CI/CD pipeline                    Docker
Sharing renders with a team       Docker
Quick preview export              Local
AI agent-driven rendering         Docker
Benchmarking performance          Local

Options

Flag                       Values                  Default              Description
--output                   path                    renders/<name>.mp4   Output file path
--format                   mp4, mov, webm          mp4                  Output format (see Transparent Video below)
--fps                      24, 30, 60              30                   Frames per second
--quality                  draft, standard, high   standard             Encoding quality preset
--workers                  1-8 or auto             auto                 Parallel render workers (see Workers below)
--max-concurrent-renders   1-10                    2                    Max simultaneous renders via the producer server (see Concurrent Renders below)
--gpu                      (flag)                  off                  GPU encoding (NVENC, VideoToolbox, VAAPI)
--docker                   (flag)                  off                  Use Docker for deterministic rendering
--quiet                    (flag)                  off                  Suppress verbose output

Workers

Each render worker launches a separate Chrome browser process to capture frames in parallel. More workers can speed up rendering, but each one consumes ~256 MB of RAM and significant CPU.

Default behavior

By default, Hyperframes uses half of your CPU cores, capped at 4:
Machine            CPU cores   Default workers
MacBook Air (M1)   8           4
MacBook Pro (M3)   12          4 (capped)
4-core laptop      4           2
2-core VM          2           1
This is intentionally conservative. Each worker spawns its own Chrome process, so the per-worker overhead is significant. Using fewer workers avoids resource contention with FFmpeg encoding and your other applications.
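The default described above (half the CPU cores, at least 1, capped at 4) can be sketched as follows. This is an illustrative reconstruction matching the table, not the actual Hyperframes source:

```typescript
import * as os from "node:os";

// Sketch of the default worker heuristic: half the CPU cores,
// floored, with a minimum of 1 and a cap of 4.
function defaultWorkers(cores: number = os.cpus().length): number {
  return Math.min(4, Math.max(1, Math.floor(cores / 2)));
}
```

For example, `defaultWorkers(12)` yields 4 (capped), while `defaultWorkers(2)` yields 1.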

Choosing a worker count

Terminal
# Explicit worker count
npx hyperframes render --workers 1 --output output.mp4

# Let Hyperframes pick based on your CPU
npx hyperframes render --workers auto --output output.mp4

# Maximum parallelism (use with caution on laptops)
npx hyperframes render --workers 8 --output output.mp4
Start with the default. If renders feel slow and your system has headroom (check Activity Monitor / htop), try increasing --workers. If you see high memory pressure or fan noise, reduce it.

When to use 1 worker

  • Short compositions (under 2 seconds / 60 frames) — parallelism overhead exceeds the benefit
  • Low-memory machines (4 GB or less)
  • Running renders alongside other heavy processes (video editing, large builds)

When to increase workers

  • Long compositions (30+ seconds) on a machine with 8+ cores and 16+ GB RAM
  • Dedicated render machines or CI runners
  • Docker mode on a well-provisioned host

Concurrent Renders

When multiple render requests hit the producer server simultaneously (common with AI agents), each render spawns its own set of Chrome worker processes. Too many concurrent renders can exhaust CPU and cause failures. The producer server uses a request-level semaphore to queue renders. Only maxConcurrentRenders renders execute at a time — additional requests wait in a FIFO queue until a slot opens.
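The request-level semaphore behaves roughly like the sketch below: at most `limit` renders run at once, and additional requests wait in FIFO order. Class and method names here are illustrative, not the actual Hyperframes internals:

```typescript
// Minimal FIFO semaphore sketch. acquire() resolves immediately when a
// slot is free; otherwise the caller waits until release() hands over
// a slot, in arrival order.
class RenderSemaphore {
  private active = 0;
  private queue: Array<() => void> = [];

  constructor(private readonly limit: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.limit) {
      this.active++;
      return;
    }
    // At capacity: park in the FIFO queue until a slot is handed over.
    await new Promise<void>((resolve) => this.queue.push(() => resolve()));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) {
      next(); // hand the slot directly to the oldest waiter
    } else {
      this.active--;
    }
  }
}
```

A render handler would wrap its work in `acquire()`/`release()`, so excess requests queue instead of spawning Chrome processes that fight for CPU.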

Configuration

Terminal
# CLI flag
npx hyperframes render --max-concurrent-renders 2 --output output.mp4

# Environment variable (for the producer server)
PRODUCER_MAX_CONCURRENT_RENDERS=2
The default is 2 concurrent renders, which works well on 8-core machines where each render uses 2-3 workers.

Queue status

The producer server exposes a GET /render/queue endpoint that returns the current state:
{
  "maxConcurrentRenders": 2,
  "activeRenders": 1,
  "queuedRenders": 3
}
AI agents can poll this endpoint to decide whether to submit a render or wait.
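A polling client might look like the following sketch. The `QueueStatus` shape comes from the response above; the base URL and polling interval are assumptions:

```typescript
// Shape of the GET /render/queue response shown above.
interface QueueStatus {
  maxConcurrentRenders: number;
  activeRenders: number;
  queuedRenders: number;
}

// A slot is free when fewer renders are active than the limit.
function hasFreeSlot(status: QueueStatus): boolean {
  return status.activeRenders < status.maxConcurrentRenders;
}

// Hypothetical helper: poll the queue endpoint until a slot opens.
async function waitForSlot(baseUrl: string, intervalMs = 2000): Promise<void> {
  for (;;) {
    const res = await fetch(`${baseUrl}/render/queue`);
    const status = (await res.json()) as QueueStatus;
    if (hasFreeSlot(status)) return;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```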

SSE queue events

When using the streaming endpoint (POST /render/stream), queued requests receive a queued event before rendering begins:
{"type": "queued", "requestId": "...", "position": 2}
This lets agents report “waiting in queue” to users rather than appearing stuck.
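An agent consuming the stream might handle the queued event as sketched below. The event shape matches the docs; the `requestId` value and the message wording are illustrative:

```typescript
// Shape of the queued event emitted on POST /render/stream.
interface QueuedEvent {
  type: "queued";
  requestId: string;
  position: number;
}

// Parse one SSE data line; return the event only if it is a queued event.
function parseQueued(line: string): QueuedEvent | null {
  const event = JSON.parse(line);
  return event && event.type === "queued" ? (event as QueuedEvent) : null;
}

// Turn the event into a user-facing status message.
function describeQueued(event: QueuedEvent): string {
  return `Waiting in queue (position ${event.position})`;
}
```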

Choosing a concurrency limit

Machine              CPU cores   Recommended limit
4-core VM            4           1
8-core workstation   8           2
16-core server       16          3-4
32-core render box   32          5-6
When in doubt, use 1. Renders will queue up and execute sequentially, but each one gets full CPU and finishes as fast as possible. This is better than 3 renders fighting for CPU and all finishing slowly — or failing.

Transparent Video

Hyperframes supports rendering with a transparent background — useful for overlays, lower thirds, subscribe cards, and any element you want to composite over other footage in a video editor.
Terminal
npx hyperframes render --format mov --output overlay.mov
MOV with ProRes 4444 is the industry standard for transparent video. It works in all major video editors:
  • CapCut
  • Final Cut Pro
  • Adobe Premiere Pro
  • DaVinci Resolve
  • After Effects
ProRes MOV files are large (typically 5-40 MB for short clips) because ProRes is a high-quality intermediate codec optimized for editing, not delivery. This is expected — the same tradeoff Remotion and professional pipelines make.

Format comparison

Format   Codec         Transparency   Video editors                                         Browsers          File size
MOV      ProRes 4444   Yes            CapCut, Final Cut, Premiere, DaVinci, After Effects   No                Large
WebM     VP9           Yes            None (shows black background)                         Chrome, Firefox   Small
MP4      H.264         No             All                                                   All               Small
WebM VP9 alpha is technically supported but all major video editors ignore the alpha channel and render transparent areas as black. Only Chromium-based browsers (Chrome, Arc, Brave, Edge) decode VP9 alpha correctly. Safari does not support it. Use MOV for editor workflows and WebM only for browser-based playback.

How it works

When you render with --format mov or --format webm, Hyperframes:
  1. Captures each frame as a PNG with alpha channel (instead of JPEG for MP4)
  2. Sets Chrome’s page background to transparent via Emulation.setDefaultBackgroundColorOverride
  3. Encodes with an alpha-capable codec (ProRes 4444 for MOV, VP9 for WebM)
Your composition’s HTML should not set a background on html or body — leave it unset so the transparent background comes through.
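The per-format choices in the steps above can be summarized as a small lookup. The codec and frame-format names come from this page; the function itself is an illustration, not the Hyperframes source:

```typescript
// Per-format pipeline choices: alpha-capable formats capture PNG frames
// and use an alpha-capable codec; MP4 captures JPEG and drops alpha.
const PIPELINES = {
  mov:  { frameFormat: "png",  codec: "prores_4444", alpha: true },
  webm: { frameFormat: "png",  codec: "vp9",         alpha: true },
  mp4:  { frameFormat: "jpeg", codec: "h264",        alpha: false },
} as const;

function pipelineFor(format: keyof typeof PIPELINES) {
  return PIPELINES[format];
}
```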

Authoring transparent compositions

<style>
  /* Do NOT set background on html/body — leave them transparent */
  * { margin: 0; padding: 0; box-sizing: border-box; }

  [data-composition-id="my-overlay"] {
    position: relative;
    width: 1920px;
    height: 1080px;
    overflow: hidden;
    /* No background here either */
  }
</style>
Only the visible elements (cards, text, images) will appear in the final video. Everything else will be transparent.

Verifying transparency

  • In a browser: Open the MOV file — it won’t play (ProRes is not a browser codec). Instead, render a WebM copy and open it in Chrome on a checkerboard background page.
  • In a video editor: Import the MOV file and place it on a track above other footage. Transparent areas should show the footage below.
  • Online tool: Use rotato.app/tools/transparent-video to verify your MOV or WebM has working transparency.

Tips

  • Use draft quality during development for fast previews; switch to standard or high for final output
  • Use npx hyperframes benchmark to find optimal settings for your system
  • Docker mode is slower but guarantees identical output across platforms
  • For compositions with many frames, --gpu can significantly speed up local encoding

Next Steps

Deterministic Rendering

Understand the determinism guarantees

CLI Reference

Full list of CLI commands and flags

Troubleshooting

Fix common rendering issues

Common Mistakes

Avoid pitfalls that affect render output