The hyperframes CLI is the primary way to work with Hyperframes. It handles project creation, live preview, rendering, linting, and diagnostics — all from your terminal.
npm install -g hyperframes
# or use directly with npx
npx hyperframes <command>

When to Use

Use the CLI when you want to:
  • Create a new composition project from a template
  • Preview compositions with live hot reload during development
  • Render compositions to MP4 (locally or in Docker)
  • Lint compositions for structural issues
  • Check your environment for missing dependencies
Use a different package if you want to:
  • Render programmatically from Node.js code — use the producer
  • Build a custom frame capture pipeline — use the engine
  • Embed a composition editor in your own web app — use the studio
  • Parse or generate composition HTML in code — use core
The CLI is the recommended starting point for all Hyperframes users. It wraps the producer, engine, and studio packages so you do not need to install them separately.

Agent-Friendly by Default

The CLI is non-interactive by default — designed so AI agents (Claude Code, Gemini CLI, Codex, Cursor) can drive every command without prompts or interactive UI.
  • All inputs are passed via flags (e.g., --template, --video, --output)
  • Missing required flags fail fast with a clear error and usage example
  • Output is plain text suitable for parsing
  • No interactive prompts, spinners, or selection menus
Add --human-friendly to any command to enable the interactive terminal UI with prompts, spinners, and selection menus.
# Fully non-interactive — all inputs from flags
npx hyperframes init my-video --template blank --video video.mp4
npx hyperframes render --output output.mp4 --fps 30 --quality standard
npx hyperframes upgrade --check --json

JSON Output and _meta Envelope

All commands that support --json wrap their output with a _meta field containing version check info:
{
  "name": "my-video",
  "duration": 10.5,
  "_meta": {
    "version": "0.1.4",
    "latestVersion": "0.1.5",
    "updateAvailable": true
  }
}
This allows agents to detect outdated versions from any command’s output without running a separate upgrade check. The version data comes from a 24-hour cache — no network request is made during --json output.
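For an agent consuming this envelope, a minimal sketch of the check (assumes python3 is on PATH; the JSON is inlined here for illustration rather than captured from a live command):

```shell
# Extract _meta.updateAvailable from a command's --json output.
# In practice the JSON would come from e.g. `npx hyperframes compositions --json`.
json='{"name":"my-video","duration":10.5,"_meta":{"version":"0.1.4","latestVersion":"0.1.5","updateAvailable":true}}'
update=$(printf '%s' "$json" | python3 -c 'import json,sys; print(str(json.load(sys.stdin)["_meta"]["updateAvailable"]).lower())')
echo "$update"   # prints: true
```

The same one-liner applies to any command that supports --json, since each wraps its output in the _meta envelope.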

Passive Update Notices

The CLI checks npm for newer versions in the background (cached 24 hours). If an update is available, a notice appears on stderr after command completion:
  Update available: 0.1.4 → 0.1.5
  Run: npx hyperframes@latest
This is suppressed in CI environments, non-TTY shells, and when HYPERFRAMES_NO_UPDATE_CHECK=1 is set.
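To force the notice off in scripts regardless of environment, export the documented variable once per session:

```shell
# Suppress the passive update notice for all hyperframes invocations
# in this shell session (the CLI also auto-suppresses in CI and non-TTY shells).
export HYPERFRAMES_NO_UPDATE_CHECK=1
```

Commands run afterward in the same shell complete without the stderr notice.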

Getting Started

1. Create a project

Scaffold a new composition from a template, passing the project name as an argument:
npx hyperframes init my-video --template warm-grain
In --human-friendly mode, the CLI prompts for the project name instead of requiring it as an argument.
See Templates for all available templates.
2. Preview in browser

Start the development server with live hot reload:
cd my-video
npx hyperframes preview
The Hyperframes Studio opens in your browser. Edit index.html and the preview updates instantly.
3. Lint your composition

Check for structural issues before rendering:
npx hyperframes lint
◆  Linting my-project/index.html

◇  0 errors, 0 warnings
4. Render to MP4

Produce the final video:
npx hyperframes render --output output.mp4
For deterministic output, add --docker:
npx hyperframes render --docker --output output.mp4

Commands

init

Create a new composition project from a template:
# Agent mode (default) — --template is required
npx hyperframes init my-video --template blank --video video.mp4

# Human mode — interactive prompts
npx hyperframes init --human-friendly
Flag                Description
--template, -t      Template to use (required in default mode, interactive in --human-friendly)
--video, -V         Path to a video file (MP4, WebM, MOV)
--audio, -a         Path to an audio file (MP3, WAV, M4A)
--skip-skills       Skip AI coding skills installation
--skip-transcribe   Skip automatic whisper transcription
--model             Whisper model for transcription (e.g. small.en, medium.en, large-v3)
--language          Language code for transcription (e.g. en, es, ja). Filters non-target speech.
--human-friendly    Enable interactive terminal UI with prompts
Template     Description
blank        Empty composition — just the scaffolding
warm-grain   Cream aesthetic with grain texture
play-mode    Playful elastic animations
swiss-grid   Structured grid layout
vignelli     Bold typography with red accents
In default (agent) mode, --template is required — the CLI errors with a usage example if missing. In --human-friendly mode, you choose interactively.

When --video or --audio is provided, the CLI automatically transcribes the audio with Whisper and patches captions into the composition (use --skip-transcribe to disable).

After scaffolding, the CLI installs AI coding skills for Claude Code, Gemini CLI, and Codex CLI (use --skip-skills to disable). See the skills command.

See Templates for full details.

compositions

List all compositions in the current project:
npx hyperframes compositions
Flag     Description
--json   Output as JSON
Shows each composition’s ID, duration, resolution, and element count.
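A sketch of what the --json output might contain. The field names below are illustrative assumptions, not a documented schema; the source only states that ID, duration, resolution, and element count are reported, and that --json output carries the _meta envelope:

```json
{
  "compositions": [
    { "id": "main", "duration": 10.5, "resolution": "1920x1080", "elements": 12 }
  ],
  "_meta": { "version": "0.1.4", "latestVersion": "0.1.5", "updateAvailable": false }
}
```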

transcribe

Transcribe audio/video to word-level timestamps, or import an existing transcript:
# Transcribe audio/video with local whisper.cpp
npx hyperframes transcribe audio.mp3
npx hyperframes transcribe video.mp4 --model medium.en --language en

# Import existing transcripts from other tools
npx hyperframes transcribe subtitles.srt
npx hyperframes transcribe captions.vtt
npx hyperframes transcribe openai-response.json
Flag             Description
--dir, -d        Project directory (default: current directory)
--model, -m      Whisper model (default: small.en). Options: tiny.en, base.en, small.en, medium.en, large-v3
--language, -l   Language code (e.g. en, es, ja). Filters out non-target language speech.
--json           Output result as JSON
The command auto-detects the input type. Audio/video files are transcribed with whisper.cpp. Transcript files (.json, .srt, .vtt) are normalized and imported.

Supported transcript formats:
Format                    Source
whisper.cpp JSON          hyperframes init --video, hyperframes transcribe
OpenAI Whisper API JSON   openai.audio.transcriptions.create() with word timestamps
SRT subtitles             Video editors, YouTube, subtitle tools
VTT subtitles             Web players, YouTube, transcription services
All formats are normalized to a standard [{text, start, end}] word array and saved as transcript.json. If the project has caption HTML files, they are automatically patched with the transcript data.
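As a concrete illustration of that shape, a minimal transcript.json might look like this (the words and timings are invented for the example; times are in seconds):

```json
[
  { "text": "Welcome",     "start": 0.0,  "end": 0.42 },
  { "text": "to",          "start": 0.42, "end": 0.55 },
  { "text": "HyperFrames", "start": 0.55, "end": 1.2 }
]
```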
For music or noisy audio, use --model medium.en for better accuracy. For the best results with production content, transcribe via the OpenAI or Groq Whisper API and import the JSON.

tts

Generate speech audio from text using a local AI model (Kokoro-82M). No API key required — runs entirely on-device.
# Generate speech from text
npx hyperframes tts "Welcome to HyperFrames"

# Choose a voice
npx hyperframes tts "Hello world" --voice am_adam

# Save to a specific file
npx hyperframes tts "Intro" --voice bf_emma --output narration.wav

# Adjust speech speed
npx hyperframes tts "Slow and clear" --speed 0.8

# Read text from a file
npx hyperframes tts script.txt

# List available voices
npx hyperframes tts --list
Flag           Description
--output, -o   Output file path (default: speech.wav in current directory)
--voice, -v    Voice ID (run --list to see options)
--speed, -s    Speech speed multiplier (default: 1.0)
--list         List available voices and exit
--json         Output result as JSON
Combine tts with transcribe to produce narration and caption timings in a single workflow: generate the audio with tts, then run transcribe on the resulting file to get word-level timestamps.
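Sketched end to end, using only flags documented above (narration.wav is an arbitrary file name):

```shell
# 1. Generate narration audio on-device with Kokoro
npx hyperframes tts "Welcome to HyperFrames" --voice am_adam --output narration.wav

# 2. Transcribe it to word-level timestamps (writes transcript.json and
#    patches caption HTML files if the project has them)
npx hyperframes transcribe narration.wav
```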

Producer

The rendering pipeline the CLI calls under the hood. Use directly for programmatic rendering.

Studio

The editor UI that powers hyperframes preview. Use directly to embed in your own app.

Core

Types, linter, and runtime. Use directly for custom tooling and integrations.

Engine

The capture engine. Use directly for custom frame capture pipelines.