Create, preview, and render HTML video compositions from the command line.
The hyperframes CLI is the primary way to work with Hyperframes. It handles project creation, live preview, rendering, linting, and diagnostics — all from your terminal.
```shell
npm install -g hyperframes

# or use directly with npx
npx hyperframes <command>
```
Preview compositions with live hot reload during development
Render compositions to MP4 (locally or in Docker)
Lint compositions for structural issues
Check your environment for missing dependencies
Use a different package if you want to:
Render programmatically from Node.js code — use the producer
Build a custom frame capture pipeline — use the engine
Embed a composition editor in your own web app — use the studio
Parse or generate composition HTML in code — use core
The CLI is the recommended starting point for all Hyperframes users. It wraps the producer, engine, and studio packages so you do not need to install them separately.
The CLI is non-interactive by default — designed so AI agents (Claude Code, Gemini CLI, Codex, Cursor) can drive every command without prompts or interactive UI.
All inputs are passed via flags (e.g., --template, --video, --output)
Missing required flags fail fast with a clear error and usage example
Output is plain text suitable for parsing
No interactive prompts, spinners, or selection menus
Add --human-friendly to any command to enable the interactive terminal UI with prompts, spinners, and selection menus.
The CLI checks npm for newer versions in the background (cached for 24 hours). If an update is available, a notice appears on stderr after command completion. This allows agents to detect outdated versions from any command's output without running a separate upgrade check. The version data comes from the 24-hour cache — no network request is made during --json output.
| Flag | Description |
|------|-------------|
| `--template` | Template to use (required in default mode, interactive in `--human-friendly`) |
| `--video, -V` | Path to a video file (MP4, WebM, MOV) |
| `--audio, -a` | Path to an audio file (MP3, WAV, M4A) |
| `--skip-skills` | Skip AI coding skills installation |
| `--skip-transcribe` | Skip automatic Whisper transcription |
| `--model` | Whisper model for transcription (e.g. `small.en`, `medium.en`, `large-v3`) |
| `--language` | Language code for transcription (e.g. `en`, `es`, `ja`). Filters non-target speech. |
| `--human-friendly` | Enable interactive terminal UI with prompts |
| Template | Description |
|----------|-------------|
| `blank` | Empty composition — just the scaffolding |
| `warm-grain` | Cream aesthetic with grain texture |
| `play-mode` | Playful elastic animations |
| `swiss-grid` | Structured grid layout |
| `vignelli` | Bold typography with red accents |
In default (agent) mode, `--template` is required — the CLI errors with a usage example if it is missing. In `--human-friendly` mode, you choose a template interactively. When `--video` or `--audio` is provided, the CLI automatically transcribes the audio with Whisper and patches captions into the composition (use `--skip-transcribe` to disable). After scaffolding, the CLI installs AI coding skills for Claude Code, Gemini CLI, and Codex CLI (use `--skip-skills` to disable); see the `skills` command. See Templates for full details.
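As a sketch, a fully non-interactive scaffold that an agent might run (the input file `talk.mp4` is a placeholder):

```shell
# Scaffold with a template, transcribe the video's audio into captions,
# and skip the AI skills installation step — no prompts at any point.
npx hyperframes init --template warm-grain --video talk.mp4 --skip-skills
```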
Transcribe audio/video to word-level timestamps, or import an existing transcript:
```shell
# Transcribe audio/video with local whisper.cpp
npx hyperframes transcribe audio.mp3
npx hyperframes transcribe video.mp4 --model medium.en --language en

# Import existing transcripts from other tools
npx hyperframes transcribe subtitles.srt
npx hyperframes transcribe captions.vtt
npx hyperframes transcribe openai-response.json
```
| Flag | Description |
|------|-------------|
| `--dir, -d` | Project directory (default: current directory) |
| `--model, -m` | Whisper model (default: `small.en`). Options: `tiny.en`, `base.en`, `small.en`, `medium.en`, `large-v3` |
| `--language, -l` | Language code (e.g. `en`, `es`, `ja`). Filters out non-target language speech. |
| `--json` | Output result as JSON |
The command auto-detects the input type. Audio/video files are transcribed with whisper.cpp. Transcript files (`.json`, `.srt`, `.vtt`) are normalized and imported.

Supported transcript formats:

| Format | Source |
|--------|--------|
| whisper.cpp JSON | `hyperframes init --video`, `hyperframes transcribe` |
| OpenAI Whisper API JSON | `openai.audio.transcriptions.create()` with word timestamps |
| SRT subtitles | Video editors, YouTube, subtitle tools |
| VTT subtitles | Web players, YouTube, transcription services |
All formats are normalized to a standard [{text, start, end}] word array and saved as transcript.json. If the project has caption HTML files, they are automatically patched with the transcript data.
For music or noisy audio, use --model medium.en for better accuracy. For the best results with production content, transcribe via the OpenAI or Groq Whisper API and import the JSON.
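A minimal `transcript.json` in the normalized shape described above (the words and timings here are illustrative, not real output):

```json
[
  { "text": "Welcome", "start": 0.0, "end": 0.42 },
  { "text": "to", "start": 0.42, "end": 0.55 },
  { "text": "Hyperframes", "start": 0.55, "end": 1.2 }
]
```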
Generate speech audio from text using a local AI model (Kokoro-82M). No API key required — runs entirely on-device.
```shell
# Generate speech from text
npx hyperframes tts "Welcome to HyperFrames"

# Choose a voice
npx hyperframes tts "Hello world" --voice am_adam

# Save to a specific file
npx hyperframes tts "Intro" --voice bf_emma --output narration.wav

# Adjust speech speed
npx hyperframes tts "Slow and clear" --speed 0.8

# Read text from a file
npx hyperframes tts script.txt

# List available voices
npx hyperframes tts --list
```
| Flag | Description |
|------|-------------|
| `--output, -o` | Output file path (default: `speech.wav` in current directory) |
| `--voice, -v` | Voice ID (run `--list` to see options) |
| `--speed, -s` | Speech speed multiplier (default: 1.0) |
| `--list` | List available voices and exit |
| `--json` | Output result as JSON |
Combine `tts` with `transcribe` to produce narration plus word-level caption timing in a single workflow: generate the audio with `tts`, then run `transcribe` on the output file.
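Sketched as two commands, using flags and voices from the examples above (`script.txt` is a placeholder):

```shell
# 1. Generate narration from a script file
npx hyperframes tts script.txt --voice am_adam --output narration.wav

# 2. Transcribe the narration to word-level timestamps for captions
npx hyperframes transcribe narration.wav
```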
Opens your composition in the Hyperframes Studio with live preview. Edits to index.html and any referenced sub-compositions are reflected automatically. The preview uses the same Hyperframes runtime as production rendering, so what you see is what you get.

The preview server runs in three modes, auto-detected:
Embedded mode (default for npx) — runs a standalone server with the studio bundled in the CLI. Zero extra dependencies.
Local studio mode — if @hyperframes/studio is installed in your project’s node_modules, spawns Vite with full HMR for faster iteration.
Monorepo mode — if running from the Hyperframes source repo, spawns the studio dev server directly.
```
◆ Linting my-project/index.html
  ✗ missing_gsap_script: Composition uses GSAP but no GSAP script is loaded.
  ⚠ unmuted-video [clip-1]: Video should have the 'muted' attribute for reliable autoplay.
◇ 1 error(s), 1 warning(s)
```
By default only errors and warnings are printed. Info-level findings (e.g., external script dependency notices) are hidden to keep output clean for agents and CI. Use --verbose to include them.
| Flag | Description |
|------|-------------|
| `--json` | Output findings as JSON (includes `errorCount`, `warningCount`, `infoCount`, and a `findings` array) |
| `--verbose` | Include info-level findings in output (hidden by default) |
Severity levels:
Error (✗) — must fix before rendering (e.g., missing adapter library, invalid attributes)
Warning (⚠) — likely issues that may cause unexpected behavior
Info (ℹ) — informational notices, shown only with --verbose
The linter detects missing attributes, missing adapter libraries (GSAP, Lottie, Three.js), structural problems, and more. See Common Mistakes for details on each rule.
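In CI, an agent can branch on the JSON counts. A minimal sketch using only POSIX tools — the sample JSON here is hand-written in the documented `--json` shape, not real lint output:

```shell
# A hypothetical lint result in the documented --json shape
lint_json='{"errorCount":1,"warningCount":1,"infoCount":0,"findings":[]}'

# Extract errorCount with POSIX sed (no jq dependency)
errors=$(printf '%s' "$lint_json" | sed -n 's/.*"errorCount":\([0-9]*\).*/\1/p')

# Fail the step when any error-level findings are present
if [ "$errors" -gt 0 ]; then
  echo "lint failed: $errors error(s)"
fi
```

In a real pipeline you would substitute `npx hyperframes lint --json` for the hand-written sample.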
Use --format webm to render compositions with a transparent background. This produces VP9 video with alpha channel in a WebM container — the standard format for overlayable video.
```shell
# Render a caption overlay with transparent background
npx hyperframes render --format webm --output captions.webm

# Overlay on another video with FFmpeg
ffmpeg -c:v libvpx-vp9 -i captions.webm -i background.mp4 \
  -filter_complex "[1:v][0:v]overlay=0:0" -y composited.mp4
```
For transparency to work, your composition’s HTML should use background: transparent on the root elements. WebM renders use PNG frame capture (instead of JPEG) to preserve the alpha channel.
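For example, a composition root styled for alpha output (a sketch — any opaque background on the root chain would defeat the transparent render):

```html
<style>
  /* Keep the root chain transparent so the alpha channel
     survives PNG frame capture. */
  html, body { background: transparent; }
</style>
```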
```
hyperframes doctor
  ✓ Version 0.1.4 (latest)
  ✓ Node.js v22.x (linux x64)
  ✓ FFmpeg 7.x
  ✓ FFprobe 7.x
  ✓ Chrome (system or cached)
  ✓ Docker 24.x
  ✓ Docker running
◇ All checks passed
```
Verifies CLI version, Node.js, FFmpeg, FFprobe, Chrome, and Docker availability. If a newer CLI version is available, the version row shows an upgrade hint.
```shell
npx hyperframes telemetry enable
npx hyperframes telemetry disable
npx hyperframes telemetry status
```
Telemetry collects command names, render performance, template choices, and system info. It does not collect file paths, project names, video content, or personally identifiable information. Disable with HYPERFRAMES_NO_TELEMETRY=1 or the command above.
Install HyperFrames and GSAP skills for AI coding tools:
```shell
# Install to all default targets (Claude Code, Gemini CLI, Codex CLI)
npx hyperframes skills

# Install to specific tools
npx hyperframes skills --claude
npx hyperframes skills --cursor
npx hyperframes skills --claude --gemini
```
| Flag | Description |
|------|-------------|
| `--claude` | Install to Claude Code (`~/.claude/skills/`) |
| `--gemini` | Install to Gemini CLI (`~/.gemini/skills/`) |
| `--codex` | Install to Codex CLI (`~/.codex/skills/`) |
| `--cursor` | Install to Cursor (`.cursor/skills/` in current project) |
Skills are fetched from GitHub and include composition authoring, GSAP animation patterns, and other domain-specific knowledge. The init command also offers to install skills automatically after scaffolding a project.