Documentation
RepoStudio turns any public GitHub repository into a polished 1080p product demo video — automatically. Paste a URL, and the pipeline ingests the code, writes a script grounded in real imports, synthesizes narration, and composites a cinematic MP4.
Stack
| Layer | Technology |
|---|---|
| Framework | Next.js 16 App Router |
| Video render | Remotion 4 — React → MP4 via headless Chrome |
| Auth | NextAuth v5 — GitHub OAuth |
| Database | Supabase (video_jobs table + video-exports bucket) |
| AI script | Gemini 2.5 Pro → 2.5 Flash → 2.0 Flash → 1.5 Pro; Nemotron fallback |
| TTS + captions | ElevenLabs /v1/text-to-speech/{id}/with-timestamps |
| Screenshots | Playwright headless Chromium — 6-frame interaction journey |
| Brand detection | CSS custom-property + Tailwind config parser |
| UI | Framer Motion · Apple Liquid Glass design system |
How It Works
Four stages run in sequence every time you click "Create Video":

1. Ingest
2. Script
3. Audio
4. Render
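A minimal sketch of how the stage sequencing could look. The stage names mirror the job statuses stored in the `video_jobs` table; the function and type names here are illustrative, not the app's real API.

```typescript
// Hypothetical sketch of the four-stage pipeline. Each stage would do
// real work (clone the repo, call Gemini, call ElevenLabs, invoke
// Remotion); here each just records its transition.
type Stage = "ingesting" | "scripting" | "audio" | "rendering";

interface JobState {
  status: Stage | "ready" | "done" | "error";
  log: Stage[];
}

const stages: Stage[] = ["ingesting", "scripting", "audio", "rendering"];

function runPipeline(): JobState {
  const state: JobState = { status: "ready", log: [] };
  for (const stage of stages) {
    state.status = stage;  // what the dashboard would poll
    state.log.push(stage); // stand-in for the stage's real work
  }
  state.status = "done";
  return state;
}
```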
Quick Start
1. Clone and install
```bash
git clone https://github.com/JayantDeveloper/repostudio
cd repostudio
npm install
npx playwright install chromium
```
2. Configure environment variables
Create a .env.local file in the project root. See the Environment Variables section for the full reference.
```bash
# Minimum viable setup (no audio, no persistent DB)
AUTH_SECRET=      # openssl rand -base64 32
GITHUB_ID=        # GitHub OAuth App client ID
GITHUB_SECRET=    # GitHub OAuth App client secret
GEMINI_API_KEY=   # Google AI Studio — free tier works
```
3. Create a GitHub OAuth App
Go to GitHub → Settings → Developer settings → OAuth Apps → New OAuth App.
| Field | Value |
|---|---|
| Homepage URL | http://localhost:3000 (or your Vercel URL) |
| Callback URL | http://localhost:3000/api/auth/callback/github |
Copy the Client ID and generate a Client Secret — paste them into GITHUB_ID and GITHUB_SECRET.
4. Apply the Supabase migration (optional but recommended)
Without Supabase, video jobs are stored in server memory and lost on restart. To persist them, run this SQL in your Supabase Dashboard → SQL Editor:
```sql
create extension if not exists pgcrypto;

create table if not exists public.video_jobs (
  id uuid primary key default gen_random_uuid(),
  user_id text not null,
  repo_url text not null,
  status text not null default 'ready',
  scenes jsonb not null default '[]'::jsonb,
  video_url text,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now(),
  constraint video_jobs_status_check check (
    status in ('ingesting','scripting','audio','face',
               'ready','rendering','done','error')
  )
);

create index if not exists video_jobs_user_updated
  on public.video_jobs (user_id, updated_at desc);

alter table public.video_jobs enable row level security;

-- Postgres does not support "create policy if not exists",
-- so drop any previous copy first to keep the script re-runnable.
drop policy if exists "users manage own jobs" on public.video_jobs;
create policy "users manage own jobs"
  on public.video_jobs
  for all
  using (auth.uid()::text = user_id)
  with check (auth.uid()::text = user_id);

insert into storage.buckets (id, name, public)
values ('video-exports', 'video-exports', true)
on conflict (id) do nothing;
```

5. Run
```bash
npm run dev   # → http://localhost:3000
```
The minimum configuration is GEMINI_API_KEY plus the GitHub OAuth credentials. ElevenLabs and Supabase are optional — the pipeline degrades gracefully without them.

Environment Variables
Auth (required)
| Variable | Description |
|---|---|
| AUTH_SECRET | Random secret for NextAuth session encryption. Generate: openssl rand -base64 32 |
| GITHUB_ID | GitHub OAuth App client ID |
| GITHUB_SECRET | GitHub OAuth App client secret |
Supabase (optional — enables persistent storage)
| Variable | Description |
|---|---|
| NEXT_PUBLIC_SUPABASE_URL | Your project URL: https://xxx.supabase.co |
| NEXT_PUBLIC_SUPABASE_ANON_KEY | Public anon key — safe to expose in browser |
| SUPABASE_SERVICE_ROLE_KEY | Service role key — server-only, bypasses RLS for Storage uploads |
AI — script generation (at least one required)
| Variable | Description | Priority |
|---|---|---|
| GEMINI_API_KEY | Google AI Studio key. Leads the model chain: 2.5 Pro → 2.5 Flash → 2.0 Flash → 1.5 Pro. Free tier available. | 1st |
| NVIDIA_NIM_API_KEY | NVIDIA NIM key for Llama-3.3-Nemotron-Super-49B. Used as fallback if Gemini is unavailable. | 2nd |
| NVIDIA_NIM_BASE_URL | Override the NIM endpoint. Defaults to https://integrate.api.nvidia.com/v1 | optional |
TTS + captions (optional)
| Variable | Description |
|---|---|
| ELEVENLABS_API_KEY | Enables real narration audio and character-level word timestamps for karaoke captions. Without this, the video renders silently with generated timestamps. |
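The with-timestamps endpoint returns character-level alignment (an array of characters plus per-character start and end times). A sketch of collapsing that into the word-level timestamps the karaoke captions need, assuming that response shape; treat the field names as an assumption if the API changes.

```typescript
// Collapse character-level alignment into word-level timestamps.
// Field names follow the ElevenLabs alignment object.
interface Alignment {
  characters: string[];
  character_start_times_seconds: number[];
  character_end_times_seconds: number[];
}

interface WordTiming {
  word: string;
  start: number; // seconds
  end: number;   // seconds
}

function wordsFromAlignment(a: Alignment): WordTiming[] {
  const words: WordTiming[] = [];
  let current = "";
  let start = 0;
  a.characters.forEach((ch, i) => {
    if (ch.trim() === "") {
      // whitespace closes the current word
      if (current) {
        words.push({ word: current, start, end: a.character_end_times_seconds[i - 1] });
        current = "";
      }
    } else {
      if (!current) start = a.character_start_times_seconds[i];
      current += ch;
    }
  });
  if (current) {
    // flush the final word
    words.push({
      word: current,
      start,
      end: a.character_end_times_seconds[a.characters.length - 1],
    });
  }
  return words;
}
```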
Utilities (optional)
| Variable | Description |
|---|---|
| GITHUB_TOKEN | Personal access token. Raises GitHub API rate limit from 60 → 5,000 requests/hour. Useful in production. |
| FIRECRAWL_API_KEY | If present, fetches README via Firecrawl instead of the GitHub API — better Markdown extraction for complex READMEs. |
Background music (optional)
| Variable | Description |
|---|---|
| NEXT_PUBLIC_MUSIC_CINEMATIC_URL | HTTPS URL to a royalty-free instrumental MP3 for the Cinematic mood preset (🎬) |
| NEXT_PUBLIC_MUSIC_UPBEAT_URL | HTTPS URL for the Upbeat mood preset (⚡) |
| NEXT_PUBLIC_MUSIC_MINIMAL_URL | HTTPS URL for the Minimal mood preset (🌊) |
| NEXT_PUBLIC_MUSIC_HYPE_URL | HTTPS URL for the Hype mood preset (🔥) |
Background Music
RepoStudio supports instrumental background music that automatically ducks under the narrator's voice. Music is opt-in — if no mood is selected in the editor, the video renders without music.
Mood presets
| Emoji | Mood | Character |
|---|---|---|
| 🎬 | Cinematic | Epic, orchestral — builds tension and drama |
| ⚡ | Upbeat | Electronic, energetic — forward-moving pulse |
| 🌊 | Minimal | Ambient, calm — focused and unobtrusive |
| 🔥 | Hype | High-energy, punchy — big drops |
Adding tracks
Drop royalty-free MP3 files (no vocals) into public/music/:
```
public/music/
  cinematic.mp3   # 🎬 Cinematic
  upbeat.mp3      # ⚡ Upbeat
  minimal.mp3     # 🌊 Minimal
  hype.mp3        # 🔥 Hype
```
Or set the NEXT_PUBLIC_MUSIC_*_URL environment variables to host tracks externally (Supabase Storage, CDN, etc.).
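A sketch of the resolution order this implies, with the env override winning over the bundled file. The function name is illustrative, not the app's real helper.

```typescript
// Resolve a mood's music source: env override wins, else fall back
// to the bundled file in public/music/.
type Mood = "cinematic" | "upbeat" | "minimal" | "hype";

function musicSrc(mood: Mood, env: Record<string, string | undefined>): string {
  const override = env[`NEXT_PUBLIC_MUSIC_${mood.toUpperCase()}_URL`];
  return override ?? `/music/${mood}.mp3`;
}
```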
How ducking works
| State | Music volume |
|---|---|
| During narration (±0.15 s buffer) | 8% |
| Fading out of speech (0–0.35 s gap) | 8% → 28% linear |
| Between words (silence) | 28% |
| First 30 frames (1 s) | 0% → full (fade-in envelope) |
| Last 45 frames (1.5 s) | full → 0% (fade-out envelope) |
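The narration-driven part of the table can be sketched as a pure volume function of time. The constants mirror the documented behavior; the real Remotion composition may measure the gap differently or apply easing.

```typescript
// Sketch of the ducking envelope: 8% under speech (with a ±0.15 s
// buffer), a linear 8% → 28% ramp over 0.35 s after speech ends,
// and a 28% floor in silence. Fade-in/out envelopes are omitted.
const DUCKED = 0.08;  // during narration
const FLOOR = 0.28;   // between words
const BUFFER = 0.15;  // seconds of ducking around each word
const RELEASE = 0.35; // seconds to ramp back up after speech

function musicVolume(t: number, words: { start: number; end: number }[]): number {
  let gap = Infinity; // time since the nearest finished word (+ buffer)
  for (const w of words) {
    if (t >= w.start - BUFFER && t <= w.end + BUFFER) return DUCKED;
    if (t > w.end + BUFFER) gap = Math.min(gap, t - (w.end + BUFFER));
  }
  if (gap <= RELEASE) {
    // linear ramp 8% → 28% over the release window
    return DUCKED + (FLOOR - DUCKED) * (gap / RELEASE);
  }
  return FLOOR;
}
```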
Video Output Spec
| Property | Value |
|---|---|
| Resolution | 1920 × 1080 (1080p) |
| Frame rate | 30 fps |
| Duration | 25–55 s — LLM-determined by repo complexity |
| Max duration | 60 s hard cap (validated before render) |
| Codec | H.264 via headless Chrome (Remotion) |
| Container | MP4 |
| Scene transitions | 18-frame (0.6 s) cross-dissolve — all scenes render simultaneously |
| Background | Full-bleed screenshot at 108% scale with Ken Burns zoom |
| Ken Burns — hook | Slow zoom in (1.00 → 1.07) from 55% 40% |
| Ken Burns — feature_1 | Slow zoom out (1.06 → 1.00) from 45% 55% |
| Ken Burns — feature_2 | Zoom in (1.00 → 1.08) from 60% 45% |
| Ken Burns — outro | Held (1.03) from 50% 50% |
| Shot cycling | 2 screenshots per scene — 14-frame midpoint crossfade |
| Vignette | Heavy top/bottom, light center (where product UI lives) |
| Lower-third text | 52px, weight 740, 190px from bottom |
| Feature badge | Glass pill, lower-left, brand accent color |
| Watermark | Repo name pill, top-right |
| Captions | Real karaoke word-level timestamps (ElevenLabs) |
| Music | Optional instrumental, ducked to 8% during narration |
| Storage | Supabase Storage (video-exports bucket) or local download |
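The Ken Burns rows above amount to interpolating a scale value across a scene's frames. A minimal sketch assuming linear easing; the real composition likely uses Remotion's `interpolate()` with an easing curve.

```typescript
// Linearly interpolate the Ken Burns scale for a given frame.
// e.g. the hook scene goes 1.00 → 1.07 over its duration.
function kenBurnsScale(
  frame: number,
  totalFrames: number,
  from: number,
  to: number
): number {
  const t = Math.min(Math.max(frame / totalFrames, 0), 1); // clamp 0..1
  return from + (to - from) * t;
}
```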
FAQ
What repos work best?
Public repos with a live deployed app and a populated README. The pipeline captures the live app with Playwright — repos without a deployment still work (it falls back to the GitHub repo page), but the visual quality is higher when there's a real product to screenshot.
Can I use this on private repos?
Not yet. The ingest stage uses the GitHub API which requires repos to be public. Private repo support via personal access token is on the roadmap.
Why does the video have no audio?
No ElevenLabs key is configured. Add ELEVENLABS_API_KEY to your .env.local to enable narration. Without it the video renders silently with subtitles generated from estimated timestamps.
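The simplest form of such an estimator spreads words evenly across the narration window. This is a naive sketch of the idea, not the app's actual estimator, which may weight by word length or speaking rate.

```typescript
// Naive estimated timestamps: divide the scene's narration window
// evenly among the words. Used only when no real TTS alignment exists.
function estimateTimestamps(text: string, durationSec: number) {
  const words = text.split(/\s+/).filter(Boolean);
  const per = durationSec / words.length;
  return words.map((word, i) => ({
    word,
    start: i * per,
    end: (i + 1) * per,
  }));
}
```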
The Supabase banner shows on my dashboard. What do I do?
The video_jobs table hasn't been created in your Supabase project yet. Copy the SQL from the banner in your dashboard and paste it into Supabase Dashboard → SQL Editor → Run. The banner is dismissible and the app works fine without it (in-memory fallback).
The AI script mentions things not in my repo.
This shouldn't happen — the system prompt hard-requires every claim to be traceable to an import in the source files. If you see hallucination, set GEMINI_API_KEY to ensure the most capable model runs first. The fallback models (especially if all fail and buildFallbackScenes is used) produce generic copy that isn't repo-specific.
How do I change the video duration?
The LLM sets durations automatically based on how much the repo has to say. You can override them manually in the editor (UI Editor or Raw JSON mode) after generation. The render uses whatever durations are in the scenes JSON at render time.
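The 60 s hard cap mentioned in the output spec implies a pre-render check over the scenes JSON. A sketch under the assumption of a per-scene `durationSec` field; the real schema's field names may differ.

```typescript
// Enforce the 60 s hard cap before render by summing scene durations.
interface Scene {
  durationSec: number;
}

const MAX_DURATION_SEC = 60;

function validateDurations(scenes: Scene[]): { totalSec: number; ok: boolean } {
  const totalSec = scenes.reduce((sum, s) => sum + s.durationSec, 0);
  return { totalSec, ok: totalSec <= MAX_DURATION_SEC };
}
```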
Playwright screenshots are blank or show errors.
Run npx playwright install chromium to ensure the Chromium binary is present. On Vercel, Playwright doesn't work inside serverless functions — you'll need to add the @sparticuz/chromium package and configure it for the serverless environment, or use an external screenshot service.
Can I deploy this myself?
Yes. Deploy to Vercel with vercel --prod. Set all environment variables in the Vercel dashboard. Note that Playwright screenshot capture requires the serverless function to have enough memory (1 GB+ recommended) and execution time (60 s+).