Ditch the 5-Min Limit.
Record Freely.
I got tired of Loom cutting off my technical explanations. I built this full-stack alternative to handle seamless WebRTC stream compositing, direct-to-Mux video encoding, and AI transcripts—all without limits.
How It Works Under The Hood
An overview of the architecture and technical decisions.
WebRTC & Canvas Compositing
Instead of recording separate video files, the app merges the user's screen and webcam feeds directly in the browser.
- getDisplayMedia captures the screen.
- getUserMedia captures the webcam & mic.
- A hidden HTML5 Canvas uses ctx.arc() to mask the webcam feed into a circle and overlays it onto the screen feed at 30 fps.
- captureStream() extracts the combined feed for the MediaRecorder API.
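The compositing loop above can be sketched roughly like this. This is a minimal illustration, not the app's actual code: the bubble size, margin, and 30 fps capture rate are assumptions, and the `bubbleLayout` helper is invented for clarity.

```typescript
// Pure helper (illustrative): position the circular webcam bubble
// in the bottom-right corner of the canvas.
export function bubbleLayout(canvasW: number, canvasH: number, radius: number, margin: number) {
  return {
    cx: canvasW - radius - margin,
    cy: canvasH - radius - margin,
    radius,
  };
}

// Browser-only sketch of the compositing pipeline described above.
export async function startComposite() {
  const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const cam = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  const screenVideo = Object.assign(document.createElement("video"), { srcObject: screen, muted: true });
  const camVideo = Object.assign(document.createElement("video"), { srcObject: cam, muted: true });
  await Promise.all([screenVideo.play(), camVideo.play()]);

  // Hidden canvas sized to the screen capture.
  const canvas = document.createElement("canvas");
  canvas.width = screenVideo.videoWidth;
  canvas.height = screenVideo.videoHeight;
  const ctx = canvas.getContext("2d")!;

  function draw() {
    ctx.drawImage(screenVideo, 0, 0, canvas.width, canvas.height);
    const { cx, cy, radius } = bubbleLayout(canvas.width, canvas.height, 120, 24);
    ctx.save();
    ctx.beginPath();
    ctx.arc(cx, cy, radius, 0, Math.PI * 2); // circular mask for the webcam
    ctx.clip();
    ctx.drawImage(camVideo, cx - radius, cy - radius, radius * 2, radius * 2);
    ctx.restore();
    requestAnimationFrame(draw);
  }
  draw();

  // Combined 30 fps video plus the mic track, handed to MediaRecorder.
  const out = canvas.captureStream(30);
  cam.getAudioTracks().forEach((t) => out.addTrack(t));
  return new MediaRecorder(out, { mimeType: "video/webm" });
}
```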
Direct-to-Mux Uploads
Routing large video blobs through a Next.js API route risks server timeouts and heavy bandwidth costs.
- The server generates a secure signed upload URL via the Mux Video API.
- The client uses a PUT request to send the WebM blob directly to Mux's ingest servers.
- This offloads the heavy lifting, ensuring fast uploads and immediate HLS (HTTP Live Streaming) encoding.
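In code, the two halves of this flow look roughly like the sketch below. The Mux Direct Uploads endpoint and request shape come from Mux's public API; the env var usage, `cors_origin`, and helper names are assumptions for illustration.

```typescript
// Server side: ask Mux for a one-time signed upload URL.
// API credentials stay on the server and never reach the browser.
export async function createUploadUrl(): Promise<string> {
  const auth = Buffer.from(
    `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
  ).toString("base64");

  const res = await fetch("https://api.mux.com/video/v1/uploads", {
    method: "POST",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      cors_origin: "*", // tighten to your own domain in production
      new_asset_settings: { playback_policy: ["public"] },
    }),
  });
  if (!res.ok) throw new Error(`Mux upload create failed: ${res.status}`);
  const { data } = await res.json();
  return data.url; // signed URL the client PUTs to directly
}

// Client side: build and send the PUT request straight to Mux ingest.
export function buildUploadRequest(blob: Blob): RequestInit {
  return { method: "PUT", body: blob, headers: { "Content-Type": "video/webm" } };
}

export async function uploadRecording(signedUrl: string, blob: Blob): Promise<void> {
  const res = await fetch(signedUrl, buildUploadRequest(blob));
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}
```

Because the blob goes browser-to-Mux, the Next.js server only ever handles a small JSON exchange, never the video bytes themselves.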
AI-Powered Transcripts
As soon as the video is processed, the audio track is extracted and run through an AI speech-to-text pipeline. This generates a precise, timestamped transcript stored alongside the video asset, allowing viewers to read along or quickly jump to specific sections of the recording.
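A timestamped transcript like the one described might be shaped and rendered as follows. The segment fields and formatting are assumptions, not the actual pipeline's schema.

```typescript
// Illustrative shape for one timestamped transcript segment.
interface TranscriptSegment {
  start: number; // seconds into the recording
  end: number;
  text: string;
}

// Format seconds as m:ss for display beside each line.
export function formatTimestamp(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${m}:${String(s).padStart(2, "0")}`;
}

// Render segments so viewers can read along or jump to a section.
export function renderTranscript(segments: TranscriptSegment[]): string {
  return segments.map((seg) => `[${formatTimestamp(seg.start)}] ${seg.text}`).join("\n");
}
```

Each rendered timestamp can double as a seek target: clicking `[1:15]` sets the player's `currentTime` to the segment's `start`.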
Next.js 15 Server Actions
The entire backend logic is handled using React Server Components and Next.js 15 Server Actions. This creates a secure, seamless bridge between the frontend player and the Mux/AI backend without exposing any API keys to the client or requiring a separate Express server.
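A Server Action bridging the client recorder and the Mux backend could look like this sketch. The action name and return shape are invented, and the Mux call is stubbed with a placeholder so the secret-handling boundary is visible.

```typescript
"use server";

// Stand-in for the server-side Mux call (see the direct-upload section);
// in the real app this would hit the Mux API with server-only credentials.
async function createMuxUpload(): Promise<{ uploadUrl: string }> {
  return { uploadUrl: "https://storage.mux.com/signed-upload" }; // placeholder
}

// Invoked directly from a client component:
//   const { uploadUrl } = await requestUpload();
// Next.js serializes the call over POST, so MUX_TOKEN_* never reaches the browser.
export async function requestUpload(): Promise<{ uploadUrl: string }> {
  const { uploadUrl } = await createMuxUpload();
  return { uploadUrl };
}
```

The client imports `requestUpload` like any function; the `"use server"` directive is what keeps its body, and the credentials it touches, on the server.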