Available for Q3 projects πŸ“ Islamabad, PK Β· Working globally
CASE STUDY Β· AI WHITEBOARD ANIMATION SAAS Β· 2025–PRESENT

From zero to 1 million doodle videos in the first year.

How we built InstaDoodle from scratch in 2025 β€” an AI-powered whiteboard animation platform that renders the same animation pixel-for-pixel in the browser and on the server. Here's what we built, the technical problems we solved, and the numbers that came out the other side.

10K+
Active users
1M+
Videos rendered
2,600
Daily marketers
1,000+
Library assets
60-day
Money-back guarantee
Hero screenshot placeholder InstaDoodle editor β€” drop in editor dashboard screenshot here
Client
InstaDoodle (BlasterSuite)
Engagement
2025 – ongoing
Type
AI Whiteboard Animation SaaS
Stack
React Β· anime.js Β· Daybrush Β· Node Β· FFmpeg

Two software founders, tired of paying $200–$1000 per video.

The InstaDoodle team β€” the creators behind BlasterSuite, Speechelo, Videly, and Thumbnail Blaster β€” had been spending thousands on Fiverr freelancers and agency contracts to produce whiteboard explainer videos for their own products. Their customers loved the videos. Their budget didn't.

They wanted a tool that did three things existing whiteboard apps couldn't: run entirely in the cloud, generate doodles from text with AI, and render production-quality MP4s fast enough to ship same-day. Every existing player β€” VideoScribe, Doodly, the rest β€” failed on at least one of those.

They needed an engineering partner who could build a real-time canvas, an animation engine, an AI doodle generator, and a server-side video render pipeline β€” and ship the whole thing as a SaaS. That's where Uforia came in.

β˜…β˜…β˜…β˜…β˜…
"[CLIENT TESTIMONIAL β€” to be added once received. Pull-quote from founders Vlad & Stoica covering build quality, communication, and ability to ship hard engineering problems on schedule.]"
VS
Vlad & Stoica Founders, InstaDoodle (BlasterSuite)
⏳ Awaiting written testimonial from client
WHAT WE BUILT

The full whiteboard animation stack. From canvas to MP4.

InstaDoodle isn't a single feature wrapped in a UI. It's a complete animation system: a real-time editor, an AI generation engine, a 1,000+ asset library, and a server-side video pipeline that reproduces the exact same animation as the browser. We built every piece.

EDITOR Β· CANVAS
CORE FEATURE

Real-time whiteboard editor

Drag-and-drop canvas built in React with anime.js for path interpolation and Daybrush (Scena + Moveable) for timeline scrubbing, object transforms, and keyframe control. Hands draw on paths with realistic timing. Designers see the final animation play back the instant they hit preview.

React anime.js Daybrush SVG
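The draw-on effect behind whiteboard animation is commonly done by animating an SVG path's stroke-dashoffset from the full path length down to zero. Here is a minimal, self-contained sketch of that idea; the `dashOffsetAt` helper and the local easing function are illustrative assumptions, not InstaDoodle's actual editor code.

```typescript
type Easing = (t: number) => number;

// anime.js-style easeInOutQuad, implemented locally so the sketch
// is deterministic and dependency-free.
const easeInOutQuad: Easing = (t) =>
  t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2;

// Given a path's total length and a normalized time t in [0, 1],
// return the stroke-dashoffset that reveals the right amount of stroke.
// (Hypothetical helper name, for illustration only.)
function dashOffsetAt(
  pathLength: number,
  t: number,
  ease: Easing = easeInOutQuad
): number {
  const clamped = Math.min(1, Math.max(0, t));
  return pathLength * (1 - ease(clamped));
}
```

In the browser, the result would be applied per frame to the path element's `stroke-dashoffset` style, with `stroke-dasharray` set to the full path length.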
AI Β· GENERATION
AI INTEGRATION

DoodleAI text-to-doodle engine

Type a prompt, get a doodle. We integrated an AI generation pipeline that produces brand-consistent hand-drawn characters and props from natural-language descriptions, then auto-converts them to animatable SVG paths the editor can draw stroke-by-stroke.

AI APIs SVG pipeline Queue workers Caching
LIBRARY Β· ASSETS
PRODUCT FEATURE

1,000+ asset library with smart search

Hand-curated library of characters, props, hands (male/female/diverse), backgrounds, and templates. All assets are SVG-based so they animate cleanly and scale to any resolution. Tag-based search and lazy-loaded thumbnails keep the editor responsive even with thousands of items.

SVG library CDN Tag search Lazy load
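Tag-based search over a few thousand assets reduces to an inverted index: a map from tag to asset ids, intersected per query tag. A minimal sketch, assuming a made-up `Asset` shape and helper names rather than the production schema:

```typescript
interface Asset {
  id: string;
  tags: string[];
}

// Build tag -> set of asset ids, case-insensitive.
function buildTagIndex(assets: Asset[]): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const asset of assets) {
    for (const tag of asset.tags) {
      const key = tag.toLowerCase();
      if (!index.has(key)) index.set(key, new Set());
      index.get(key)!.add(asset.id);
    }
  }
  return index;
}

// An asset matches only if it carries every query tag.
function searchByTags(index: Map<string, Set<string>>, query: string[]): string[] {
  if (query.length === 0) return [];
  const sets = query.map((q) => index.get(q.toLowerCase()) ?? new Set<string>());
  const [first, ...rest] = sets;
  return [...first].filter((id) => rest.every((s) => s.has(id)));
}
```

Lookups stay O(query tags) regardless of library size, which is what keeps the editor responsive as the library grows.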
RENDER Β· EXPORT
VIDEO PIPELINE

Server-side FFmpeg render pipeline

The browser previews are great, but customers need MP4s. We built a Node.js + FFmpeg backend that replays the exact same animation timeline server-side and renders production-quality video β€” synced with voiceover, music, and TTS audio. One million videos rendered and counting.

Node.js FFmpeg Audio sync Render queue
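One common shape for a pipeline like this is to pipe raw RGBA frames into FFmpeg's stdin while muxing in an audio track. The sketch below only builds the argument list (the flags shown are standard FFmpeg options); `buildFfmpegArgs` and the `RenderJob` shape are illustrative assumptions, not InstaDoodle's actual pipeline code.

```typescript
interface RenderJob {
  width: number;
  height: number;
  fps: number;
  audioPath: string;
  outPath: string;
}

// Construct FFmpeg arguments for raw frames on stdin + an audio file.
function buildFfmpegArgs(job: RenderJob): string[] {
  return [
    "-f", "rawvideo",                  // frames arrive with no container
    "-pix_fmt", "rgba",                // canvas pixel format
    "-s", `${job.width}x${job.height}`,
    "-r", String(job.fps),
    "-i", "pipe:0",                    // video input: renderer's stdout
    "-i", job.audioPath,               // voiceover / music / TTS track
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",             // broadest player compatibility
    "-c:a", "aac",
    "-shortest",                       // stop at the shorter stream
    job.outPath,
  ];
}
```

A worker would then `spawn("ffmpeg", buildFfmpegArgs(job))` and stream rendered frames into the child process's stdin.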
THE HARD PARTS

Four engineering problems we actually solved.

Whiteboard animation looks simple from the outside. Under the hood, it's a real-time canvas problem, an animation timing problem, an AI integration problem, and a server-side rendering problem β€” all at once. Here's what we built to make it work.

01

Browser and server rendering the exact same animation

The problem
Users preview their animation in the browser using anime.js and Daybrush. But the exported MP4 has to be rendered on the server with FFmpeg. If the two engines disagree on timing, easing, or stroke order by even a few frames, what you see is not what you download. Every existing whiteboard tool has this drift problem β€” and customers notice.
What we built
A single animation specification that drives both engines. The browser reads the spec and animates via JS. The server reads the same spec and writes a frame-by-frame composition to FFmpeg. Result: pixel-identical preview and export. No surprises when the video downloads.
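The core of the shared-spec idea is that both engines evaluate the same pure function of time, so preview and export cannot drift. A minimal sketch, assuming a hypothetical `SpecStroke` shape and `progressAt` helper rather than the production schema:

```typescript
interface SpecStroke {
  startMs: number;    // when this stroke begins drawing
  durationMs: number; // how long the draw takes
}

// Fraction of the stroke drawn at absolute time tMs. Pure and
// deterministic: the browser calls it per animation frame, the server
// calls it per rendered frame (tMs = frameIndex / fps * 1000).
function progressAt(stroke: SpecStroke, tMs: number): number {
  const local = (tMs - stroke.startMs) / stroke.durationMs;
  return Math.min(1, Math.max(0, local));
}
```

Because neither engine keeps its own clock or easing state, a frame rendered server-side at t = 1250 ms is, by construction, the same frame the browser showed at t = 1250 ms.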
02

Drawing hands on arbitrary SVG paths

The problem
Every whiteboard animation needs a hand drawing the doodle β€” and the hand has to follow the actual stroke path at the right speed, angle, and pressure. With 1,000+ library assets plus user-uploaded SVGs and AI-generated content, hard-coding hand paths isn't an option.
What we built
Custom path-following engine that parses any SVG, computes tangent angles at every point, and animates a hand sprite along the stroke with natural timing variance. Works on library assets, user uploads, and DoodleAI-generated content alike. Any doodle, any hand, instantly drawable.
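The geometry at the heart of hand-following: sample the SVG path as a polyline, then aim the hand sprite along the local tangent via `atan2`. The `tangentAngleDeg` helper is a made-up name for illustration, not the engine's real API:

```typescript
type Point = { x: number; y: number };

// Angle (degrees) of the segment leaving sample point i. The last
// point reuses the final segment so the hand never snaps at the end.
function tangentAngleDeg(points: Point[], i: number): number {
  const a = points[Math.min(i, points.length - 2)];
  const b = points[Math.min(i + 1, points.length - 1)];
  return (Math.atan2(b.y - a.y, b.x - a.x) * 180) / Math.PI;
}
```

Per frame, the renderer would position the hand sprite at the current sample point and rotate it by this angle (plus the sprite's own grip offset), which is what makes the hand look like it is actually drawing the stroke.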
03

AI generation that doesn't burn the budget

The problem
DoodleAI generates custom characters from text prompts. At 2,600 daily active marketers, naive AI API usage is a five-figure monthly bill β€” and a slow user experience while each request waits for the model.
What we built
Content-hash caching that deduplicates identical prompts. A queue-based worker pool that batches similar requests. Pre-warmed common categories (people, objects, scenes). The result: most generations return instantly from cache, and per-user variable AI cost dropped dramatically.
04

Rendering a million videos without melting the servers

The problem
FFmpeg video rendering is CPU-heavy. With marketers cranking out videos all day across time zones, naive single-queue rendering creates massive wait times at peak hours and idle capacity at night. And one failed render shouldn't block the next one in line.
What we built
Distributed render queue with priority lanes, automatic worker scaling, per-job isolation, and retry-on-failure. Each render is sandboxed so a bad input can't crash the pool. 1M+ videos rendered to date β€” most complete within a few minutes, even at peak load.
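The scheduling logic of priority lanes plus retry-on-failure can be sketched in a few lines: the priority lane always drains first, and a failed job re-enqueues until it exhausts its attempts. `RenderQueue` and `QueuedJob` are illustrative names, not the production scheduler:

```typescript
type Lane = "priority" | "standard";

interface QueuedJob {
  id: string;
  lane: Lane;
  attemptsLeft: number;
}

class RenderQueue {
  private lanes: Record<Lane, QueuedJob[]> = { priority: [], standard: [] };

  enqueue(job: QueuedJob): void {
    this.lanes[job.lane].push(job);
  }

  // Priority jobs always dequeue before standard ones.
  next(): QueuedJob | undefined {
    return this.lanes.priority.shift() ?? this.lanes.standard.shift();
  }

  // Attempt one job; a failure re-enqueues it while attempts remain,
  // so one bad input never blocks the rest of the queue.
  runNext(render: (job: QueuedJob) => boolean): string | undefined {
    const job = this.next();
    if (!job) return undefined;
    if (render(job)) return job.id;
    job.attemptsLeft -= 1;
    if (job.attemptsLeft > 0) this.enqueue(job);
    return undefined;
  }
}
```

In production each `render` call would run in its own sandboxed worker process, with worker count scaled against queue depth; the in-memory arrays here stand in for a durable broker.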
β˜… ON CAMERA

Hear it from the InstaDoodle team. On camera.

The founders walk through what working with Uforia has looked like β€” from spec to shipped product, the technical depth, the speed, and what it takes to build an animation engine that runs in two places at once and produces identical results.

VS
Vlad & Stoica Founders, InstaDoodle Β· BlasterSuite
VIDEO PENDING
β˜… THE NUMBERS

Year one. Real outcomes.

InstaDoodle launched in 2025. Here's what the platform has shipped since.

10K+
Active users
1M+
Videos rendered
2,600
Daily marketers
1,000+
Library assets shipped
STILL BUILDING

Still shipping features for InstaDoodle in 2026.

Most agencies disappear after launch. The team that built it moves on, the docs get stale, and the founder is left looking for someone to patch a broken integration six months later.

InstaDoodle has been live since 2025 β€” and the same Uforia engineers who built the canvas, the AI engine, and the FFmpeg render pipeline are still adding features and shipping improvements every sprint.

When founders ask us what long-term partnership actually looks like, this is the answer.

2025+
Same team, still shipping

Building something like InstaDoodle? Let's talk.

If you're building a canvas-based SaaS, a real-time animation tool, or anything that needs to render the same thing in two places β€” we've solved that exact problem. 30-min call, no prep, no sales pitch.

No obligation. No mailing lists. Fahad personally takes the call.