Starter
AI feature added to an existing product. 4–6 weeks.
Final number lands in the discovery call.
- 1 AI feature, integrated into your stack
- Eval harness + cost model
- Production deployment
- 30-day post-launch support
Custom AI products, copilots, and automation built by a founder shipping 10 AI startups in parallel.
FIELD · AI
EST. 2011
10 AI STARTUPS SHIPPING
Live across Voxly, NOBBYO, Pannakhata
Founders come to me with an 'AI feature' they need shipped. 80% of the work is the unsexy part: data plumbing, evaluation harnesses, fallback behavior, cost monitoring, observability. The model is the easy part. The product around it is where AI projects live or die.
Your team can prompt an LLM but can't ship one to production safely.
You're spending more on tokens than on the engineers running them.
Hallucinations break demos — and you don't have a system to catch them.
You shipped an AI feature six months ago. Nobody knows if it's working.
I run 10 AI startups concurrently — Voxly, NOBBYO, Pannakhata, VoiceBridge, and six more. Every constraint your AI build will hit, I've hit this quarter. The playbook is shipped, not theoretical.
Production-grade integration in 6–10 weeks, not 6–10 months.
Evaluation harness on day one — you know if accuracy regresses.
Cost ceiling enforced via provider routing — no surprise OpenAI bills.
Observability built in — drift, hallucinations, and usage tracked from launch.
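A minimal sketch of what "cost ceiling enforced via provider routing" can look like in practice. Provider names, prices, and thresholds below are illustrative assumptions, not figures from any engagement:

```typescript
// Hypothetical sketch: a monthly cost ceiling enforced by routing between
// a premium and a budget model. Prices here are placeholders.

type Provider = { name: string; usdPerMillionTokens: number };

const providers: Provider[] = [
  { name: "premium-model", usdPerMillionTokens: 15.0 },
  { name: "budget-model", usdPerMillionTokens: 0.5 },
];

class CostRouter {
  private spentUsd = 0;

  constructor(private monthlyCeilingUsd: number) {}

  // Route to the premium model until spend crosses a soft threshold (80%),
  // then fall back to the cheaper model; refuse calls past the hard ceiling.
  pick(estimatedTokens: number): Provider {
    const softThreshold = this.monthlyCeilingUsd * 0.8;
    const provider = this.spentUsd < softThreshold ? providers[0] : providers[1];
    const callCost =
      (estimatedTokens / 1_000_000) * provider.usdPerMillionTokens;
    if (this.spentUsd + callCost > this.monthlyCeilingUsd) {
      throw new Error("monthly cost ceiling reached");
    }
    this.spentUsd += callCost;
    return provider;
  }
}
```

The point of the soft threshold is that the bill degrades gracefully instead of the feature going dark: quality steps down before spend stops.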
What you get in every AI Solutions engagement, regardless of tier.
The same four-step rhythm used across every service, tuned for this specific work.
Two calls and a written brief. I separate the problem you have from the AI demo you saw last week. Most teams arrive with a solution looking for a problem — together we leave Discover with a problem that's specific enough to scope.
Eval harness first, UI second. I define what 'good' looks like in measurable terms — accuracy, latency, cost-per-call — before any model selection. The UI gets designed against the eval, not the other way around.
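The "define 'good' in measurable terms" step can be sketched as a spec a candidate model must clear before it's even considered. The thresholds below are hypothetical placeholders; a real engagement sets them with the client:

```typescript
// Hypothetical sketch: "good" expressed as numbers before model selection.

type EvalSpec = {
  minAccuracy: number;      // fraction of eval cases answered correctly
  maxP95LatencyMs: number;  // 95th-percentile response time
  maxUsdPerCall: number;    // cost budget per call
};

type EvalRun = { accuracy: number; p95LatencyMs: number; usdPerCall: number };

// A candidate model passes only if it clears every bar at once —
// a fast, cheap model that misses accuracy fails just as hard as
// an accurate one that blows the cost budget.
function passes(spec: EvalSpec, run: EvalRun): boolean {
  return (
    run.accuracy >= spec.minAccuracy &&
    run.p95LatencyMs <= spec.maxP95LatencyMs &&
    run.usdPerCall <= spec.maxUsdPerCall
  );
}
```

Designing the UI against this spec, rather than the other way around, is what keeps "swap in a cheaper model later" a one-line change instead of a redesign.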
Two-week sprints with weekly working demos. You see the actual product behaving against your data before week 3. I pick models against your eval, not the hype cycle, and swap them as cheaper or smarter ones land.
Deploy to your stack with monitoring on key metrics from day one. 30 days of post-launch support included so the first week of real-user data doesn't catch you alone.
OpenAI
Anthropic
Vercel AI SDK
LangSmith
Pinecone
Next.js
Final numbers land after a discovery call so the tier matches actual scope. Tier shapes are stable.
AI feature added to an existing product. 4–6 weeks.
Final number lands in the discovery call.
AI MVP — copilot, agent, or end-to-end product. 8–12 weeks.
Final number lands in the discovery call.
Strategic AI engagement with embedded retainer. Quarterly cycles.
Final number lands in the discovery call.
Three of my own ventures using exactly the AI project playbook I'd run for you. Live receipts, not Phase-10 mockups.

Stage · LIVE
Started · EST 2024
Users · 500+ BUSINESS USERS
AI-powered Business OS, live across BD, UK, and Luxembourg with 500+ business users.

Stage · BETA
Started · MAR 2025
Users · 240 BETA TESTERS
Real-time voice-to-text for Bangla and other under-served South Asian languages.

Stage · BETA
Started · DEC 2024
Users · 400 ACTIVE SELLERS
AI knowledge ops — copilot for non-technical operators.
Founder, lead architect
Currently shipping 10 AI startups. I write production code, set evaluation criteria, and review every release before it ships to your users.
Managing Director, NextBangla
Operational backbone. Project schedules, contracts, invoicing, communication cadence. You'll hear from her at least as often as me.
Implementation engineer
Senior engineer assigned per project based on stack. Names disclosed once an NDA is in place.
Voxly, NOBBYO, Pannakhata, VoiceBridge — when I build AI into your product, I'm reusing patterns that shipped in my own products last week. You're not the test case.
Most AI projects look great in a demo and fall over in week 2 of real traffic. I define 'good' with numbers before I pick a model. That's why my products survive past the launch tweet.
An AI feature that costs $4 per session can sink a unit economics model. I model cost before I ship, and I revisit it every time a cheaper model lands — which is roughly every six weeks now.
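The arithmetic behind that $4-per-session warning is simple enough to sketch. All the numbers below are illustrative assumptions, not figures from a real product:

```typescript
// Hypothetical cost model: per-session spend from calls, tokens, and price.

function costPerSessionUsd(
  callsPerSession: number,
  tokensPerCall: number,
  usdPerMillionTokens: number
): number {
  return callsPerSession * (tokensPerCall / 1_000_000) * usdPerMillionTokens;
}

// Example assumptions: 20 LLM calls per session, 10k tokens per call.
// At $20/M tokens that's ~$4 per session; the same session on a $1/M
// model costs ~$0.20 — which is why the cost model gets revisited
// every time a cheaper model lands.
```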
If the scope above looks like what you need, the next step is a 15-minute discovery call. No pitch — just a conversation about what you're building.