Akash Sharma is betting big on the future of production-grade AI—and the early results suggest he’s on the right track.
As the co-founder and CEO of Vellum, Sharma is quietly reshaping how developers and product teams build, deploy, and maintain Large Language Model (LLM) applications. His mission? Make AI reliable at scale.
Backed by Y Combinator and trusted by brands like Redfin, Vellum has grown from a startup idea into a powerful platform that now supports over 20 million API requests monthly.
Sharma’s journey started in 2023—but the foundation was laid long before that.
From consulting to building the infrastructure for AI
Before launching Vellum, Sharma spent five years at McKinsey & Company’s Silicon Valley office, advising top tech clients on product strategy and innovation. In 2023, he left consulting to pursue something bigger.
With co-founders and a strong technical team, Sharma launched Vellum during Y Combinator’s Winter 2023 batch. The startup soon raised $5 million in seed funding from Y Combinator, Rebel Fund, Pioneer Fund, and Eastlink Capital.
The pitch? Offer a reliable, developer-focused platform for building AI systems without needing to reinvent the wheel.
“We realized every company was trying to build internal tools to test prompts, track performance, and scale AI safely,” Sharma said in a podcast interview. “Vellum makes that plug-and-play.”
What Vellum does—and why it matters
Vellum provides a full-stack solution for companies building AI features, offering tools for:
- Prompt engineering
- Version control
- Semantic search
- Evaluation
- Monitoring
Instead of juggling multiple internal systems or relying on one-off scripts, teams can manage everything inside Vellum.
“Akash isn’t just building another dev tool. He’s building the missing infrastructure that turns prototypes into production-ready AI,” noted one early investor.
Companies using Vellum include fast-scaling tech firms that need their LLM applications to perform under pressure. By offering guardrails for safety, accuracy, and cost optimization, Vellum is positioning itself as essential infrastructure for production AI.
Becoming a voice in the AI development movement
Beyond his role as CEO, Sharma is emerging as a public thinker in the AI community.
He regularly shares tactical advice on prompt engineering and model reliability with his 11,000+ LinkedIn followers. His articles on AI Business break down the nuances of LLM stack development, and his commentary is often picked up across developer communities.
“Reliability is the killer feature,” Sharma said. “What we’re solving is the silent killer of AI rollouts—consistency.”
With every feature Vellum ships, it’s becoming clearer: Sharma isn’t chasing trends. He’s building for the long term.