Key Takeaways
- Nava has raised $22 million (approx. ₹184 crore) in a new round led by Greenoaks Capital, with RTP Global and Unicorn India Ventures also participating.
- The company, formerly known as Kluisz, is building full-stack enterprise AI infrastructure that combines data centers, GPUs, and software tools for AI workloads.
- Prior to this round, Nava raised about $9.6 million in seed funding, taking its total capital to roughly $31.6 million.
- The fresh capital will be used to expand Nava’s GPU cloud capacity, data center footprint, and go-to-market across Asia for enterprise AI customers.
What Happened?
Enterprise AI startup Nava has secured $22 million in a funding round led by Greenoaks Capital, first reported by Entrackr in a post on X. Existing backers RTP Global and Unicorn India Ventures also joined the round, which follows Nava’s earlier $9.6 million seed raise. The company plans to use the capital to accelerate its enterprise AI cloud platform, which combines GPUs, data centers, and orchestration software for large-scale AI workloads.
Scaling full-stack AI infrastructure for enterprises
Nava develops cloud infrastructure specifically tuned for AI workloads, stitching together physical data centers, high-performance GPUs, and a software layer that abstracts complexity for enterprises. The startup aims to offer a full-stack AI cloud where customers can train, fine-tune, and deploy models on dedicated or shared GPU clusters, with observability, security, and cost management built in. With the new $22 million, Nava plans to expand its GPU inventory, deepen its data center presence in Asia, and enhance its software platform to support more demanding enterprise generative AI and inference use cases.
Investors are betting that vertically integrated AI infrastructure, rather than generic cloud instances, will be critical as enterprises move from proof-of-concept experiments to production-grade AI systems.
Why this round matters in the AI infrastructure race
This raise lands amid a global scramble for GPU capacity, where hyperscalers dominate but regional, specialized providers like Nava are emerging to serve compliance-sensitive and latency-sensitive enterprise workloads. In Asia, many banks, telecoms, and large traditional enterprises are looking for sovereign or regionally hosted AI infrastructure that meets data residency norms while still offering access to cutting-edge GPUs.
Nava positions itself as a nimble alternative to general-purpose clouds, similar in spirit to other GPU-native startups that bundle hardware, networking, and MLOps tooling for end-to-end AI deployment. The funding also signals continued investor conviction that AI infrastructure—rather than just model providers—will capture significant value as organizations scale their AI adoption.
Competitive landscape and comparison tables
For Nava’s segment—GPU-first enterprise AI infrastructure—two relevant peers are CoreWeave and Lambda, both independent GPU cloud providers focused on AI workloads. Public, precise data on context windows, pricing per 1M tokens, and agentic capabilities is clearer for model/API companies than for infra providers, so the table below uses indicative, high-level comparisons based on current public offerings and typical configurations.
Feature comparison: Nava vs CoreWeave vs Lambda
| Feature/Metric | Nava (Enterprise AI Infra) | CoreWeave | Lambda (Lambda Cloud) |
|---|---|---|---|
| Context Window | Depends on models deployed; optimised infra for large-context LLMs used by enterprises. | Depends on customer-chosen models; supports large-context LLM training and inference. | Model-agnostic; supports training/inference for long-context models on rented GPUs. |
| Pricing per 1M Tokens | Indirect; usage billed primarily by GPU hours and infra, token cost varies by model partner or self-hosted models. | Indirect; primarily GPU-hour pricing, token economics depend on workloads and models. | Indirect; GPU/cluster pricing, no native per-token pricing. |
| Multimodal Support | Supports multimodal workloads (vision, text, speech) as long as corresponding models are deployed on its GPU cloud. | Designed for diverse AI workloads including vision and 3D/graphics-heavy jobs. | Supports training and serving of multimodal models via general-purpose GPU infrastructure. |
| Agentic Capabilities | Provides infra and orchestration for agents built on top of customer-chosen LLMs and tools; not a direct agent framework. | Infra layer only; agent behaviors depend on software stacks customers deploy. | Infra-focused; agentic capabilities come from open-source or commercial stacks customers run. |
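The “indirect” per-token pricing noted above can be made concrete with back-of-the-envelope arithmetic: given a GPU-hour rate and a sustained inference throughput, an effective cost per 1M tokens falls out directly. A minimal sketch follows; all figures are hypothetical placeholders, not published rates for Nava, CoreWeave, or Lambda.

```python
# Back-of-the-envelope conversion from GPU-hour pricing to an
# effective cost per 1M generated tokens. Numbers are hypothetical
# illustrations, not published rates for any provider.

def cost_per_million_tokens(gpu_hour_rate_usd: float,
                            tokens_per_second: float) -> float:
    """Effective USD cost to generate 1M tokens at a sustained
    per-GPU throughput, billed purely by GPU hours."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_rate_usd / tokens_per_hour * 1_000_000

# Example: a hypothetical $2.50/hr GPU sustaining 1,000 tokens/s
print(round(cost_per_million_tokens(2.50, 1000.0), 4))  # → 0.6944
```

The same arithmetic explains why infra-layer providers avoid quoting per-token prices: the effective rate swings with the model served, batch sizes, and achieved utilization, all of which are in the customer’s hands.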
Strategically, Nava’s edge lies in its focus on full-stack enterprise AI in Asia, where data residency and local support can be decisive, while CoreWeave and Lambda are more entrenched in the US and other global high-performance GPU markets. The two incumbents likely still win on sheer GPU scale and ecosystem maturity today, but Nava may be more attractive to regional enterprises that need tailored compliance, integration, and support closer to their home markets.
Sci-Tech Today’s Takeaway
In my experience, funding rounds at this scale for AI infrastructure startups are clear signals that we are still early in the enterprise AI build-out cycle rather than near a saturation point. I think this is a big deal because Nava is not just another model player—it is trying to solve the unglamorous but critical problem of giving enterprises reliable access to GPUs, data centers, and the orchestration stack needed to run real-world AI workloads at scale.
For CIOs and CTOs in Asia, I generally welcome strong regional infra providers operating alongside the hyperscalers, since their presence tends to improve pricing, resilience, and regulatory flexibility for end users. Overall, I see this round as modestly bullish for enterprise AI adoption: it won’t move token prices by itself, but it strengthens the underlying plumbing that future AI applications—and possibly on-chain AI integrations—will quietly rely on.
