Supercharging LLMs: Why We’re Excited to Lead ZeroEntropy’s $4.2M Seed Round
If you believe in the future of AI agents, copilots, and enterprise automation, you have to believe in retrieval. That’s where most of today’s systems fall apart. Generative models are great at reasoning, but only if they’re grounded in the right context. Feed them the wrong chunk, an outdated doc, or irrelevant information, and your application suffers: hallucinations, slow and expensive responses, and a brittle user experience.

RETRIEVAL IS THE CRITICAL UNLOCK FOR THE NEXT PHASE OF AI
RAG was supposed to solve this, but in practice, retrieval is still the weak link. The default stack of off-the-shelf embeddings, basic vector search, and generic rerankers can’t handle nuance, adapt to domain-specific data, or meet real-world latency requirements. In-context learning is not a scalable option for large knowledge bases either. Even large context windows don’t help when they’re filled with irrelevant context that makes LLMs slow, expensive, and prone to hallucinations. Retrieval isn’t just a component; it is the critical unlock for the next phase of AI.
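To make the failure mode concrete, here is a minimal sketch of that default stack: embed documents, rank by cosine similarity, and hand the top-k chunks to the LLM. The embeddings are hand-made toy vectors purely for illustration; a real stack would call an embedding model.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vector_search(query_vec, corpus, k=2):
    # corpus: list of (chunk_text, embedding) pairs.
    # Score every chunk against the query and keep the top k.
    scored = [(cosine(query_vec, emb), text) for text, emb in corpus]
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

corpus = [
    ("refund policy: 30 days", [0.9, 0.1, 0.0]),
    ("shipping times by region", [0.1, 0.8, 0.1]),
    ("legacy refund policy (2019)", [0.8, 0.2, 0.1]),
]
query = [1.0, 0.0, 0.0]  # e.g. "what is the refund policy?"

results = vector_search(query, corpus)
print(results)
```

Note what happens: the outdated “legacy refund policy (2019)” chunk scores nearly as high as the current one and makes it into the context window. Pure vector similarity has no notion of which document is authoritative, which is exactly the weakness described above.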
Over the past year, we’ve seen dozens of teams try to “fix” retrieval. Most approaches demo well but fall apart in production at scale. Recall drops on long-tail or jargon-heavy queries. Precision falls on large knowledge bases. Adding an LLM reranker slows everything down and still serves the wrong context.
A BREAKTHROUGH IN MODEL PERFORMANCE
Ghita Houir Alami and Nicholas Pipitone were the first team we met that treated retrieval as a first-class infrastructure problem, and actually built an integrated system from the ground up to solve it.
What impressed us most was that they didn’t take shortcuts or patch together a quick first solution. They built ZeroEntropy from the ground up to make search accurate, fast, reliable, and easy to implement. That’s why customers already trust them to power core infrastructure.
The result? Their models didn’t just beat baselines. They outperformed everything we’d seen, across benchmarks and in production. Now, with their latest reranker, zerank-1, ZeroEntropy is making a new breakthrough in retrieval, achieving massive gains in both accuracy and latency. zerank-1 improves the precision of initial search results across embedding, keyword, and hybrid methods, while outperforming every other reranker we’ve seen across domains.
Teams are ripping out brittle retrieval logic and replacing it with a single API call to ZeroEntropy, because it’s the one that actually works in production.
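To illustrate what that integration shape looks like, here is a hedged sketch of packaging first-stage results into a single rerank request. The endpoint, model name, and payload fields below are assumptions for the sake of example, not ZeroEntropy’s documented API; the point is that one rerank call can replace a hand-rolled scoring pipeline.

```python
import json

def build_rerank_request(query, candidates, top_k=3):
    # Package a query and candidate chunks into one request body.
    # Field names here are illustrative assumptions, not a real API spec.
    return json.dumps({
        "model": "zerank-1",      # the reranker named in the post
        "query": query,
        "documents": candidates,  # chunks from any first-stage search
        "top_k": top_k,
    })

body = build_rerank_request(
    "what is the refund policy?",
    ["refund policy: 30 days", "shipping times by region"],
)
print(body)
```

Whatever the exact schema, the design point stands: the candidates can come from embedding, keyword, or hybrid search, and the reranker is the single component responsible for final ordering.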
Just as important, the system is built to improve over time. As foundation models evolve, ZeroEntropy gets better too. That kind of dynamic improvement is a technical advantage and a key driver of long-term defensibility. It marks a fundamental shift in how we think about search in AI.
A RARE BREED: DOMAIN EXPERTS WHO CAN SHIP AND SCALE
Ghita has a background in applied mathematics, with graduate degrees from École Polytechnique and UC Berkeley. She was experimenting with GPT-2 long before ChatGPT became a household name, building voice-enabled assistants only to be frustrated by how stateless and brittle they were in practice.
Nicholas brings a rare mix of theoretical depth and practical execution. A competitive programmer and USACO finalist, he studied computer science and theoretical mathematics at Carnegie Mellon and has built AI agents across multiple startups, from emotionally intelligent bots to enterprise sales assistants.
Across every application, both of them experienced the same limitation: retrieval. It was the bottleneck that kept systems from being truly useful. We couldn’t be more excited to be partnering with them.
ZEROENTROPY IS HIRING
At ZeroEntropy, they’re building a system that makes AI reliable: one that performs better today and evolves with the field. And their team is the kind of technical force that pushes the frontier and attracts the talent to build it.
They are hiring founding engineers to join them in their mission. If you’re a competitive programmer, a mathematician, or someone obsessed with search and AI research, you can apply here!