
From the Founder | Building AI That Shows Its Work: Kepler's Bet on Transparency Over Magic

Why Work Here is a series in which Amit Matani, CEO of Wellfound, has honest, behind-the-scenes conversations with founders, executives, and employees about why their companies are worth joining.

AI is extraordinary at understanding language and reasoning. It can interpret sentiment, break down unstructured information, and generate analysis at a speed no human can match. But it has a fundamental problem: it is a probability machine. Ask it for a revenue figure twice and you might get two different numbers, neither with a citation. For most use cases that is a minor annoyance. In finance, healthcare, defense, and legal, a wrong number can mean a bad investment, a compliance failure, or worse.

Vinoo Ganesh, CEO and co-founder of Kepler, thinks the industry has been framing the problem incorrectly. The question is not how to make AI stop hallucinating. The question is why we are asking AI to do things it was never designed to do in the first place.

"AI is phenomenally good at non-numerical data," Ganesh tells Amit Matani, CEO of Wellfound. "It can interpret text, understand sentiment, classify intent. But whenever numbers are involved and stakes get really high, it hallucinates. Not because the model is bad, but because AI fundamentally works by prediction, not by calculation. It's trying to sound right, not be right."

Kepler's answer is an architectural separation: use AI for what AI is good at, which is language, reasoning, and intent, and use deterministic code for what code is good at, which is retrieving data, computing, and enforcing consistency. The AI layer interprets your question. The code layer retrieves and computes the answer. They never cross. Every number is traced to its source document, page, and line item. Same question, same answer, every time.
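That separation can be sketched in a few lines of Python. Everything here is hypothetical (the names, the revenue table, the stubbed classifier are illustrations, not Kepler's actual API): the point is only that the model's job ends at mapping a question to a predefined workflow, while retrieval and arithmetic live in deterministic code that carries its citations with it.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    document: str
    page: int
    line_item: str

@dataclass
class Answer:
    value: float
    citations: list

# Deterministic layer: a fixed lookup standing in for document retrieval.
REVENUE_TABLE = {
    ("ACME", "FY2023"): (812.4, Citation("ACME 10-K 2023", 47, "Total revenue")),
}

def get_revenue(ticker: str, period: str) -> Answer:
    value, cite = REVENUE_TABLE[(ticker, period)]
    return Answer(value, [cite])

WORKFLOWS = {"revenue_lookup": get_revenue}

def classify_intent(question: str) -> tuple:
    """Stand-in for the AI layer: in a real system a model maps free
    text to a workflow name and arguments; here it is stubbed."""
    return "revenue_lookup", ("ACME", "FY2023")

def answer(question: str) -> Answer:
    workflow, args = classify_intent(question)
    # The model never touches the number; the code path is deterministic,
    # so the same question yields the same answer every time.
    return WORKFLOWS[workflow](*args)
```

Because the numeric path is ordinary code, repeatability and provenance come for free: `answer(...)` returns the same value on every call, and every value arrives with its source document, page, and line item attached.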

Ganesh recently sat down with Matani to discuss how Kepler is building the trust layer for AI, why the company started in finance, what it means to hire for ownership over credentials, and where the entire AI industry goes from here.

🎥 Watch the full interview on YouTube


From Palantir and Citadel to the Trust Problem

Ganesh spent seven years at Palantir Technologies, working across the stack with a focus on large-scale storage, retrieval, and compute platforms. His co-founder, Dr. John McRaven, was at Palantir for 11 years, leading the time series and analytics side of the business for customers like BP and Airbus, where the accuracy of data was not optional. The two built products together for a decade.

After Palantir, Ganesh served as CTO of Veraset, a geospatial data-as-a-service startup that grew to $15M ARR and was acquired. He then became Citadel's first Head of Business Engineering, overseeing data pipelines and investment platforms for the hedge fund. That experience crystallized the idea behind Kepler.

"Portfolio managers at Citadel are so good at generating alpha because they can look at data, make sure it's accurate, and actually interpret it," Ganesh says. "That's what led to this idea: if we could get AI to be really good at understanding data while keeping the numbers deterministic and traceable, that's a game changer. Not just in finance, but in a totally industry-agnostic way."

Why Finance First

Kepler's first product, Kepler Finance, is in production for investment banking and financial analysis. Analysts can query SEC filings, earnings transcripts, 10-Ks, 10-Qs, and market data and get answers with full provenance and traceability.

Ganesh chose finance for three reasons. First, there is a massive trove of publicly available data, which makes building an intelligent layer on top of it easier. Second, the founding team comes from finance and understands how those workflows actually work. Third, the data itself is uniquely interesting: structured data like stock prices, unstructured data like quarterly earnings calls, and semi-structured data like 10-Ks that mix numerical tables with management discussion and analysis.

The stakes in these workflows are real. Ganesh likes to illustrate the point with a specific example: if you Google the purchase price Microsoft paid for LinkedIn, you will see $26.2 billion. That number is wrong. Accounting for fully diluted shares, the actual figure is closer to $26.3 billion, a difference of more than $100 million. In finance, rounding errors are not rounding errors.

"When you're actually operating in this space, these numbers that seem like rounding errors are actually a huge deal," Ganesh says. "Microsoft is going to be unhappy if they find out after the fact they owe an additional $100 million."

The product is built around personalization and traceability. When a customer onboards, Kepler immediately configures to their specific workflows: their coverage universe, how they format Excel files, whether they use parentheses for negatives, how they line up decimal points. From there, the platform provides a workspace with a formal spreadsheet at its center where every number is traced to a PDF source. Any calculation, percentage, or margin can be traced through its full mathematical lineage. If Kepler makes an adjustment to an adjusted EBITDA, you can see the reasoning, the addition and subtraction, and the source documents behind every step.
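One way to picture that mathematical lineage is a derivation tree: every derived figure records the operation and operands that produced it, so any margin or adjustment can be unwound back to source documents. The sketch below is a hypothetical illustration of that idea (the `Cell` structure and sample figures are invented for this example, not Kepler's internals).

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    value: float
    source: str = ""            # e.g. "10-K p.47, Total revenue"
    op: str = ""                # how this cell was derived, if at all
    inputs: list = field(default_factory=list)

def subtract(a: Cell, b: Cell, op: str) -> Cell:
    # Every derived cell keeps pointers to the cells it came from.
    return Cell(a.value - b.value, op=op, inputs=[a, b])

def lineage(cell: Cell, depth: int = 0) -> list:
    """Flatten the derivation tree into readable, indented steps."""
    label = cell.op or cell.source
    steps = [f"{'  ' * depth}{label}: {cell.value}"]
    for child in cell.inputs:
        steps += lineage(child, depth + 1)
    return steps

revenue = Cell(812.4, source="10-K p.47, Total revenue")
cogs = Cell(501.9, source="10-K p.48, Cost of revenue")
gross_profit = subtract(revenue, cogs, "gross profit = revenue - COGS")

for step in lineage(gross_profit):
    print(step)
```

Walking `lineage(gross_profit)` prints the derived figure first, then each operand with its document and page, which is the "full mathematical lineage" behavior the paragraph above describes.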

"The experience really becomes: finally, I don't have to worry about the numbers being made up," Ganesh says. "I can be creative and do what I do best, which is ideate about why a stock may go up or may go down."

The Deep Technical Problem

When Matani asks where the big hill is, Ganesh is direct: it is a series of deep technical problems. Cell-level provenance on every number and every transformation is not trivial. There are huge institutions tackling this space, which Ganesh sees as validation of the opportunity.

"A lot of people are selling AI that doesn't hallucinate," Ganesh says. "That's kind of funny because these are giant probability machines. There's no way to make an AI fully never hallucinate, because even truth can arguably be subjective. But if we can get AI to do what it's really good at, which is classifying user intent into predefined workflows, and let code handle the data, that's a really powerful place to be."

He draws a comparison to Cursor, the code editor, and why it works so well: it has a compute environment, domain-specific tools, prompts, and an understanding of the data it works across, which is code. Kepler brings that same perspective to other industries. The product gives AI the same toolkit that makes Cursor powerful, but pointed at financial data, and eventually at any data.

Building the Company: Five Values and No Whiteboard Tests

Ganesh and McRaven spent 16 hours in a room before they did anything else, defining five values they would hold the company to.

First, they are forward deployed with product DNA. They only win if their customers win, and they only know if their customers win by embedding, iterating, and deploying where their customers are. Ganesh drew this from his time at Palantir, where he flew to places like Afghanistan to deploy software and embed with customers to understand what was and was not working. He also helped build Palantir Frontline, the program that trained forward deployed engineers, many of whom are now founders in the Palantir alumni community.

Second, they have a culture of extreme ownership. If you notice a problem, you fix it, and once you step up, you are held accountable for the outcome.

Third, they build in a production-first way from day one. That means durable execution platforms, blue-green deploys, rollback, CI/CD, observability, and shipping into the field with real user feedback.

Fourth, they operate with trust as the default. When you come in, you immediately become an owner of the company. Everyone does their best work when confidence and trust are mutual.

Fifth, they keep raising the bar. They block out time for training, code health sprints, deep-dive tech talks. The head of reinforcement learning at xAI came in to speak to the team.

Hiring reflects all of this. Kepler does not do LeetCode. Candidates use AI tools as part of the process because the company's whole philosophy is that AI should be a force multiplier.

"You're not sitting here without internet access writing if-statements in Google Docs," Ganesh says. "We're giving you real problems that we actually have to solve. We look for how scrappy you are about defining assumptions. We look for production-first engineering. We look for people going out of their way to solve problems above and beyond the scope they're given."

Where AI Goes From Here


Ganesh sees the AI industry following the same arc as big data. Phase one was search: every product was a ChatGPT wrapper, connecting to data sources without creating any intelligent layer. Phase two is personalization: tools that continuously learn, fine-tune, and solve users' workflows in a differentiated way. Phase three is scale.

"We're entering the personalization phase now," Ganesh says. "The next generation of AI is going to be built off of platforms exactly like what Kepler is building."

He is most excited about 2026 being the year people finally start understanding how to develop trust in AI. People are beginning to see that these models are probability machines, and that understanding is a benefit, not a threat. AI will never be uninvented. The question is how we learn to coexist with it.

Kepler is industry-agnostic by design. Finance is first, but the platform is built to extend to any domain where numbers have to be right. Ganesh, who also spends time as an EMT, sees the same problem in healthcare: structured data like vitals and unstructured data like a general impression need to stay in sync, especially on 12- or 24-hour shifts when fatigue is real.

"Enabling people working in mission-critical environments to validate their data," Ganesh says, "is something so exciting to be working at the forefront of."


