
Kepler
We're building the trust layer for AI in high-stakes industries. The LLM orchestrates (decides what data to gather, what to compute, how to structure the output), but the model never touches the data itself. Every actual value flows through deterministic code pipelines with provenance metadata back to source. Verification loops cross-check outputs before users see them.

We started in finance because tolerance for wrong answers is zero. Our first product is a research platform for buy-side analysts: pull comparables, build models, research filings. The same architecture extends to chemicals, legal, and healthcare.

Models are commoditizing fast; the trust layer is what's missing. In-office, NYC.
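The split above (LLM plans, deterministic code computes, values carry provenance, a verification pass cross-checks) can be sketched minimally. Everything here is illustrative: the `Value` type, the plan shape, the metric name, and the data sources are all hypothetical, not Kepler's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    """A number plus provenance metadata back to its source."""
    amount: float
    source: str                 # hypothetical citation, e.g. filing + page
    derivation: tuple = ()      # upstream Values this was computed from

def divide(num: Value, den: Value, note: str) -> Value:
    # Deterministic pipeline step: code produces the number, never the model.
    return Value(num.amount / den.amount, note, derivation=(num, den))

# The LLM's only output is a plan: what to fetch and what to compute.
# (In this sketch it's a hardcoded dict standing in for a model response.)
llm_plan = {"metric": "pe_ratio", "inputs": ["price", "eps"]}

# Deterministic data layer; every value arrives with its source attached.
store = {
    "price": Value(120.0, "exchange feed 2024-01-05"),   # illustrative data
    "eps":   Value(6.0,   "10-K FY2023, p. 54"),         # illustrative data
}

def execute(plan: dict, store: dict) -> Value:
    if plan["metric"] == "pe_ratio":
        price, eps = (store[k] for k in plan["inputs"])
        result = divide(price, eps, "computed: price / eps")
        # Verification loop: independently recompute and cross-check
        # before the result ever reaches the user.
        if abs(result.amount - price.amount / eps.amount) > 1e-9:
            raise RuntimeError("verification failed")
        return result
    raise ValueError(f"unknown metric: {plan['metric']}")

ratio = execute(llm_plan, store)
print(ratio.amount)                            # 20.0
print([v.source for v in ratio.derivation])    # both upstream sources
```

The point of the sketch: the model's text never becomes a number in the output. Every figure is computed by audited code and can be traced, hop by hop, back to a cited source.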