Understanding Intelligence
We're a team of researchers who've left the big labs to focus on understanding machine intelligence. We've worked on Google DeepMind's early scientific initiatives, developed Anthropic's interpretability efforts, and commercialized superintelligence at Meta.
"Understanding intelligence" might seem like a strange goal: we build neural networks, so we understand them already, right? Medieval peasants built bridges before Newton's laws. Trial and error let them land on recipes that worked. This was not understanding.
You don't need understanding to build a bridge. You also don't need it to build a neural network. But you'll never build a steam engine by stochastically searching the space of bridges. To build a steam engine, you need Newton's laws.
Today's LLMs are largely the product of trial and error. We pour in data, stir it into matrices with gradient descent, and end up with a black box that (sometimes) performs our task. This is a fantastic bridge between human language and computers. But if we're betting on another industrial revolution, we need a steam engine.
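To make "trial and error" concrete, here's a minimal sketch of that recipe: random matrices nudged downhill by gradient descent until they fit some data. The toy network, data, and hyperparameters below are invented purely for illustration; a successful run tells you the matrices work, not why.

```python
# A toy version of the recipe: pour in data, stir the matrices with
# gradient descent, end up with numbers that fit. All of it made up
# for illustration; none of it explains what the network has learned.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))             # toy inputs
y = np.tanh(X @ rng.normal(size=(4, 1)))  # toy targets from a hidden rule

W1 = rng.normal(scale=0.1, size=(4, 16))  # the matrices we "stir"
W2 = rng.normal(scale=0.1, size=(16, 1))
lr = 0.1

for step in range(2000):
    h = np.tanh(X @ W1)                   # forward pass
    pred = h @ W2
    err = pred - y
    loss = (err ** 2).mean()
    # backward pass: nudge each matrix downhill along the gradient
    g_pred = 2 * err / err.size
    g_W2 = h.T @ g_pred
    g_h = g_pred @ W2.T
    g_W1 = X.T @ (g_h * (1 - h ** 2))     # tanh'(x) = 1 - tanh(x)^2
    W1 -= lr * g_W1
    W2 -= lr * g_W2

print(f"final loss: {loss:.4f}")          # a black box that (often) works
```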
This time is different in another way: the stakes are not merely whether or not we understand AI. Neural networks increasingly look like the future of science itself. Nobel Prizes are already being awarded for throwing massive amounts of compute and data into models that predict the universe. But we don't understand those models. If that's all the future looks like, it's the end of explanations that humans can understand. In short: the end of science.
This makes understanding machine intelligence the central scientific problem of our time—and a prerequisite for building systems we can trust with increasing autonomy.
How we work
Measurement
Every scientific revolution has depended on careful measurement. There's a reason the ancient Greeks had philosophy but not chemistry: the precision instruments chemistry required didn't arrive until the 18th century. A science of models likewise starts with rigorous study of model behavior. That means building telescopes for models, so we can see how the universe of models works with exacting precision.
Explanation
Science starts where measurement ends. It's theory that separates prediction from understanding. We work on feature geometry, long-horizon interpretability, and the mathematics of what models learn.
Application
Like all of machine learning, interpretability is subject to the bitter lesson: methods that scale with compute will dominate all others. Successfully understanding frontier models requires massive computational resources. That's why we commercialize our research: not through consulting or one-off projects, but through products that scale with global LLM usage.
Why "Martian"
Models are alien minds that have landed on our planet. We need to understand them.
But we've seen superhuman intelligence before. The smartest scientists of the 20th century were a group of Hungarian émigrés—Teller, Szilard, von Neumann, Erdős—who fled Nazi Europe and helped invent the modern world. The Manhattan Project. Game theory. The architecture of the computer you're reading this on.
People called them "the Martians" because no one could explain where all that intelligence came from. We are once again faced with superhuman intelligence. This time, our explanation needs to be better.
Follow our work @withmartian