Scientific Superintelligence: The Deep Blue Moment

AI Just Went from Answering Questions to Running Labs
TLDR: In 1997, Deep Blue defeated the world chess champion. In 2016, AlphaGo’s Move 37 showed AI could generate creative strategies no human had ever conceived. Now, in 2026, we are witnessing the equivalent moment for all of science. Companies like Lila Sciences are building AI systems that autonomously perform the scientific method (generating hypotheses, designing experiments, running physical labs, and iterating) at superhuman speed and scale. This is not a prediction. It’s already happening. And it changes everything.

“We define scientific superintelligence as the ability to conduct the scientific method at a level beyond human intelligence at every step of the process.” — Geoffrey von Maltzahn, CEO, Lila Sciences

From Deep Blue to Move 37 to the AI Science Factory

Most people remember Deep Blue’s victory over Garry Kasparov in 1997. It was a brute-force triumph: the machine evaluated 200 million moves per second and simply out-calculated the greatest chess mind alive. It was impressive. It was also, in hindsight, primitive.

The real breakthrough came nineteen years later, when DeepMind’s AlphaGo defeated world Go champion Lee Sedol. Go has more possible board positions than atoms in the observable universe… you cannot brute-force it. AlphaGo had to learn something deeper: strategy, intuition, creativity.

And then came Move 37. In Game 2 of the match, AlphaGo played a move so unexpected, so alien to conventional Go wisdom, that the commentators fell silent. Lee Sedol left the room for fifteen minutes. No human in the 2,500-year history of Go had ever played that move. It wasn’t merely correct, it was brilliant. The machine had discovered a creative strategy that humans had never imagined.

That single moment changed the trajectory of AI research. It proved that artificial intelligence could do more than optimize within known frameworks. It could discover entirely new knowledge.
Now ask yourself: what happens when that same capability is applied not to a board game, but to the entirety of science?

The Scientific Method at Machine Speed

Here is what’s happening right now, in 2026: AI systems are running the scientific method autonomously, at machine speed, around the clock, and across every domain of science simultaneously. These aren’t chatbots that help researchers write papers. These are autonomous agents that generate hypotheses, design experiments, operate physical laboratory equipment, analyze results, and iterate. All without human intervention. They are doing science the way AlphaGo played Go: by exploring a possibility space so vast that no human team could cover it in a thousand lifetimes.

Consider what DeepMind has already accomplished. AlphaFold predicted the 3D structure of virtually every known protein (over 200 million of them), solving a problem that had stumped biologists for fifty years. That work earned Demis Hassabis and John Jumper the 2024 Nobel Prize in Chemistry. Their latest system, AlphaEvolve, recently had its own “Move 37 moment” when it discovered a novel method for matrix multiplication: a fundamental mathematical operation that underlies all of modern AI. No human mathematician had found it. AlphaFold was just the beginning…

Lila Sciences: Building the World’s First AI-Driven Science Factory

One company that is leading the charge into scientific superintelligence is Lila Sciences (full disclosure, I’m an investor), founded by Flagship Pioneering, the same venture creation firm that built Moderna. Led by CEO Geoffrey von Maltzahn, Lila is building what they call “AI Science Factories”: fully autonomous laboratories where AI systems generate hypotheses, design experiments, operate lab equipment, analyze results, and iterate at machine speed with minimal human intervention.

What makes Lila extraordinary is scale.
Their AI has accumulated over 10 trillion tokens of scientific reasoning data, generated entirely by AI models reasoning through the scientific method against experimental results. For context, the usable subset of the internet for training LLMs is roughly 15 trillion tokens. By the end of 2026, Lila’s scientific reasoning dataset will exceed twice the size of the internet used to train frontier LLMs. All of it is original scientific thought.

And crucially, Lila trains across all scientific domains simultaneously: life sciences, chemistry, materials science, energy. This matters because many of history’s greatest breakthroughs came from cross-domain insights. Penicillin was discovered by a biologist who noticed something strange about a mold. CRISPR was found by microbiologists studying bacterial immune systems. The transistor emerged from quantum physics applied to materials science. AI systems that train across all of science simultaneously can find these cross-domain patterns at a scale and speed no human team can match.

Lila calls these discoveries “Move 37 moments,” and they report that they’ve been happening across every domain since late 2025. Lila’s AI, training on just 2% of available scientific data, already outperforms leading AI models (including the latest Claude Opus and GPT-5 models) across materials science, chemistry, and life sciences.

The Results Are Already Incredible

The early demonstrations of scientific superintelligence are producing results that would have seemed impossible just two years ago… In mRNA therapeutics, Lila’s AI used a self-play approach (essentially playing a million games of mRNA design against itself, the way AlphaGo played millions of Go games) and achieved performance that is twice as effective as current mRNA technologies from the leading pharmaceutical companies. Expression lasted 15 days versus the 1.5 days achieved by conventional approaches. A 10x improvement.
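The self-play idea described above can be caricatured as a propose-score-keep loop: generate a candidate design, evaluate it, and keep it only if it beats the current best. The toy below is a random-mutation hill climb over a short RNA-like string with a made-up scoring function; it is purely illustrative, and nothing about Lila’s actual method (which is not public) should be read into it.

```python
import random

# Toy "self-play" design loop: mutate one position, score the result,
# and keep the candidate if it is at least as good. Illustrative only.
ALPHABET = "ACGU"               # RNA bases
TARGET = "AUGGCCACGUAA"         # hypothetical "ideal" sequence (stand-in objective)

def score(seq: str) -> int:
    """Higher is better: number of positions matching the hidden target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def design_loop(rounds: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    # Start from a random sequence of the same length.
    best = "".join(rng.choice(ALPHABET) for _ in TARGET)
    for _ in range(rounds):
        i = rng.randrange(len(best))
        candidate = best[:i] + rng.choice(ALPHABET) + best[i + 1:]
        if score(candidate) >= score(best):
            best = candidate
    return best

best = design_loop(10_000)
print(score(best), "of", len(TARGET), "positions optimized")
```

The point of the sketch is the structure, not the algorithm: because each iteration is cheap, running millions of them is what turns a weak local search into a powerful explorer of the design space.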
And in CAR-T cell therapy, one of the most promising frontiers in cancer treatment, an AI-driven program invested $3 million and six months to develop a therapy that outperformed a competing approach, built on traditional methods, that was recently acquired for $2.1 billion. The AI system explored 300,000 design variants. The traditional approach tested 13. Read that again. Three million dollars (Lila) versus two billion (everyone else). Three hundred thousand variants (Lila) versus thirteen (everyone else). And the cheaper one won. This is what happens when the scientific method compounds at machine speed.

The Bitter Lesson Applied to Science

There is a famous concept in AI research called “the bitter lesson,” articulated by Rich Sutton in 2019. The lesson is this: across the entire history of artificial intelligence, the approaches that ultimately win are not the ones that try to build in human knowledge, but the ones that leverage massive computation and learning. Every time researchers tried to hand-code human expertise into AI systems, they were eventually outperformed by systems that simply learned from vast amounts of data. Chess, Go, protein folding, language… the pattern is always the same. Scale wins.

The bitter lesson is now applying to science itself. Narrow AI systems trained on a single domain are being outperformed by broad systems that train across all scientific domains simultaneously. Lila’s approach (training one unified intelligence across biology, chemistry, materials science, and more) is proving that the bitter lesson holds in the physical world, not just the digital one.

The Claude Code Moment for All of Science

If you follow AI, you’ve seen what happened when coding assistants like Claude Code, Cursor, and GitHub Copilot transformed software development. Suddenly, a single developer with an AI assistant could do the work of a team. Productivity wasn’t merely improved, it was transformed by an order of magnitude.
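Stepping back to the CAR-T numbers above, the multiples are worth making explicit. The figures are the ones quoted in this article; the script below is just the arithmetic:

```python
# Back-of-envelope ratios for the CAR-T comparison quoted above.
ai_cost = 3_000_000             # dollars spent by the AI-driven program
benchmark_cost = 2_100_000_000  # acquisition price of the traditional program
ai_variants = 300_000           # design variants explored by the AI
benchmark_variants = 13         # variants tested the traditional way

print(f"Cost ratio:    {benchmark_cost / ai_cost:.0f}x cheaper")
print(f"Search ratio:  {ai_variants / benchmark_variants:.0f}x more variants")
```

Roughly a 700x cost advantage and a 20,000x-plus search advantage: that asymmetry, not any single result, is the structural argument of this section.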
We are about to witness the same transformation across all of science. Every scientist will soon have an AI collaborator that can search the entire scientific literature in seconds, generate novel hypotheses, design experiments, simulate outcomes, and iterate. All before the human scientist finishes their morning coffee. The question will not be “Can AI help with research?” It will be “How did we ever do research without it?” And just like the software revolution, this won’t replace scientists. It will amplify them. The scientists who learn to collaborate with AI will produce breakthroughs at a rate that would have seemed impossible a few years ago. Those who refuse to adapt will find themselves working at a pace that’s no longer competitive.

Why This Matters for All of Us

This isn’t only a story about science itself. It’s a story about everything science touches, which is everything.

Medicine: Drug development that currently takes a decade and costs $2.4 billion per approved drug could be compressed to months at a fraction of the cost. Diseases we consider incurable today will face an onslaught of AI-designed therapies tested at a scale previously unimaginable.

Energy: New materials for solar cells, batteries, and nuclear fusion are being discovered through autonomous experimentation at ten times the speed of conventional research.

Materials: AI-designed materials with properties we’ve never seen before (stronger, lighter, more conductive) will transform manufacturing, construction, aerospace, and electronics.

Agriculture: AI-optimized crop varieties and agricultural processes will increase yields while reducing environmental impact.

So, what’s the key point? That we are on the verge of solving everything, in the fashion that Alex Wissner-Gross and I wrote about in our paper www.SolveEverything.org.
The Compound Interest of Knowledge

Charlie Munger once said: “The first rule of compounding is never interrupt it unnecessarily.” The scientific method is itself a compounding phenomenon. Each discovery builds on previous discoveries. Each experiment generates data that improves the next experiment. Knowledge compounds.

Until now, the rate of compounding has been limited by human speed: how fast we can read papers, design experiments, run tests, and analyze results. AI removes that bottleneck. When the scientific method runs at machine speed, with machine-scale breadth, the compounding accelerates by orders of magnitude. We are about to see more scientific progress in the next 5 years than in the previous century. Not because scientists suddenly became smarter, but because the tool they’re using to do science became superintelligent.

Deep Blue beat a chess champion. AlphaGo made a move no human had ever conceived. AlphaFold won a Nobel Prize. And now, in 2026, AI is making its Move 37 in every field of science simultaneously.

The Deep Blue moment for all of science is here.

To a future of Abundance,

Peter
© 2026 Peter Diamandis


