When three young founders walked into investor meetings last year with a radical proposal—to build AI systems that learn more like humans and less like data-hungry transformers—they weren’t sure if anyone would bite. The AI world was obsessed with scale, with bigger models trained on ever-larger datasets. But Ben and Asher Spector, along with Aidan Smith, had a different vision. This week, that vision got $180 million in validation.

Their new lab, Flapping Airplanes, represents a bet against the prevailing wisdom of modern AI development. While giants like OpenAI and DeepMind pour resources into scaling existing architectures, this trio is asking a question that sounds almost heretical in today’s landscape: What if the path to better AI isn’t more data, but different algorithms?

“We’re exploring a different set of tradeoffs.” — Aidan Smith, Co-founder, Flapping Airplanes

The Data Efficiency Problem

The central challenge Flapping Airplanes has set for itself is deceptively simple: current frontier models are trained on what amounts to the sum total of human knowledge, yet humans themselves learn from a tiny fraction of that data. The gap between human learning efficiency and AI training requirements isn’t just a curiosity—it’s a fundamental constraint on what AI can become.

“The current frontier models are trained on the sum totality of human knowledge, and humans can obviously make do with an awful lot less,” Ben Spector explained in an interview with TechCrunch. “So there’s a big gap there, and it’s worth understanding.”

This focus on data efficiency isn’t merely academic. In highly constrained domains like robotics and scientific discovery, the ability to learn from limited examples isn’t a nice-to-have—it’s essential. A model that’s significantly more data-efficient could unlock applications that remain out of reach for today’s systems, from autonomous manipulation in unstructured environments to accelerating scientific breakthroughs.
Three Bets on the Future

The research bet is that data efficiency represents a genuinely new direction where meaningful progress is possible. Rather than iterating on transformer architectures, Flapping Airplanes is looking to the human brain for inspiration—while carefully avoiding the trap of trying to replicate biology exactly.

The commercial bet is that solving this problem will create enormous economic value. As Asher Spector noted, “A model that’s a million times more data efficient is probably a million times easier to put into the economy.” The applications span robotics, enterprise software, and scientific research—all domains where data scarcity currently limits AI adoption.

The team bet is perhaps the most contrarian. Flapping Airplanes believes the right people to tackle this problem aren’t necessarily the established researchers who built the current generation of models, but a “creative and even in some ways inexperienced team that can go look at these problems again from the ground up.”

“LLMs have an incredible ability to memorize and draw on this great breadth of knowledge, but they can’t really pick up new skills very fast. It takes just rivers and rivers of data to adapt.” — Aidan Smith

Flapping Airplanes, Not Birds

The lab’s name reflects its philosophical approach to biological inspiration. The founders draw a sharp distinction between their goals and pure neuromorphic computing.

“Think of the current systems as big Boeing 787s,” Ben Spector said. “We’re not trying to build birds. That’s a step too far. We’re trying to build some kind of a flapping airplane.”

The brain serves as what they call an “existence proof”—evidence that alternative algorithms are possible. But the founders are quick to note that silicon and neurons operate under fundamentally different constraints.
The brain’s slow signal propagation (action potentials take milliseconds) compared to the blazing speed of modern processors suggests that the optimal AI architecture may look quite different from biological neural networks.

Aidan Smith, who previously worked at Neuralink, brings a unique perspective to this question. “When you look inside the brain, you see that the algorithms that it uses are just fundamentally so different from gradient descent and some of the techniques that people use to train AI today,” he explained.

Research First, Commercialization Later

Unlike some AI labs that race toward product launches, Flapping Airplanes is taking a deliberately patient approach. The founders acknowledge they don’t have a timeline for when their research will yield commercial applications—and they’re comfortable with that uncertainty.

“We don’t know the answers. We’re looking for truth,” Asher Spector said. “That said, I do think we have commercial backgrounds… we actually are excited to commercialize. We just need to start by doing research, because if we start by signing big enterprise contracts, we’re going to get distracted.”

This research-first philosophy extends to the team’s willingness to pursue approaches that might initially underperform compared to existing methods. “Sometimes radically different things are just worse than the paradigm,” Smith acknowledged. “We’re exploring a set of different tradeoffs. It’s our hope that they will be different in the long run.”

The $180 million seed round gives them substantial runway to pursue these long-term bets without immediate pressure to demonstrate commercial viability. In an AI landscape increasingly dominated by product announcements and revenue projections, Flapping Airplanes represents a different kind of bet—one on fundamental research and the possibility that the next breakthrough in AI might come from a direction nobody is currently watching.

This article was reported by the ArtificialDaily editorial team.