Welcome back to Foresight Radio, where we dive deep into the technologies shaping our world and explore how they're redefining the way we work, live, and lead. I'm Tom Koulopoulos, and today we're diving into a concept that's poised to redefine our relationship with artificial intelligence: recursive AI. This is the moment when AI stops waiting for us to teach it and starts teaching itself. And this is where things begin to get really interesting. If you thought agent-based AI was the next big thing, this is what comes after that next big thing.

Let me set the stage. Over the past decade, AI models have been trained on massive data sets—text, images, audio, and code, but mostly text. For instance, Meta's Llama 3 model was trained on up to 15 trillion tokens. A token isn't exactly a word; it's more like a piece of a word, so a thousand tokens come to roughly 750 words. For perspective, that's maybe 10 to 20% of the words indexed by Google. In other words, models like Llama are already trained on a significant portion of the publicly available text of the internet. They aren't just scraping the surface—they're plumbing the depths of our collective digital knowledge.

These vast data sets enable AI to perform some impressive tasks, but they also tether the AI to existing knowledge. This can be frustrating because, while we want AI to be creative, it often just regurgitates and reconstructs ideas it has already been trained on. It's like a student who excels at memorization but struggles with critical thinking and creativity: they can recite information, but they can't extrapolate or innovate from it.

The real transformation occurs when learning becomes recursive. This applies to humans as much as to AI. When a system uses its existing knowledge to determine what it needs to learn next, it's doing what, in humans, we call curiosity. This mirrors how we develop.
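The back-of-the-envelope arithmetic above can be made concrete. The 0.75 words-per-token ratio is only the rough rule of thumb used here, not an exact conversion (real ratios vary by tokenizer and language):

```python
# Rough rule of thumb from the discussion: 1 token ≈ 0.75 words.
# This is an approximation; the true ratio depends on the tokenizer.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: float) -> float:
    """Estimate the word count corresponding to a token count."""
    return tokens * WORDS_PER_TOKEN

llama3_tokens = 15e12  # ~15 trillion tokens, Llama 3's reported training scale

print(f"{tokens_to_words(1000):.0f} words per 1,000 tokens")
print(f"~{tokens_to_words(llama3_tokens):.3e} words in the training corpus")
```

Run it and the 15 trillion tokens work out to on the order of eleven trillion words, which is the basis for the "significant portion of the public internet" claim.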
After years of structured education, we begin to teach ourselves, integrating our experiences and insights to evolve our understanding of the world. Recursive AI is inching toward that kind of capability, marking a significant shift in how AI learns and adapts. It's not just about learning; it's about learning how to learn.

Think about DeepMind's AlphaZero. Unlike its predecessor, AlphaGo, which learned from human games, AlphaZero started with no prior knowledge and learned to play games like chess and Go solely through self-play. That self-reinforcing learning loop is an example of recursion. Another example is Meta's Llama model, which has been used in continual-learning experiments. These models adjust their training based on the impact of specific kinds of data and improve their performance on tasks like question answering by evaluating what they've already learned and testing it against new information. It's like the way humans learn, growing and adapting over time.

Recursive AI introduces a form of curiosity, enabling systems to identify knowledge gaps and seek out information to fill them. This self-directed learning is a step toward AI developing a form of synthetic intuition. As Alvin Toffler said in *Future Shock*, "The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn." Recursive AI embodies that principle. It doesn't just learn; it unlearns what no longer serves it and relearns based on new data.

This leads to something I call "synthetic intuition": the ability to connect disparate pieces of information to generate novel insights. It's what Watson and Crick did when they discovered the double-helix structure of DNA. It's what Einstein did when he developed the theory of relativity. Recursive AI brings about this kind of leap—an entirely new construct within which we understand the world.
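The self-play loop behind a system like AlphaZero can be illustrated with a deliberately tiny sketch: an agent that starts knowing nothing about a simple game (one-pile Nim, where you take one or two stones and taking the last stone wins) and learns a value table purely by playing against itself. This is an illustrative toy, not AlphaZero's actual method, which uses deep networks and Monte Carlo tree search:

```python
import random

# Tabular "value network": estimated value of each pile size
# for the player about to move. Starts empty: no prior knowledge.
V: dict[int, float] = {}

def value(pile: int) -> float:
    return V.get(pile, 0.0)

def choose(pile: int, eps: float) -> int:
    """Epsilon-greedy move: mostly hand the opponent the worst position."""
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < eps:
        return random.choice(moves)  # explore
    return min(moves, key=lambda m: value(pile - m))  # exploit

def self_play(games: int = 5000, eps: float = 0.2, lr: float = 0.1) -> None:
    for _ in range(games):
        pile, history = 10, []
        while pile > 0:
            history.append(pile)           # position faced by the mover
            pile -= choose(pile, eps)
        outcome = 1.0                      # last mover took the last stone: win
        for p in reversed(history):        # credit positions back up the game
            V[p] = value(p) + lr * (outcome - value(p))
            outcome = -outcome             # players alternate each ply

random.seed(0)
self_play()
print("pile 3:", round(value(3), 2), " pile 4:", round(value(4), 2))
```

After a few thousand self-play games, pile sizes divisible by three should score as losing positions and the others as winning, which is Nim's known optimal strategy — a rule the agent was never told.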
For example, Elicit, developed by Ought, is an AI research assistant that helps users find academic papers, ask questions, and summarize findings. A researcher might input a question about the effectiveness of a vitamin supplement, and Elicit will synthesize information from multiple studies to provide a comprehensive answer.

Another example is Windsurf, formerly known as Codeium, which OpenAI is considering acquiring. Windsurf offers an AI system that evolves and learns over time from both its successes and failures. One of the remarkable things about Windsurf is that, when I interact with it, I can ask why it didn't anticipate certain issues, and over time, it learns from these conversations. This interaction builds a form of memory, which is essential for recursive learning. Memory allows the AI to retain context across conversations, creating continuity and improving its understanding over time.

OpenAI is already exploring ways to create memory across prompts, enabling AI to retain context and build upon previous interactions. This continuity is essential for developing more advanced and intuitive AI systems. You've probably noticed that if you're using ChatGPT, it might refer to you by name and recall prior conversations; it strikes a chord with you because it connects topics from past exchanges.

As AI systems evolve, they'll no longer be confined to their initial training data. They will generate new knowledge, moving from intelligence to creativity—and potentially, as I suspect, even consciousness, or at least the appearance of it. Let's reflect on our own development. The first 16 years of our lives are structured around formal education, but the most significant growth comes afterward, as we teach ourselves, synthesize experiences, and test those beliefs with others. Recursive intelligence works similarly—it builds not just on itself, but through trial, tribulation, and interaction with other entities.
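A minimal sketch of how memory across prompts might work: a store of remembered facts that gets prepended to each new prompt. The class and method names here are hypothetical, invented for illustration; production systems layer summarization models and vector retrieval on top of this basic idea:

```python
# Hypothetical sketch of cross-session memory, not any vendor's real API.

class ConversationMemory:
    """Retains facts across sessions so later prompts carry prior context."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def remember(self, fact: str) -> None:
        """Store a fact distilled from an earlier conversation."""
        self.notes.append(fact)

    def build_prompt(self, user_message: str) -> str:
        """Prepend remembered context, much as a system prompt would."""
        context = "\n".join(f"- {n}" for n in self.notes)
        return f"Known about this user:\n{context}\n\nUser: {user_message}"

memory = ConversationMemory()
memory.remember("Hosts a podcast about emerging technology.")
memory.remember("Previously asked why an issue wasn't anticipated.")
print(memory.build_prompt("Did you account for that issue this time?"))
```

The point of the sketch is the continuity: the second session's prompt contains what the first session taught the system, which is what lets the model appear to "recall" you.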
This shift from agent-based technology to recursive learning could usher in a new era of AI, one that is more creative and innovative in its capabilities. Recursive AI could lead to systems that evolve through self-reflection, continuously learning from their experiences and their interactions with the world.

This transformation has vast implications. First, recursive AI could be the foundation for lifelong learning systems—AI entities that grow alongside us. Imagine AI doctors who improve with each patient, financial advisors who adapt to market changes in real time, or teachers who tailor their methods to individual students. Second, recursive AI could redefine autonomy. Finally, recursive AI will compel us to reconsider governance, safety, and ethics. As AI begins to rewrite its own rules and build its own mental models, it will challenge our understanding of machine intelligence and even of our own. The most significant risks may come not from AI itself but from humans misusing it, so as we move forward, it's important that we develop proper governance and safety measures to mitigate them.

We are not at the end of AI development. We're just at the very beginning, and recursive AI is not just the next step; it's a leap. It will allow AI to teach itself and choose what to learn next. It also challenges our current understanding of knowledge and ignorance. As Karl Popper once said, "Our knowledge can only be finite, while our ignorance must necessarily be infinite." Recursive AI challenges that notion because it continuously closes the gap between ignorance and knowledge.