In the last couple of years, much of the conversation around Artificial General Intelligence (AGI) has been dominated by the meteoric rise in popularity of Large Language Models (LLMs) and other forms of deep neural networks (DNNs).
From helping us with everyday tasks to generating exquisite works of art, these models have excelled across a wide range of applications and impressed users around the world. Yet there is one major problem with their prominence in the discourse surrounding AGI: they fall short in the area that matters most, true general intelligence.
Back in 2022, Dr. Ben Goertzel, CEO of SingularityNET and the Artificial Superintelligence Alliance, explored the idea that deep learning systems may not be enough to achieve AGI in its fullest form in a blog post titled “Three Viable Paths to True AGI.”
In his blog, Dr. Goertzel outlines three potentially viable paths that could lead to true AGI, each representing a different way of approaching the immense challenge of replicating human-like intelligence.
Today, we’ll dive into these paths: the cognition-level approach, the brain-level approach, and the chemistry-level approach—each offering unique insights into how we might achieve AGI and what lies beyond the current AI landscape.
The first path that Dr. Goertzel highlights, and one that he is personally most excited about, is the cognition-level approach. This method involves combining various AI techniques, including neural networks, symbolic reasoning, probabilistic programming, and evolutionary learning, into a single unified framework.
At the heart of this approach is OpenCog Hyperon, a project led by Dr. Goertzel and his fellow researchers and engineers that launched its Alpha version earlier this year.
Hyperon is not just another neural network-based system. It’s a hybrid neural-symbolic framework that seeks to model human cognition through the integration of multiple AI paradigms. According to Dr. Goertzel, deep neural networks alone are not enough for achieving AGI because of their limited ability to generalize, innovate, and abstract beyond narrow, specific tasks.
“The absolute upper bound for which these deep nets or any vaguely similar methods could be sensibly hoped to achieve would be what I’d call ‘closed-ended quasi-AGI’ systems which could imitate a lot of human behaviors — but […] would be incapable to address difficult unsolved science and engineering problems, or to perform the self-modification and self-improvement needed to serve as seed AIs and launch a Singularity.”
This critique highlights a fundamental limitation of deep neural networks: their inability to self-organize or to produce genuine creativity.
Hyperon addresses these gaps by incorporating symbolic reasoning, evolutionary algorithms, and advanced probabilistic inference, all operating within a metagraph-based system called Atomspace. This architecture allows for flexible, scalable reasoning and learning processes, drawing inspiration from human cognitive psychology.
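To make the metagraph idea a little more concrete, here is a minimal Python sketch of an Atomspace-like store. It is purely illustrative: the class names, fields, and methods below are invented for this post and are not the actual Hyperon API (which is built around the MeTTa language). The point is simply that symbolic structure and graded, probabilistic truth values can live side by side in one shared knowledge store.

```python
from dataclasses import dataclass

# A toy Atomspace-style metagraph: "atoms" are either nodes (symbols) or links,
# and a link may point at any atoms, including other links, which is what makes
# this a metagraph rather than an ordinary graph. All names here are invented
# for illustration and are not the real OpenCog Hyperon API.

@dataclass(frozen=True)
class Node:
    type: str          # e.g. "Concept", "Predicate"
    name: str          # e.g. "cat", "curious"

@dataclass(frozen=True)
class Link:
    type: str                  # e.g. "Inheritance", "Evaluation"
    targets: tuple             # ordered tuple of atoms (Nodes or other Links)
    strength: float = 1.0      # toy stand-in for a probabilistic truth value

class ToyAtomspace:
    def __init__(self):
        self.atoms = set()

    def add(self, atom):
        self.atoms.add(atom)
        return atom

    def links_of_type(self, link_type):
        return [a for a in self.atoms if isinstance(a, Link) and a.type == link_type]

# Crisp symbolic knowledge ("cats are animals") next to an uncertain, learned
# association, in one store that reasoning and learning components could share.
space = ToyAtomspace()
cat = space.add(Node("Concept", "cat"))
animal = space.add(Node("Concept", "animal"))
space.add(Link("Inheritance", (cat, animal), strength=0.98))
space.add(Link("Evaluation", (Node("Predicate", "curious"), cat), strength=0.7))

for link in space.links_of_type("Inheritance"):
    print(link.type, [t.name for t in link.targets], link.strength)
```

The appeal of such a shared store is that reasoning, learning, and evolutionary components can all read and write the same structures, which is what allows the different AI paradigms in the cognition-level approach to interact.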
Dr. Goertzel and our team believe that Hyperon represents a vital step toward AGI. The system has a design that emulates key cognitive functions—memory, attention, reasoning—using a collection of advanced AI algorithms optimized to interact within a common framework. This hybrid approach brings together the strengths of different AI paradigms while overcoming their individual weaknesses.
If the cognition-level approach focuses on mimicking human thinking processes through a combination of AI paradigms, the brain-level approach takes a more direct route: simulating the brain itself.
Dr. Goertzel acknowledges the complexity of this endeavor but believes that creating a nonlinear, dynamical model of the brain could offer another path to AGI. This approach differs significantly from current neural network models, which, as he points out, are only “neural” in the loosest sense.
Drawing inspiration from computational neuroscience, Goertzel advocates for brain simulation methods based on chaos theory and nonlinear dynamics, which would emulate the complex interactions of neurons at a much more granular level. He references Eugene Izhikevich’s work in dynamical systems, which offers a theoretical foundation for this kind of model.
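To give a flavor of what such a dynamical model looks like in practice, here is a short Python simulation of Izhikevich’s well-known simple spiking-neuron model. The equations and the “regular spiking” parameters come from Izhikevich’s published work; the injected current, time step, and duration are illustrative choices for this sketch.

```python
# Izhikevich's simple spiking-neuron model (Izhikevich, 2003): two coupled ODEs
# plus a reset rule. The a, b, c, d values are the standard "regular spiking"
# parameters; the step current, time step, and duration are illustrative.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # recovery and reset parameters (regular spiking)
dt, T = 0.25, 1000.0                 # integration step and total duration, in ms

v = -65.0                            # membrane potential (mV)
u = b * v                            # membrane recovery variable
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    I = 10.0 if t >= 100.0 else 0.0  # constant current injected from 100 ms onward

    # Forward-Euler update of the model equations:
    #   dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    #   du/dt = a (b v - u)
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

    if v >= 30.0:                    # spike: record the time and apply the reset rule
        spike_times.append(t)
        v, u = c, u + d

print(f"{len(spike_times)} spikes; first few at (ms): {[round(t, 2) for t in spike_times[:5]]}")
```

Even this two-variable model can reproduce a wide repertoire of biological firing patterns simply by changing its parameters, which is exactly the kind of nonlinear richness that the “neurons” in today’s deep networks do not attempt to capture.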
“Even the best possible stab we could take at brain simulation given our current knowledge of the brain wouldn’t be SUCH an accurate stab really — there is just too much we don’t know about how the brain works, and our computer hardware is just too different from neural wetware.”
Despite these challenges, he sees promise in this path. A full-scale brain simulation could provide insights into human-like intelligence by replicating the interactions between various brain regions and neural networks. The Human Brain Project attempted something along these lines but fell short; even so, Goertzel believes that by grounding brain simulations in nonlinear dynamics and running them on advanced hardware, we might yet realize AGI through this approach.
But there’s another consideration: what if instead of simulating the human brain, we started by simulating simpler organisms? As he suggests, simulating the brains of animals like cockroaches or bees might be a more feasible intermediate step.
These simpler brains, with far fewer neurons and connections, could offer valuable insights into the principles of neural dynamics, and such simulations could later be scaled up toward human-level intelligence.
The third path that Goertzel proposes moves away from neural modeling entirely and focuses on chemistry-level AGI. This speculative yet compelling approach involves simulating complex self-organizing systems at the chemical level, with the goal of coaxing emergent brain-like structures to form. Essentially, instead of trying to emulate the brain or cognition directly, we would create artificial chemical systems that mimic the underlying processes of life and intelligence.
Goertzel’s fascination with this approach stems from the insights offered by complex systems science and chaos theory, which suggest that intelligence could emerge from highly dynamic, self-organizing environments. He points to work done by researchers like Bruce Damer (Evogrid) and Walter Fontana (algorithmic chemistry) as potential inspirations. The idea is that by running large-scale simulations of artificial chemistry, we might be able to witness the spontaneous emergence of cognitive structures akin to those found in biological organisms.
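To give a sense of what “running large-scale simulations of artificial chemistry” means computationally, here is a deliberately tiny toy in Python. It is not Fontana’s algorithmic chemistry or Damer’s Evogrid, just an invented sketch of the basic loop: a well-mixed soup of symbolic molecules, random collisions, and simple reaction rules, after which we look at which composite species have become common.

```python
import random
from collections import Counter

# A deliberately tiny "artificial chemistry": molecules are strings over a
# two-letter alphabet, and a random "collision" either bonds two molecules
# together or breaks one apart. Everything here is a toy for illustration only.
random.seed(42)

soup = [random.choice("ab") for _ in range(500)]   # initial soup of monomers
MAX_LEN = 8                                        # cap on molecule length

for _ in range(20_000):
    i, j = random.randrange(len(soup)), random.randrange(len(soup))
    if i == j:
        continue
    x, y = soup[i], soup[j]

    if len(x) + len(y) <= MAX_LEN and random.random() < 0.5:
        # synthesis: the colliding molecules bond; slot j is refilled with a
        # fresh monomer, standing in for an inflow of raw material
        soup[i] = x + y
        soup[j] = random.choice("ab")
    elif len(x) > 1:
        # decay: x breaks at a random bond; its fragments fill both slots,
        # washing y out of the soup
        k = random.randrange(1, len(x))
        soup[i], soup[j] = x[:k], x[k:]

# After many random collisions, which composite "species" dominate the soup?
print(Counter(soup).most_common(5))
```

Real proposals in this space would run at vastly larger scales with far richer reaction rules, and the hard part is spotting genuinely lifelike or cognitive structure in the results, which is the pattern-analysis challenge described next.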
While this approach is highly speculative and “blue-sky,” as Goertzel puts it, it’s also deeply intriguing. The core challenge here is developing a computational environment that supports such massive, distributed simulations and the subsequent pattern analysis required to understand how complex behaviors emerge.
Goertzel acknowledges the high-risk nature of this approach, but he sees immense philosophical appeal in attempting to replicate the very processes that gave rise to life and intelligence in the first place. As he aptly states:
“This potential approach to AGI is very blue-sky-researchy and high-risk, yet also philosophically very attractive — and it’s never really been tried at modern scale.”
Should breakthroughs occur in this area, they could revolutionize our understanding of both intelligence and life itself, providing an entirely new foundation for AGI research.
Each of these three paths plays a distinct role in the broader discourse about AGI and its potential viability in the coming years.
They represent the diversity of approaches that are necessary to tackle the enormous complexity of intelligence itself.
While deep neural networks have achieved remarkable success in narrow AI applications, the creation of human-level AGI will require more than scaling up existing techniques.
Dr. Goertzel’s proposals invite us to think beyond current paradigms and consider more holistic, multi-disciplinary approaches that span cognitive science, neuroscience, and chemistry.
At SingularityNET, our commitment to decentralizing AI development and fostering innovation aligns perfectly with this broader vision. By supporting projects like OpenCog Hyperon and encouraging research into alternative AGI pathways, we are helping to push the boundaries of what is possible.
These three approaches outlined by Dr. Goertzel—cognition-level, brain-level, and chemistry-level—provide a roadmap for future AGI research, each offering a unique perspective on how to achieve true machine intelligence.
Even though the future of AGI is not set in stone, the Singularity is near.
And as we continue to explore these diverse paths, we invite you to join us on our journey to the dawn of an intelligence explosion that could (and likely will) reshape the world as we know it.