Dear Singularitarians,
The Beneficial AGI Summit & Unconference 2024 (BGI24) recently marked a pivotal step in our journey toward achieving Beneficial Artificial General Intelligence (BGI). It gathered some of the greatest minds in the field of AI and beyond and gave them an open stage to discuss some of the most exciting ideas and pressing issues at this crucial moment in time, when AGI is an imminent reality.
Today, we want to share with you insights from one of the panel discussions led by our CEO Dr. Ben Goertzel, where several esteemed experts took the stage to discuss and explore software and hardware architectures for Beneficial AGI.
Panelists included Alexey Potapov, Chief AGI Officer of SingularityNET; David Hanson, CEO of Hanson Robotics; Joscha Bach, the renowned AI Strategist at Liquid AI; and Sergey Shalyapin, CTO of SingularityNET.
Interestingly, a conversation about technology led the panelists to explore topics ranging from ethics, social interactions, and the training and education of AI systems, all the way to overcoming the shortcomings of current LLMs with neural-symbolic AI.
The discussion explored some of the key approaches being taken today toward the technical development of AGI systems and how different AGI architectures might lead to different outcomes in terms of benefit for humans and other sentient beings.
The panel was led by Dr. Ben Goertzel, who took the stage to pose a few all-important questions to our panel of renowned experts and thought leaders.
To watch the full panel titled Software and Hardware Architectures for Beneficial AGI, please visit the Beneficial AGI Summit 2024 livestream.
In his first question, Dr. Ben Goertzel asked panelists about the connection between the architecture of AGI systems and their potential for ethical or unethical behavior, wondering if certain architectures might inherently lead to more compassionate AI.
"What are the relationships between the AGI architecture and the ethical behavior of the AGI system? Are some more likely to lead to AGIs that are compassionate to people and other sentient beings? And are some of them bound to lead to less fortunate outcomes for humanity?"
Joscha Bach shared his perspective first, emphasizing that the evolution of AI seems almost accidental and that current AI architectures, like Large Language Models (LLMs), differ significantly from how organisms develop intelligence.
He suggested that current architectures might not inherently scale towards consciousness or creativity in the same way humans do, leaving open questions about their potential for ethical behavior.
"Current AI alignment approaches do not stem from a deep understanding of how to build a society capable of perpetuating itself into the future in which we coexist with AI and with each other and life on Earth, but mostly from a perspective that is about emulating good behavior."
Joscha Bach, AI Strategist, Liquid AI
Ultimately, he pointed out that the way we currently develop AI systems is vastly different from the way nature evolved us as intelligences, and the question remains: will those two architectures ever meet, and are they even compatible with each other?
David Hanson went on to talk about how AI's current lack of inherent drives for survival, curiosity, and compassion makes it rather amoral.
He highlighted the necessity of instilling these biological drives in AGI systems to ensure their benevolence and safety, suggesting that bio-inspired approaches might be key. He believes that the benevolence and drives that come with appreciating and being curious about life are essential to creating a safe, beneficial AGI.
So, naturally, this poses the question: why do *we* have these kinds of drives, the drive to live, learn, and love, and what can we do to instill them in the foundation of our AGI architecture?
Alexey Potapov emphasized a dichotomy between imperative algorithms, which are limited and potentially dangerous outside their training scope, and architectures capable of world modeling.
As an example, he pointed to AlphaGo. While highly proficient at playing Go, a task with well-defined rules and outcomes, it operates on a principle that generalizes to other feed-forward or imperative algorithms. These algorithms excel within their trained domain but may act unpredictably or inadequately when faced with situations outside their training scope. The same, he says, applies to LLMs.
He advocated for architectures that can model the world and assign values, suggesting this as a safer route toward ethical AGI.
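To make this dichotomy concrete, here is a minimal sketch (our own illustration, not code presented at the panel; all names in it are hypothetical) contrasting a feed-forward policy, which maps inputs straight to actions, with an agent that consults an explicit world model and a value function before acting:

```python
from typing import Callable, List


def feed_forward_policy(observation: List[float]) -> str:
    """Imperative/feed-forward style: a fixed mapping from input to action.

    Within its training distribution this works well; outside it, nothing
    in the architecture can notice that the situation is novel.
    """
    return "move_left" if observation[0] < 0.5 else "move_right"


class WorldModelAgent:
    """World-modeling style: simulate candidate actions, score the outcomes."""

    def __init__(self,
                 simulate: Callable[[List[float], str], List[float]],
                 value: Callable[[List[float]], float]):
        self.simulate = simulate  # predicts the next state given an action
        self.value = value        # assigns a value to a predicted state

    def act(self, observation: List[float], actions: List[str]) -> str:
        # Pick the action whose predicted outcome the value function rates highest.
        return max(actions,
                   key=lambda a: self.value(self.simulate(observation, a)))


# Toy usage: values (e.g. safety preferences) live in one explicit place
# and can be changed without retraining the whole policy.
agent = WorldModelAgent(
    simulate=lambda s, a: [s[0] + (0.1 if a == "move_right" else -0.1)],
    value=lambda s: -abs(s[0] - 0.5),  # prefer states near a safe setpoint
)
print(agent.act([0.2], ["move_left", "move_right"]))  # -> "move_right"
```

The point of the contrast is where values live: in the feed-forward case they are baked implicitly into the mapping, while in the world-modeling case they are an explicit, inspectable component.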
Sergey Shalyapin focused on the engineering aspect, noting the absence of world modeling in current AGI approaches. He went on to say that while LLMs and other forms of generative AI are proficient at certain behaviors, in the context of AGI we need the world model and the behavioral model to be co-trained.
He emphasized the need for AGIs that can understand and interact with the world in complex, open-ended scenarios, suggesting that current models are insufficient for truly ethical behavior.
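As a rough, hypothetical sketch of what co-training a world model with a behavioral model might look like (our illustration in PyTorch, not Shalyapin's design; every module and name here is an assumption), both networks can be updated from the same experience, so the policy's actions stay grounded in the dynamics the world model learns to predict:

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 4
world_model = nn.Linear(obs_dim + act_dim, obs_dim)  # predicts the next state
behavior_model = nn.Linear(obs_dim, act_dim)         # proposes actions
value_head = nn.Linear(obs_dim, 1)                   # toy value estimate over states
optimizer = torch.optim.Adam(
    list(world_model.parameters())
    + list(behavior_model.parameters())
    + list(value_head.parameters()),
    lr=1e-3,
)

def co_training_step(state, action, next_state, reward):
    # World-model loss: predict the observed next state from (state, action).
    pred_next = world_model(torch.cat([state, action], dim=-1))
    model_loss = nn.functional.mse_loss(pred_next, next_state)

    # Value loss: regress the toy value head toward the observed reward.
    value_loss = nn.functional.mse_loss(
        value_head(next_state).squeeze(-1), reward)

    # Behavior loss: imagine acting through the world model and prefer
    # actions whose predicted outcomes the value head rates highly. (A real
    # system would stop gradients into the world model and value head here.)
    imagined_action = torch.tanh(behavior_model(state))
    imagined_next = world_model(torch.cat([state, imagined_action], dim=-1))
    behavior_loss = -value_head(imagined_next).mean()

    loss = model_loss + value_loss + behavior_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random experience:
s, a = torch.randn(16, obs_dim), torch.randn(16, act_dim)
s2, r = torch.randn(16, obs_dim), torch.randn(16)
print(co_training_step(s, a, s2, r))
```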
Dr. Ben Goertzel's second question concerned how AGI systems could be architecturally prepared to evolve toward superintelligence in a way that ensures beneficial outcomes.
Joscha Bach highlighted the importance of shared purposes and a cooperative societal model. He argued for moving beyond superficial ethical behaviors to foster a deep, existential understanding of coexistence among humans and AI.
David Hanson suggested that creating biologically inspired AGIs that can learn from human experiences might foster a beneficial co-evolution. He envisioned AGIs evolving from bio-inspired seeds into powerful, compassionate entities through relationships with humans.
Alexey Potapov emphasized the necessity of AGIs being able to model and understand humans to act ethically. He argued for architectures that allow AGIs to learn complex models of the world, including human values and social dynamics.
Sergey Shalyapin spoke to the need for iterative thinking processes in AGI, allowing for creativity and robust world modeling. He envisions a future where AGI could surpass human capabilities in task-solving and creativity, while simpler AI forms might focus on human-machine interaction.
While every panelist brought unique perspectives and opinions to the podium, they all seemed to agree on the importance of world modeling, self-understanding, and biologically inspired drives for developing ethical AGIs and preparing them for the eventual transition to superintelligence.
They emphasized a co-evolutionary approach, with AGIs learning from and with humans to ensure that they move beyond amorality and learn ethical behaviors that allow them to enhance our lives and the lives of all sentient beings.
The discussions from our esteemed panelists at the BGI24 conference show us that the path forward holds great promise, but also peril.
The insights into AGI architectures, ethical considerations, and the journey toward superintelligence challenge us to rethink much more than the future of technology. They challenge us to rethink the very fabric of our societal and moral frameworks.
By coming together at this important event, at a crucial time in history when AGI is moving closer and closer to reality, these experts remind us that our journey towards AGI is not just a technical challenge. It's a mirror reflecting our deepest fears, our highest hopes, and the values we uphold every single day.
As we venture forward, let us do so with an awareness of the delicate balance between creation and destruction, between control and autonomy, and between the known and the unknowable.
The path we choose will shape not just the future of AI, but the legacy of the human race itself.
SingularityNET is a decentralized Platform and Marketplace for Artificial Intelligence (AI) services. Our mission is the creation of a decentralized, democratic, inclusive, and beneficial Artificial General Intelligence (AGI), democratizing access to AI and AGI technologies through: