SingularityNET’s Progress Towards Beneficial General Intelligence (BGI)

By SingularityNET Feb 27, 2024

    We bring you our 2023 report on progress toward Beneficial General Intelligence (BGI), highlighting the extraordinary body of work carried out by the SingularityNET core AGI team in collaboration with our spin-offs TrueAGI and Zarqa, our partners, and the wider OpenCog community.

    We focus specifically on the technical advancements of the OpenCog Hyperon AGI framework. Key achievements include the advancement of Hyperon’s cognitive components, the extension of AI-DSL into the MeTTa SDK, and progress in experiential learning through projects like ROCCA and NARS.

    We also outline a roadmap for future development, focusing on Hyperon Alpha, real-world applications, expansions of Hyperon components, interoperability with the SingularityNET platform, and implementation of the PRIMUS cognitive model.

    While our commitment to achieving BGI is nothing new, what is different today is our transition from conceptualization to active implementation, translating our AGI R&D ideas into advanced systems ready to deal with real-world use cases. In parallel and with clear synergies, we are energetically progressing our decentralized AI Platform development, aimed at leveraging Hyperon to realize our broad vision of an intelligent, self-organizing, generally-intelligent global brain.

    This report also serves as an open invitation to AGI researchers and the wider AI community to contribute to our open cognitive-level approach and help drive the development of true AGI.

    Table of Contents:

    · OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond
    · What is OpenCog Hyperon?
    · MeTTa Language
    · Distributed Atomspace
    · Cognitive Algorithms
    · Hyperon in 2023
    · OpenCog Hyperon R&D in 2023: Core Components
    · AI-DSL in 2023
    · F1R3FLY Update
    · From Core Tech Development to AGI R&D and Applications
    · Incorporating LLMs into OpenCog Hyperon
    · LLMs and AGI
    · Beyond LLMs
    · Neural-Symbolic Synergy
    · Experiential learning — ROCCA, NARS, Causal Reasoning, AIRIS
    · The Road to AGI: Hyperon Roadmap
    · OpenCog Hyperon
    · The PRIMUS Cognitive Model
    · SingularityNET Platform Integration
    · Towards Beneficial AGI

    OpenCog Hyperon: A Framework for AGI at the Human Level and Beyond

    What is OpenCog Hyperon?

    OpenCog is an integrative cognitive architecture and AI system design with the breadth and complexity to achieve Artificial General Intelligence (AGI) at the human level and beyond via a combination of autonomous learning and human education and supervision.

    The core of the infrastructure is a distributed metagraph called the Atomspace, a highly complex and nuanced form of Knowledge Graph. The Atomspace consists of nodes and (hyper)links containing various information — interrelationships, types, and probabilities. This symbolic space of knowledge allows the system to learn, store, and update knowledge and facts in a reliable (and non-hallucinatory) way and reason logically from this knowledge.
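    As a rough illustration (in Python, not the actual Atomspace API, whose names and capabilities differ), nodes and links can both be treated as atoms, and links can target other links, which is what makes the Atomspace a metagraph rather than a plain graph:

```python
# Hypothetical sketch of an Atomspace-style metagraph. Class and method names
# are invented for this example; the real Atomspace differs substantially.

class Atom:
    def __init__(self, atom_type, name=None, targets=()):
        self.type = atom_type          # e.g. "Concept" for nodes, "Inheritance" for links
        self.name = name               # set for nodes
        self.targets = tuple(targets)  # set for links: the atoms they connect

    def __repr__(self):
        if self.name:
            return self.name
        return f"({self.type} {' '.join(map(repr, self.targets))})"

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom):
        self.atoms.append(atom)
        return atom

    def get_by_type(self, atom_type):
        return [a for a in self.atoms if a.type == atom_type]

space = AtomSpace()
cat = space.add(Atom("Concept", name="cat"))
animal = space.add(Atom("Concept", name="animal"))
link = space.add(Atom("Inheritance", targets=(cat, animal)))  # a link over nodes

print(space.get_by_type("Inheritance"))  # [(Inheritance cat animal)]
```

    In the real system, atoms additionally carry type information and probabilistic truth values, and retrieval is done by pattern queries rather than simple type filters.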

    Multiple AI algorithms from different paradigms (logical reasoning, probabilistic programming, neural networks, evolutionary learning) can be integrated into a common framework, allowing them to leverage the Atomspace for internal representation and to interoperate and work cooperatively on it.

    An associated cognitive systems theory explains how this approach can encompass all the central aspects of human intelligence, focusing on the “cognitive synergy” among different algorithms and modes of memory organization.

    Since circa 2020, the SingularityNET Foundation, together with its ecosystem partner TrueAGI and members of the OpenCog community, has been developing a new, scalable, and more powerful version of the system called OpenCog Hyperon.

    The decision to create a new version of the OpenCog Classic codebase was motivated by scalability and usability considerations and by the need to leverage novel tools and methods from branches of mathematics such as dependent type theory, intuitionistic and paraconsistent logic, and meta-computation at deeper levels of the system design.

    As with the original OpenCog framework, Hyperon is designed to provide modular support for diverse cognitive architectures. We discuss the design and evolution of one such architecture, PRIMUS, in the Road to AGI section below.

    MeTTa Language

    OpenCog Hyperon introduced a new core component alongside the Atomspace: MeTTa (Meta Type Talk), a purpose-built language for cognitive computations that provides an interoperability and communication layer for the architecture. MeTTa has been designed to be decentralized, self-modifying, and self-organizing.

    The basic idea behind MeTTa is that queries in logic languages, probabilistic programming languages, and database engines, as well as pattern matching in functional languages, have the same form and can be united. Chaining such queries is enough to build a programming language that unifies logic inference over knowledge with probabilistic and functional programming, making the implementation of AI algorithms in MeTTa very natural.
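    The unification of these query styles can be conveyed with a small, hypothetical Python sketch (MeTTa itself looks quite different): a single match() primitive serves as a database query, a logic query, and a functional destructuring step, and chaining such queries yields inference:

```python
# Toy illustration of the "queries are pattern matching" idea; not MeTTa.
# Patterns are tuples containing '$variables'; facts are ground tuples.

def match(pattern, fact, bindings=None):
    """Unify a pattern against a ground fact, extending the given bindings."""
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("$"):
        if pattern in bindings:
            return bindings if bindings[pattern] == fact else None
        bindings[pattern] = fact
        return bindings
    if isinstance(pattern, tuple) and isinstance(fact, tuple) and len(pattern) == len(fact):
        for p, f in zip(pattern, fact):
            bindings = match(p, f, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == fact else None

facts = [("parent", "alice", "bob"), ("parent", "bob", "carol")]

def query(pattern, bindings=None):
    for fact in facts:
        b = match(pattern, fact, bindings)
        if b is not None:
            yield b

# Chained queries = logic inference: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
grandparents = [
    (b2["$x"], b2["$z"])
    for b1 in query(("parent", "$x", "$y"))
    for b2 in query(("parent", "$y", "$z"), b1)
]
print(grandparents)  # [('alice', 'carol')]
```

    The same primitive doubles as a database lookup (pattern over rows), a logical goal (pattern over assertions), and functional destructuring (pattern over a value), which is the unification the paragraph above describes.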

    Moreover, programs written in MeTTa are represented as subgraphs in the Atomspace in the same way as knowledge, allowing interlinked algorithms to cooperate by sharing information and intermediate state. In addition to this, the AI algorithms can, in turn, become the input for other agents which can, therefore, change and evolve such algorithms in a process fully coordinated inside the Atomspace.

    Distributed Atomspace

    Another important improvement presented in OpenCog Hyperon is the Distributed Atomspace (DAS), an extension of the Atomspace into a more independent component designed to support multiple simultaneous connections with different AI algorithms, providing a flexible query interface to distributed knowledge bases.

    DAS is the hypergraph OpenCog Hyperon uses to represent and store knowledge, serving as the source of knowledge for AI agents and the container for any computational result created or achieved during their execution. It can be used as a component or as a stand-alone server to store essentially arbitrarily large knowledge bases, providing means for agents to traverse regions of the hypergraph and perform global queries involving properties, connectivity, subgraph topology, etc.

    Cognitive Algorithms

    The OpenCog design supposes implementing a variety of cognitive algorithms on top of its core components to perform knowledge-based reasoning and problem-solving in a synergistic manner.

    Of special importance is PLN (Probabilistic Logic Network), a logic designed to emulate human-like common sense reasoning. This logic excels at dealing with uncertainty, both empirical and abstract knowledge, and the relationships between them. While closely resembling NAL (Non-Axiomatic Logic), PLN remains firmly grounded in probability theory. For instance, the notion of uncertainty is captured by using second-order probability distributions. Such logic provides a common language and contributes to realizing the cognitive synergy in question.
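    For a flavor of how PLN handles uncertainty, here is a hedged Python sketch using the simplified, independence-based deduction strength formula often quoted in PLN expositions; the confidence combination below is a crude placeholder for illustration only, not the actual PLN rule:

```python
# Illustrative sketch of PLN-style truth values (not the production PLN code).
# PLN attaches <strength, confidence> pairs to statements, approximating a
# second-order probability distribution over truth values.

def pln_deduction(sAB, sBC, sB, sC, cAB, cBC):
    """From A->B and B->C (plus term probabilities sB, sC), estimate A->C."""
    if sB >= 1.0:
        sAC = sC  # degenerate case: B is certain
    else:
        # Simplified independence-based PLN deduction strength formula.
        sAC = sAB * sBC + (1 - sAB) * (sC - sB * sBC) / (1 - sB)
    cAC = cAB * cBC  # crude confidence discounting; a placeholder simplification
    return sAC, cAC

s, c = pln_deduction(sAB=0.8, sBC=0.9, sB=0.5, sC=0.6, cAB=0.9, cBC=0.9)
print(round(s, 3), round(c, 3))  # 0.78 0.81
```

    The confidence component is what distinguishes "we believe P with probability 0.8 based on two observations" from the same strength based on thousands, which is the second-order structure mentioned above.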

    Hyperon in 2023

    In 2023, the development of the core components of OpenCog Hyperon advanced remarkably, meeting our targets and incorporating emerging technologies. We have been focusing on delivering the Alpha version of the OpenCog Hyperon AGI framework, whose foundational components are the MeTTa interpreter and DAS. Concurrently, our core AGI team has transitioned to prototyping higher-level Hyperon components in MeTTa, including PLN and the Pattern Miner. Hyperon has already shown its utility as an AGI R&D platform by demonstrating that implementations of other reasoning frameworks, such as the Non-Axiomatic Reasoning System (NARS), can be conveniently ported to MeTTa.

    We have also been actively exploring and implementing scalability improvements at all computational levels, from base hardware to high-level algorithms and distributed computational platforms, all with an eye toward creating the tech stack required to speed up Hyperon’s metagraph-based algorithms. The hardware level includes the design of specialized computer chips to accelerate DAS pattern-matching queries. Backend optimizations besides DAS include compilation to Rholang, the Warren Abstract Machine, and hypervector representations. The possible deployment of Hyperon nodes on HyperCycle and NuNet is also under discussion. Higher-level optimizations include preliminary work on inference control and on attention and resource allocation.

    In early 2023, the Minecraft AI mod Vereya was developed, extended, and published, with our AI agents kept up-to-date with new Minecraft versions and working across platforms. This allowed us to compare our AI agents to the GPT-4-based Voyager agent. Hyperon is now ready to tackle other real-world use cases, including bio-AI and dialog systems, which will accompany the Alpha Release.

    Let’s recap the results of 2023.

    OpenCog Hyperon R&D in 2023: Core Components

    In the first half of 2023, our MeTTa interpreter became ready for internal testing and experimentation, with improvements related to variable bindings, type checking, expression indexing, and other features. A Space API was implemented and improved, and examples of an SQL space and a “Neural Space” with its own pattern matching via LLM (Large Language Model) calls were provided, opening possibilities for further integration with other types of spaces. Further improvements of MeTTa included an advanced REPL and enriched standard library.

    Though these components are still at an early stage of development, the second half of 2023 saw a burst of prototyping of AI algorithms and Hyperon components in MeTTa, including a PLN PoC, a Non-Axiomatic Reasoning System (NARS) implementation in MeTTa, a port of the SingularityNET dialog system to MeTTa, experiments with Pattern Mining, and others.

    In the last quarter of 2023, the first version of the Distributed Atomspace (DAS) integration with MeTTa via the Space API was completed. This important milestone was accompanied by considerable progress in DAS development, including its caching and indexing components, its APIs, and its application to the Bio-Atomspace for longevity research. The following milestones were especially important:

    • A MeTTa module for LLM integration has been prototyped as the basis for a future replacement of the SingularityNET dialog system; the first version of the SingularityNET Platform and Marketplace Assistant has been implemented with it and run as a Telegram bot for testing;
    • A Minimal MeTTa core component has been developed, which allowed rewriting the Rust implementation of the MeTTa interpreter using a minimal set of core operations, making MeTTa even more flexible and amenable to metacomputations and inference control;
    • A Backward Chainer has been implemented in pure MeTTa. The code is only 5 lines long, yet it is universal enough to generate PLN proof trees, discover frequent patterns, compose AI services, and more. While currently inefficient, ongoing experiments to bring inference control to this implementation are already yielding promising results.
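    To convey what a backward chainer does, here is a hypothetical Python toy (far less general than the 5-line MeTTa original, which relies on MeTTa’s built-in unification and non-determinism; facts and rules below are invented for the example):

```python
# Toy backward chainer: a rule (premises, conclusion) fires when all of its
# premises can themselves be proven, recursing down to known facts.

facts = {("frog", "kermit")}
rules = [
    ((("frog", "$x"),), ("green", "$x")),      # frog(X)  => green(X)
    ((("green", "$x"),), ("colorful", "$x")),  # green(X) => colorful(X)
]

def substitute(term, bindings):
    """Replace '$variables' in a term with their bound values."""
    return tuple(bindings.get(t, t) for t in term)

def prove(goal, depth=3):
    if goal in facts:
        return True
    if depth == 0:
        return False
    for premises, conclusion in rules:
        if len(goal) != len(conclusion):
            continue
        # Unify goal with rule conclusion: constants must match, '$vars' bind.
        bindings, ok = {}, True
        for g, c in zip(goal, conclusion):
            if c.startswith("$"):
                bindings[c] = g
            elif c != g:
                ok = False
                break
        if ok and all(prove(substitute(p, bindings), depth - 1) for p in premises):
            return True
    return False

print(prove(("colorful", "kermit")))  # True: colorful <- green <- frog
```

    Starting from the goal and working backward through rule conclusions is what lets the same mechanism generate proof trees, compose services (each service call is a "rule"), and mine patterns, as described above.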

    AI-DSL in 2023

    The AI-DSL project roadmap underwent considerable change over the course of the year due to the advent of new technologies and tools. During Q1, we built prototype tools in Idris to crawl the SingularityNET marketplace and retrieve relevant Protobuf information about the SingularityNET services while verifying their types and subtypes.

    Due to rapid MeTTa development, in Q2 we switched from working with Idris back to our original goal of developing MeTTa tools for AI-DSL. Tools for chaining, typing, and sub-typing were implemented in MeTTa in Q3. Q4 saw the completion of a Protobuf-to-MeTTa parser supporting nested definitions and methods, and its incorporation into the marketplace crawler. The idea of bringing service calls and AI-DSL together and making them accessible to Hyperon led to the extension of the AI-DSL project into the MeTTa SDK for the SingularityNET Platform as the main track.

    With the advent of systems such as ChatGPT, and the release of HuggingGPT as a tool for discovering and connecting services, we began a thorough investigation of HuggingGPT capabilities. We found that, while HuggingGPT was an impressive system, it was quite brittle, oftentimes breaking and not working as desired. As a result, we decided to work on the SingularityNET Platform Assistant to provide an easy-to-use system for interacting with the SingularityNET platform (for instance, requesting services or combined workflows in natural language via an LLM) as an additional track.

    F1R3FLY Update

    In 2023, SingularityNET partnered with F1R3FLY to scale particular MeTTa use cases dramatically through advanced Rholang concurrency methods. Steady progress towards demonstrating MeTTa-to-Rholang compilation has been taking place. The demo is designed to show a near-linear scale-up of execution as hardware and nodes are added to the network. Towards this end, F1R3FLY has:

    • Ensured that the now-completed first iteration of the Rust implementation of RSpace works well in the network;
    • Developed example MeTTa programs for demonstration and is reviewing MeTTa programs under development by the PLN team as candidates for the demo;
    • Refactored the MeTTa to Rholang compiler (HeavyMeTTaL) to support debugging and language modifications;
    • Developed an algorithm to compile Rholang and other programming languages into hypervectors and started working on the compiler;
    • Engaged Oracle as a channel partner to help deliver this demo by providing the cloud infrastructure to support testing, staging, and production deployment.

    Additionally, an extension of the Operational Semantics in Logical Form (OSLF) algorithm to account for a realizability semantics for PLN has been under development. This means that PLN formulae are interpreted as collections of MeTTa programs. This makes for a very powerful query model. Using OSLF provides algorithmic means to generate both the formulae and the semantics/query model.

    In spring 2023, a paper on operational semantics for MeTTa, intended for judging the correctness of the HeavyMeTTaL compiler, was published. Formally specified operational semantics can also be used in the style of Ethereum’s yellow paper in the blockchain domain. In the fall, this approach was updated, producing four alternative calculi for MeTTa for further discussion.

    From Core Tech Development to AGI R&D and Applications

    Incorporating LLMs into OpenCog Hyperon

    The design of the OpenCog Hyperon architecture was inherently infused with tight neural-symbolic interoperability. Neural networks in the Hyperon design were envisioned as trainable skills that should solve repetitive tasks quickly and reliably. We have been using them as such in dialog systems, semantic vision, and Minecraft agents. Being an essentially integrative framework, OpenCog Hyperon is open to incorporating innovative solutions not only in knowledge representation and reasoning but in deep neural networks as well.

    At SingularityNET, we have been developing and using Deep Neural Networks (DNNs) both as services on the SingularityNET platform and as OpenCog components for solving various tasks in robotics, dialog systems, bioinformatics, and other domains. Nonetheless, recent advances in LLMs and Transformer architectures have prompted further exploration of their potential to contribute towards the goal of achieving AGI, be it through incremental augmentations or incorporation into broader AGI frameworks like Hyperon.

    LLMs and AGI

    LLMs occupy an interesting location on the landscape of narrow AI/AGI systems. They possess an incredibly broad scope in a human sense, and yet they are not AGI. They have a number of known serious limitations, including hallucinations (yet despite hallucinating, they struggle with creativity), absence of memory (as an internal component of LLMs themselves), inability to acquire new knowledge consistently, and inability to solve problems requiring multi-step reasoning.

    The core reason for this is that LLMs directly map inputs to outputs in a fixed number of processing steps. This is similar to AlphaGo (and its more recent versions), which learned a fixed mapping to predict the best move by playing hundreds of millions of games. A vast number of combinations are enumerated and hashed during training, and novel combinations are not enumerated during playing (unless the DNNs are augmented with Monte-Carlo tree search).

    That is why generative DNNs are so large — they hash answers during training and recall them with interpolation during inference. And that is why they cannot go far beyond patterns in their training data, e.g., to play with modified rules or to solve a really novel task. They operate as a trained skill, reflex, or intuition, occupying this appropriate functional and architectural position in the neural-symbolic frameworks and Hyperon design.

    The same is true for LLMs. They utilize vast amounts of training data and exhibit limited generalization beyond the training examples. However, the difference from AlphaGo is that this data is not task-specific. Even by recombination, interpolation, and minor generalization of hashed information, LLMs can perform (although not on an expert level) a very broad range of tasks because their training data covers so much of what interests humans in their everyday pursuits. One may call them General Narrow AI.

    Beyond LLMs

    Limitations of LLMs do not intrinsically imply that they are not helpful toward the goal of AGI. Both shortcomings and successes of these systems are, in some ways, educational regarding what AGI is and is not.

    The research community is mitigating the limitations of LLMs by integrating them with different components like knowledge graphs and databases, embeddings-based memory, and plugins for searching the web, doing math, and executing code. LLMs are put into wrapper frameworks, which chain calls to LLMs and compose prompts for them, using templates and injecting their output into the next calls to emulate reasoning and even graphs of thoughts. This adds even more utility to LLMs but does not turn them into AGI. These frameworks have their own limitations. They act as ad hoc symbolic systems but without intrinsic knowledge representation and reasoning capabilities. Additionally, their plugins and components lack compositionality, and the expressive power of the control language for chaining LLMs and interoperating them with other components is highly restricted.
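    The wrapper-framework pattern described above (templated prompts, chained calls, output injected into later prompts) can be sketched as follows; the llm() stub is a hypothetical stand-in for a real model API, with canned answers so the example is self-contained:

```python
# Sketch of LLM prompt chaining with templates. llm() is a hypothetical stub;
# a real framework would call a model API here instead of a lookup table.

def llm(prompt):
    canned = {
        "List the steps to make tea.": "boil water; steep leaves; pour",
        "Explain step: boil water": "Heat water to 100 C.",
    }
    return canned.get(prompt, "unknown")

def chain(task):
    # First call: ask for a plan via a prompt template.
    steps = llm(f"List the steps to {task}").split("; ")
    # Second round of calls: inject each step into a follow-up template.
    return {step: llm(f"Explain step: {step}") for step in steps}

result = chain("make tea.")
print(result["boil water"])  # "Heat water to 100 C."
```

    Note how the control logic lives entirely outside the model: the chaining language decides what to ask and how to stitch answers together, which is exactly where the limited expressive power criticized above shows up.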

    Thus, while the usefulness of LLMs should not be downplayed, utilization of LLMs in further AGI R&D will require a more principled solution to the limitations of LLM architectures either by deeply integrating them into advanced neural-symbolic cognitive systems or by extending them with corresponding neural components.

    Neural-Symbolic Synergy

    Native interoperability of LLMs and Hyperon requires that they speak the same language, namely MeTTa for Hyperon. LLMs could then be considered Atomspaces that can be queried to retrieve results for further evaluation in the context of both symbolic and neural Atomspaces, so that neural and symbolic components talk to each other and solve tasks collaboratively, compensating for each other’s weaknesses. Composing a prompt for an LLM and chaining it with other operations (queries to knowledge graphs or semantic memory, to reasoning engines, to other LLMs, or the next query to the same LLM) will be represented and executed as MeTTa expressions. These can also be composed automatically by any of the above-mentioned components.

    There are multiple ways in which this idea can be implemented and applied. At SingularityNET and Zarqa, we have explored a number of these. A LangChain- or Guidance-like approach supposes that MeTTa is used for writing arbitrary custom scripts for prompt composition and LLM chaining and for translating the output of LLMs back to MeTTa in a custom way, while also asking the LLMs to format the output in a defined style. Another approach is to use LLMs for translating natural language sentences into expressions in a specific knowledge representation, store them in the Atomspace, and perform symbolic reasoning within a particular reasoning system like PLN or NAL (the logic underlying NARS).

    Both approaches can be implemented on top of a generic framework known as MeTTa-Motto. For example, the PoC of the SingularityNET Platform and Marketplace Assistant was implemented in it as a set of task-specific prompt templates and scripts in MeTTa. NARS integration with LLMs using MeTTa-Motto, in turn, serves as an example of a generic Q&A agent focused on a particular reasoning system in Hyperon rather than delegating problem-solving functions to LLMs.

    Tighter integration between Hyperon and LLMs is being explored in a number of ways, including the fine-tuning of LLMs to interpret MeTTa code and produce correct code for a given prompt, and extending LLM architectures to communicate with the Atomspace not only via inputs and outputs, but also through attention mechanisms at intermediate levels of Transformers. Even the idea of putting LLMs inside the Atomspace, so the latter could use explicit knowledge and reasoning during inference, is being considered as a potential step towards AGI.

    Experiential learning — ROCCA, NARS, Causal Reasoning, AIRIS

    The main purpose of OpenCog is to serve as the basis for developing intelligent agents, and our work on such agents migrated to Hyperon in 2023. One important project in this direction is ROCCA (Rational OpenCog Controlled Agent), which focuses on recreating experiential learning in OpenCog. Much effort this year was spent porting the fundamental components required by ROCCA from OpenCog Classic to Hyperon.

    These components include forward and backward chaining, PLN, and pattern mining. Amazingly, the new capabilities of Hyperon/MeTTa not only allowed us to nearly complete these ports in a relatively short time (a good portion of the effort was, in fact, spent testing and improving MeTTa), but also gave us the possibility to increase the quality and generality of these components vastly. As a result, we now have a universal chainer that is able to handle any logical system, classical and constructive mathematics, crisp and probabilistic logic, and more.

    In addition, the renowned NARS experiential reasoning system has been successfully integrated into Hyperon, leveraging the powerful and innovative programmatic constructs of MeTTa. Thanks to MeTTa’s advanced features, such as built-in unification, non-determinism, and intrinsic handling of Atomspaces, this integration has led to a significantly reduced code base. Compared to previous NARS implementations, MeTTa-NARS is modular and portable, facilitates easy experimentation with PLN and novel cutting-edge reasoning mechanisms, and can run in a highly accelerated way with MeTTa-Morph, a MeTTa-to-Scheme-to-C compiler. Furthermore, initial LLM-based natural language input for the reasoner is supported via MeTTa-Motto, which translates sentences into MeTTa representations on which the reasoning component can reliably reason.

    These endeavors are part of a broader initiative to consolidate the most effective elements of various systems (ROCCA, NARS, OpenPsi, and AIRIS) into a unified experiential learning component for Hyperon. What all of these systems have in common is the use of cognitive schematics of the form (precondition, operation) => consequence. These essentially causal pieces of knowledge allow the AI to develop a goal-independent understanding of its environment from both spontaneous and planned interactions with it. Concerning the latter, a recent addition has been the formulation of a curiosity model, enabling agents to explore their environment efficiently by actively seeking the situations of highest uncertainty in order to challenge their own causal knowledge.

    This approach not only enhances learning efficiency considerably, but our initial results suggest it also surpasses common Reinforcement Learning techniques in terms of data efficiency by orders of magnitude. So far, this is being evaluated in diverse grid world scenarios, which may look relatively archaic, but can support very different interaction dynamics between grid cells and are hence able to represent a very diverse set of simulated worlds, some of which might more or less resemble various real-world situations.
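    The cognitive-schematic loop described above can be sketched in Python; the class, environment, and action names are invented for illustration, and real systems like ROCCA and AIRIS are far richer:

```python
# Illustrative sketch of cognitive schematics, (precondition, operation) =>
# consequence, learned from interaction and reused both for goal-directed
# planning and for a simple curiosity heuristic.

from collections import Counter

class SchematicAgent:
    def __init__(self, operations):
        self.operations = operations
        # (precondition, operation) -> Counter of observed consequences
        self.schematics = {}

    def observe(self, pre, op, consequence):
        self.schematics.setdefault((pre, op), Counter())[consequence] += 1

    def plan(self, pre, goal):
        """Pick an operation whose most frequent consequence achieves the goal."""
        for (p, op), counts in self.schematics.items():
            if p == pre and counts.most_common(1)[0][0] == goal:
                return op
        return None

    def curious_action(self, pre):
        """Prefer the least-tried operation: highest outcome uncertainty."""
        return min(self.operations,
                   key=lambda op: sum(self.schematics.get((pre, op), Counter()).values()))

agent = SchematicAgent(["push", "pull", "wait"])
agent.observe("at_door", "push", "door_open")
agent.observe("at_door", "push", "door_open")
agent.observe("at_door", "wait", "nothing")

print(agent.plan("at_door", "door_open"))  # push
print(agent.curious_action("at_door"))     # pull (never tried yet)
```

    The learned knowledge is goal-independent: the same schematics serve any goal posed later, which is the key contrast with reward-specific Reinforcement Learning policies drawn in the paragraph above.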

    One could argue that the notion of causality is not new; it was studied by the ancient Greeks thousands of years ago. Nowadays, it is also studied in animal psychology, as various animal species seem capable of reliable long-term planning and of acquiring some kind of causal knowledge from observing the environment and sometimes their own kind. How specialized these abilities are varies between species and remains a topic of active investigation, but their benefits are obvious.

    Nonetheless, AI systems that can not only use but also learn causal knowledge in real time are rare and have not yet had great success. With Hyperon and its sophisticated experiential reasoning components, this is about to change, making AI not just more adaptive and general, but also more trustworthy, transparent, and ultimately more competent.

    The Road to AGI: Hyperon Roadmap

    OpenCog Hyperon

    The most important milestone for Hyperon in H1 2024 will be the Alpha Release, which will include a MeTTa package management and deployment system along with useful packages and extensions, tutorials, examples of Hyperon applications to practical use cases (including bio-AI, and involving DAS and MeTTa-Motto), and AI libraries in MeTTa.

    A significant milestone will be the achievement of real-world interoperability between Hyperon and the SingularityNET platform. This will incorporate a number of upgrades, including: a) implementation of the platform SDK in MeTTa, allowing the use of services as Hyperon components; b) implementation of AI-DSL as a part of MeTTa SDK, allowing its utilization in both services and clients with gradual development and adoption, and c) further improvement of the platform assistant and its integration with MeTTa SDK.

    Post-Alpha release plans will focus heavily on AGI R&D and the related expansion of Hyperon components. They will include a considerable extension of the Hyperon “firmware,” MeTTa, whose interpreter will be turned into a customizable inference engine, which will facilitate and speed up the implementation of forward and backward chaining, probabilistic sampling, and genetic programming in MeTTa, as well as knowledge-based inference control. Supercompilation techniques, which are planned to be implemented in the inference framework, will be made reusable by particular reasoning engines and other higher-level cognitive components under development.

    The PRIMUS Cognitive Model

    The Hyperon software framework can be adapted to implement a variety of cognitive architectures and AGI approaches. For instance, it can be used to create chat systems that answer questions on a one-off basis, theorem provers, or even cognitive architectures resembling the human mind. That said, a particular cognitive architecture has been at the center of OpenCog development since the beginning, namely PRIMUS (formerly known as
