
SingularityNET AI Platform 2024 — strategy and roadmap

By SingularityNET February 27, 2024

 

SingularityNET AI Platform 2024 Roadmap

Introduction

Over the last few years, we have been pushing the boundaries of decentralized AI, developing a comprehensive suite of innovative tools and products designed to develop and utilize AI services in ways that prioritize decentralization, safety, scalability, and benefits to all — developing the SingularityNET decentralized AI Platform.

The core goal of our Platform remains the same as it was in 2017 when we founded SingularityNET as the first truly decentralized AI network: To create a foundation suitable for running AGI systems with general intelligence at the human level and beyond in a secure, efficient, easily usable and fully decentralized way, without any central owners or controllers. Along the way, as our AI systems gradually move toward full AGI capabilities, the platform must also provide a high-quality decentralized infrastructure for AI applications serving diverse vertical markets.

One thing this mandate means is that — unlike most of the more recent entrants in the decentralized AI space — the Platform cannot be specialized to any particular class of AI algorithms or data types, nor any particular vertical market application. If it is going to serve as the decentralized infrastructure for the global economy as the world enters the AGI phase, it must be far more generic and flexible than that.

To meet these goals, the Platform does much more than aggregate multiple AI services on a single decentralized network, and functions as an end-to-end ecosystem of diverse, complex processes and interconnected components. These include the Marketplace and API, Publisher Portal, Developer Portal, SDKs, CLI, Daemon, and advanced smart contracts. Each component addresses several challenges, ranging from showcasing AI services and their respective capabilities to enabling integration into third-party applications.

The Platform will orchestrate services, payments and hosting in a decentralized way, integrating horizontal blockchain layers and vertical implementations. Following this vision, the Platform architecture currently foresees AI models to be hosted not only on a serverless infrastructure currently provided as an easy and risk-free service by the Platform, but also on fundamentally decentralized infrastructures, such as HyperCycle, NuNet and ICP (Internet Computer Protocol). All this will be bound together by ‘AI Deployment Infrastructure-as-a-Service,’ making the deployment and hosting process both easy and extremely flexible and powerful. The particulars of each solution component will be explained further below.

These core functionalities serve as the foundation, supplemented by ongoing research and development initiatives exploring the implementation of next-generation tools and third-party integrations.

The purpose of this report is to detail the achievements of the past year and outline our Platform development roadmap for 2024, with an emphasis on four key items:

  • The Internet of Knowledge — a unique approach to extending the applicability of decentralized AI in the immediate term, and paving the way for the emergence of decentralized AGI;
  • Integrations between the SingularityNET Platform and other decentralized networks: HyperCycle, NuNet, Cardano, Dfinity (ICP), and others, all part of the process of moving toward a next-generation cross-chain decentralized AI ecosystem;
  • Scalability and Usability improvements, including large new features in basic areas like hosting and billing;
  • Adoption through targeted initiatives like Deep Funding, SingularityNET ecosystem spinoffs, and more.

Throughout 2023, the Platform underwent architectural changes and other strategic enhancements aimed at refining and solidifying its development trajectory and aligning it with our Beneficial General Intelligence (BGI) plans, bringing us closer to our shared vision: a full-scale modernized AI Platform.

Table of contents:

· Platform Strategy and Roadmap for 2024
· The Internet of Knowledge — A Distributed and Decentralized Metaframework
· Deployment of Knowledge Nodes and Model Nodes on The Platform
· Decentralized AI Deployment Infrastructure-as-a-Service
· AI-DSL and Unified API for The Internet of Knowledge
· Knowledge Node Assembling Deployment Toolkit
· Neural-symbolic MeTTa-based Framework for AI Orchestration and LLM Tooling For Zarqa
· SingularityNET Platform Assistant
· Decentralized Collaborations on the Platform Architecture and Vertical Tech Stack
· ICP Integration and Decentralized AI Marketplace Deployment
· HyperCycle: Steps Toward a Fully AI-Customized Decentralized Software Stack
· NuNet AI Model Hosting for the SingularityNET Platform
· Custom AI Developments for the Enterprises
· Accelerating Progress on Cardano Integration
· Scalability and Usability Improvements, Smoothing the Path to a Decentralized AI Future
· Improving the Onboarding Experience
· Development of Text User Interface (TUI)
· Improved Technical Documentation
· Streamlining the Onboarding Process and Publisher Experience
· Facilitated Service Development and Deployment Automation
· Improving Key Components: CLI, SDK, Daemon
· Progressed with Daemon, CLI, and SDK Transformation
· Developing Zero-code AI Model Training
· SingularityNET Token Bridge
· Driving Platform Utilization in 2024
· Deep Funding
· Deep Funding Platform Development
· Deep Funding Request For Proposals (RFPs)
· Growth Strategy
· How to Get Involved?
· SingularityNET Ecosystem Spinoffs and Incubating Initiatives
· Rejuve.AI
· Jam Galaxy
· Mindplex
· Domain-Oriented AI Metaservices With AI Training Capabilities
· Scaled Image Generation Metaservice
· Controllable Speech Synthesis Metaservice
· Text Generation Training Metaservice
· Conclusion

Platform Strategy and Roadmap for 2024

The Internet of Knowledge — A Distributed and Decentralized Metaframework

Deployment of Knowledge Nodes and Model Nodes on The Platform

The concept of the Internet of Knowledge involves separating ML models from knowledge containers by implementing both as decentralized nodes, creating a synergistic network in which knowledge becomes interoperable across an ecosystem of ML models and can be composed into efficient AI metaservices. In such a network, Knowledge Nodes can be dedicated to a domain, task, or identity, dynamically updated, and cross-utilized by ML models represented as Model Nodes. This opens a wide range of possibilities for academic research groups, software developers, data operators, and business-oriented, socially motivated, idea-driven, and simply creative teams — even those without specific AI development skills — to contribute efficiently to the evolution of beneficial AI.

The Internet of Knowledge can contain static nodes aggregating the golden standard knowledge and best practices or, on the other side of the spectrum, contain highly dynamic domain or subdomain representation with real-time updates. Knowledge nodes can support different modalities and multimodal knowledge, contain both declarative and procedural knowledge, be focused on implementation details and real-world scenarios, have different designs, implement various databases, and be supplemented by various retrieval subsystems for better task-driven optimization. Each node has its own technical specification for its operation, submitting a specifically formulated task or entity identifier to the control stack.

It should be emphasized that knowledge representation is most effective in the form of a graph, where a system of connections is specified between data units. The described toolkit simplifies the creation of Knowledge Nodes by providing tools for formatting and organizing data into a graph structure, along with a set of universal, extensible interfaces for interacting both with AI agents and with external systems. The use case is as follows: using the toolkit in a declarative style, the user describes the desired parameters of the node and the “contract” for the data structure, then launches the automatic deployment process; the result is the user’s own Knowledge Node, completely ready to work in the decentralized Internet of Knowledge.
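The declarative flow just described (describe the node's parameters and its data "contract", then launch automatic deployment) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the field names (`domain`, `update_mode`, `contract`) and the `deploy` function are invented for this sketch and are not the actual toolkit's schema.

```python
# Hypothetical sketch of a declarative Knowledge Node description.
# All field and function names are illustrative assumptions, not the
# real toolkit API.
from dataclasses import dataclass


@dataclass
class DataContract:
    """Declares the structure the node's knowledge data must follow."""
    entity_types: list      # e.g. ["Paper", "Author"]
    relation_types: list    # e.g. ["cites", "written_by"]


@dataclass
class KnowledgeNodeSpec:
    name: str
    domain: str             # the domain the node is dedicated to
    update_mode: str        # "static" (golden-standard) or "dynamic"
    contract: DataContract


def deploy(spec: KnowledgeNodeSpec) -> dict:
    """Stand-in for the automated deployment step: validate and return a handle."""
    assert spec.update_mode in ("static", "dynamic"), "unknown update mode"
    return {"node": spec.name, "status": "deployed"}


spec = KnowledgeNodeSpec(
    name="ai-papers",
    domain="machine-learning",
    update_mode="dynamic",
    contract=DataContract(entity_types=["Paper", "Author"],
                          relation_types=["cites", "written_by"]),
)
handle = deploy(spec)
```

The point of the declarative style is that the user states *what* the node should hold, and the deployment machinery decides *how* to provision it.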

A Knowledge Graph is a semantic network that visualizes entities and the relationships between them. The information represented by the Knowledge Graph is stored in a graph database. An entity is a real object such as an event or a person. In a Knowledge Graph, these real objects are represented as nodes. Each node/entity is related to other nodes/entities. The relationships are represented by edges — connections between the nodes. Edges can, for example, have the meaning “is part of,” “works at,” or “has properties.” By connecting the nodes via the edges, i.e. the relationships between the individual objects, a knowledge network is created.
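As a toy illustration of this node-and-edge structure, a graph can be represented as labeled triples and queried for an entity's direct relationships. The entities and relations below are invented for the sketch:

```python
# A minimal knowledge graph as (source, relation, target) triples.
# Nodes are entities; edges are labeled relationships such as
# "works_at" or "is_part_of".
edges = [
    ("Alice", "works_at", "SingularityNET"),
    ("Marketplace", "is_part_of", "Platform"),
    ("Platform", "has_property", "decentralized"),
]


def neighbors(graph, entity):
    """Return entities directly connected to `entity`, with the relation label."""
    return [(rel, dst) for src, rel, dst in graph if src == entity]


print(neighbors(edges, "Alice"))  # [('works_at', 'SingularityNET')]
```

A production graph database adds indexing, metadata, and query languages on top, but the underlying model is exactly this: labeled edges between entity nodes.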

The graph database stores the Knowledge Graph’s nodes and edges as well as the associated metadata. The nodes and edges of a particular graph can be described by universal methods of graph representation, which make it possible, on the one hand, to describe complex and structured data and, on the other, to maintain a flexible and extensible model for representing information. An important aspect of creating a Knowledge Graph is the semantic enrichment of the data: the data is enriched with additional information that more precisely describes the meaning of, and relationships between, the nodes and edges.

The Knowledge Nodes, in collaboration with Model Nodes, create a powerful framework capable of acting in different modalities to facilitate cognitive synergy. In turn, Model Nodes — which can be represented by symbolic and neural-symbolic algorithms as well as deep learning models such as GANs, Stable Diffusion, and Transformers — can benefit from knowledge access throughout their evolution: they are not constrained to using relevant knowledge at inference time but can also reuse it in retraining setups, enabling federated and continual learning for the system as a whole.

We also apply Integration Nodes that wrap popular frameworks for building and deploying dialogue agents, such as LangChain, AutoGPT, and AutoChain, in order to aggregate and synergize the best practices evolved in various developer communities and to simplify integration of the Internet of Knowledge into the SingularityNET Platform.

Ultimately, these factors position SingularityNET to become the Knowledge Layer of the Internet in the AI era. Knowledge Nodes and Model Nodes deployed on the blockchain can interact with AI agents through smart contracts, thereby achieving all the main advantages of functioning in a decentralized network and accelerating the evolution of beneficial AI.

Decentralized AI Deployment Infrastructure-as-a-Service

AI developers can focus on what they do best, and AI customers can be confident that the services they want will have high uptime, robustness, and performance, and will be deployed and managed in secure, scalable environments. This is the promise of the AI-Infrastructure-as-a-Service (IaaS) we plan to offer for a fee or a share of revenues, much as app stores have simplified the mobile app economy for users and developers. Our IaaS tools will play a role similar to that of tools offered by platforms such as AWS and Azure, but with the following design goals tailored to the needs of networked AI:

  • Optimize for the computational requirements of training and deploying machine learning models. This goes beyond deep neural networks and GPU usage and considers graph processing, multi-agent systems, dynamic distributed knowledge stores, and other processing models needed to allow the emergence of networked AGI.
  • Support processing of stateful services, which currently represents a challenge in cloud platforms but is necessary for many tasks, such as those of conversational agents, task-oriented augmented reality, personal assistants, and others.
  • Provide different runtimes and environments for deploying models, from the checkpoint and code level to the container and VM level.
  • Autoscale services in a serverless, event-driven architecture that allows capacity to grow and shrink with load.
  • Include secure support for public, private, and hybrid cloud deployments (public–private mix and edge–cloud mix).
  • Dynamically optimize compute locations to maximize compute and data proximity, improving performance and reducing bandwidth costs.

We will leverage critical open-source technologies such as Kubernetes and CloudStack and support the deployment of our IaaS solution both on top of existing cloud platforms (where we make optimal use of built-in tooling) and on bare-metal data centers. One key consideration is using cryptocurrency mining hardware to train AI models and to run long-running AI reasoning and inference tasks.

AI-DSL and Unified API for The Internet of Knowledge

AI-DSL is a powerful tool that offers a convenient, universal standard for describing the interfaces and interactions of AI services. From the developer’s point of view, they will no longer be exposed to Protocol Buffers (Protobuf) by default. Instead, developers will enter their service descriptions directly in AI-DSL, and the Platform SDK modules will automatically generate the Protobuf file required by the technical components of the Platform. The development strategy also plans support not only for gRPC services but also for REST APIs, through planned support for translating AI-DSL into REST API definitions. This will significantly expand the capabilities of the Platform for AI developers and ensure a high level of compatibility.
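The described pipeline, in which a declarative service description is translated into the Protobuf file the Platform needs, might be sketched as follows. The input format here is an assumption for illustration, not actual AI-DSL syntax; only the general shape of the translation is shown.

```python
# Hypothetical sketch: turn a declarative service description into a
# minimal proto3 definition. The description format is an invented
# stand-in for AI-DSL, not the real language.
def to_proto(service: str, method: str, inp: dict, out: dict) -> str:
    def msg(name: str, fields: dict) -> str:
        # fields maps field name -> proto type, e.g. {"text": "string"}
        body = "\n".join(f"  {t} {n} = {i};"
                         for i, (n, t) in enumerate(fields.items(), 1))
        return f"message {name} {{\n{body}\n}}"

    return "\n\n".join([
        'syntax = "proto3";',
        msg("Input", inp),
        msg("Output", out),
        f"service {service} {{\n  rpc {method} (Input) returns (Output);\n}}",
    ])


proto = to_proto("Summarizer", "summarize",
                 {"text": "string"}, {"summary": "string"})
print(proto)
```

The real SDK would of course handle nested types, streaming, and REST mappings; the point is that the developer supplies only the declarative description, and the wire-format definition is derived mechanically.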

Developing tailored AI algorithms that can solve real-world problems has been tedious, expensive, and time-consuming. The implementation of AI-DSL as a part of the Platform SDK is paving the way for users to access all the AI services they need in a single place. These self-assembling workflows will replace the current labor-intensive process for creating specialized, one-off AI processes. This protocol will create a universal mode of AI intercommunication and collaboration, making the benefits of complex AI processes accessible at scale.

The next stage in the implementation of the Internet of Knowledge is the development of a universal API for services related to both knowledge graphs and LLMs, representing Knowledge Nodes and Model Nodes respectively. This standard aims to ensure the interaction of the Platform with the Internet of Knowledge ecosystem through the various Platform components: the MeTTa SDK (MeTTa is a purpose-built language for cognitive computations), the Python SDK, the JavaScript SDK, and the MeTTa-Motto modular library (a package that provides interoperability of LLMs and a variety of AI/ML models with Knowledge Graphs, databases, reasoning algorithms, etc.).

Knowledge Node Assembling Deployment Toolkit

The Knowledge Node Assembling Deployment Toolkit is an advanced tool designed for creating and deploying generic modules called Knowledge Nodes that provide universal approaches to knowledge representation. These modules are capable of interacting with ML models and the MeTTa SDK, i.e. with AI agents in the Internet of Knowledge layer. They are designed to store and provide knowledge and contextual data to improve the quality of AI services.

When configuring a knowledge node, the user describes a data contract, according to which they will fill the node with “knowledge data”. A data contract is a declarative configuration specifying the format, structure, and metadata of knowledge data. It serves as a blueprint for the system, guiding the interpretation and integration of diverse data sources into a cohesive graph. This contract defines the semantics, relationships, and properties of entities, allowing the system to harmonize disparate data elements into a unified knowledge representation.

By adhering to the data contract, the system can ensure consistency and interoperability, facilitating the seamless construction and enrichment of the Knowledge Graph from heterogeneous data sources. Knowledge and contexts are subsequently retrieved from the knowledge node through searching and navigating a graph, which involves querying for specific information or exploring relationships within the graph. Navigation includes traversing the graph by following edges to discover related entities and uncovering contextual information. Data enrichment and the use of various types of metadata enhance the precision of searches, enabling the knowledge node engine to explore complex interconnections and derive insights from the rich, interconnected data structure.
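Graph navigation as described, following edges outward from an entity to discover related entities, can be sketched as a bounded breadth-first traversal. The example entities and relations below are hypothetical:

```python
# Sketch of graph navigation: traverse edges breadth-first from a start
# entity, up to a hop limit, collecting related entities with the
# relation that led to them. The graph content is invented.
from collections import deque

edges = {
    "transformer": [("introduced_in", "attention-paper")],
    "attention-paper": [("cites", "seq2seq-paper"), ("topic", "NLP")],
    "seq2seq-paper": [("topic", "NLP")],
}


def related(start, max_hops=2):
    """Entities reachable from `start` within `max_hops` edges."""
    seen, queue, found = {start}, deque([(start, 0)]), []
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # hop budget exhausted on this branch
        for rel, dst in edges.get(node, []):
            if dst not in seen:
                seen.add(dst)
                found.append((dst, rel, hops + 1))
                queue.append((dst, hops + 1))
    return found


print(related("transformer"))
```

Metadata-based filters would slot in at the edge-expansion step, which is how enrichment sharpens search precision in practice.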

Conceptually, the core of a Knowledge Node is a graph database that can be expanded with additional data storage as needed. The toolkit allows users to connect universal interfaces to interact with the node: a control interface, an interface for interacting with AI agents, and an interface for connecting external systems, e.g., for updating the Knowledge Graph from the outside. Users can implement their own data collection and processing system, or connect their own data warehouse to a Knowledge Node to dynamically update the Knowledge Graph using either a webhook mechanism or a REST API.

The SingularityNET team is currently working on a comprehensive example of such a Knowledge Node service to streamline further development. The service will include a dynamically updated graph database and provide access to an extensive data set of scientific and technical texts in the AI and machine learning domains, as well as related technical fields. The prototype will use a database of technical articles with an initial volume of about a million paragraphs, dynamically updated from several sources. Scientific and technical texts and their metadata will be transposed into a graph structure, linking them to each other through a system of connections based on a variety of metadata and, ultimately, providing highly relevant contexts for queries in the field covered. Such a service will provide reliable support for AI services and stand as a good example of the usefulness of a Knowledge Node as an element of the Internet of Knowledge.

Neural-symbolic MeTTa-based Framework for AI Orchestration and LLM Tooling For Zarqa

Disruptive neural-symbolic architectures developed by Zarqa, a new venture from SingularityNET, have been mobilizing our engineering expertise in solutions based on scaled neural-symbolic AI. Zarqa aims to create a pioneering, far more powerful next generation of LLMs, capable of disrupting any industry by merging symbolic reasoning and dynamic Knowledge Graphs with the power of large-scale generative AI based on deep neural networks, resulting in unparalleled conversational and problem-solving capacity.

The Neural-symbolic MeTTa-based Framework for AI Orchestration aims to achieve efficient hybridization by integrating LLMs into a neural-symbolic cognitive architecture. This architecture includes a metagraph-based knowledge store and MeTTa, a cognitive programming language developed by SingularityNET. MeTTa programs form the content of the store and can represent declarative and procedural knowledge; queries to this knowledge, including the programs themselves (implying full introspection); and reasoning and learning rules and heuristics. They can also perform subsymbolic operations, in particular by processing information with neural network modules.

The core framework allows different inference and reasoning paradigms to be implemented in MeTTa, such as Probabilistic Logic Networks (PLN), which is currently being implemented. The MeTTa language design enables highly granular interoperability between reasoning steps and neural information processing operations. This provides features such as storing information produced by generative neural networks in response to user queries in the knowledge base (one-shot external memory for LLMs), verification of LLM output by querying symbolic ontologies (hallucination mitigation), LLM conditioning on Knowledge Graph content, and chaining of LLM inference steps controlled by or altered with symbolic knowledge-based reasoning.
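The hallucination-mitigation step, verifying LLM output against symbolic knowledge, can be illustrated with a minimal sketch in which triples asserted by an LLM are checked for membership in a knowledge base. The triples and KB contents are invented for illustration:

```python
# Sketch of hallucination mitigation: facts asserted by an LLM are
# checked against a symbolic knowledge base before being trusted.
# The knowledge base contents are illustrative assumptions.
knowledge_base = {
    ("Paris", "capital_of", "France"),
    ("SingularityNET", "founded_in", "2017"),
}


def verify(llm_triples):
    """Split LLM-asserted triples into verified facts and unsupported claims."""
    verified = [t for t in llm_triples if t in knowledge_base]
    unsupported = [t for t in llm_triples if t not in knowledge_base]
    return verified, unsupported


ok, flagged = verify([
    ("Paris", "capital_of", "France"),
    ("Paris", "capital_of", "Germany"),  # hallucination: not in the KB
])
```

A real system would query an ontology with reasoning (e.g., entailment rather than exact membership), but the control flow is the same: symbolic knowledge acts as a gate on neural output.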

AI services orchestrations and calls are one of the most relevant areas of development, especially in the context of the Platform. Taking into account the multitude of AI services, as well as the organization of the Internet of Knowledge, it should be noted that individual requests to a specific service become redundant and unproductive.

The idea of this direction is to create a system that uses LLMs to convert requests into a formal language and, within that context, to define several individual requests to AI services. Thus, the Neural-symbolic MeTTa-based Framework for AI Orchestration allows optimized, aggregated calls to a number of AI services, as well as to Internet of Knowledge nodes, to process one formal request that is in fact multi-component.
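The aggregation idea can be sketched as a tiny dispatcher: one formal request decomposes into sub-requests that are routed to registered services in a single pass. The service registry and request format below are assumptions for illustration:

```python
# Sketch of aggregated orchestration: a single formal request carries
# several (service, payload) sub-requests, dispatched together rather
# than as separate individual calls. Services here are toy stand-ins.
services = {
    "translate": lambda text: f"<fr>{text}</fr>",          # mock translator
    "summarize": lambda text: text.split(".")[0] + ".",    # first sentence
}


def orchestrate(request):
    """`request` is a list of (service_name, payload) sub-requests derived
    from one formal query; results are aggregated into one response."""
    return {name: services[name](payload) for name, payload in request}


result = orchestrate([
    ("translate", "Hello"),
    ("summarize", "First sentence. Second sentence."),
])
```

In the framework described above, the decomposition itself would be produced by an LLM emitting the formal request, and the dispatcher would call real Platform services and Knowledge Nodes.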

SingularityNET Platform Assistant

The rising number of services on the SingularityNET marketplace highlighted the need for an automated solution to ensure user comfort and efficient navigation. As a response, we are currently developing an intelligent SingularityNET Platform Assistant, initially envisioned as a chatbot for service-related inquiries. However, further exploration revealed the potential for more comprehensive and innovative functionalities.

The development will prioritize a phased approach, initially introducing a chatbot to answer service-specific and Platform-wide questions. It will leverage existing documentation and development processes to assist users seamlessly. The long-term vision encompasses additional functionalities, such as:

  • Onboarding support: Guiding new users through the Platform’s features and operations;
  • Automated code generation: Streamlining specific tasks by automatically generating code for AI models wrapping, data processing, etc.;
  • Integrated service calls: Enabling users to interact with Platform services directly within the chatbot interface.

The Assistant’s technical foundation harnesses the Internet of Knowledge ecosystem of the SingularityNET Platform and cutting-edge technologies like MeTTa-Motto, which provides interoperability of LLMs with Knowledge Graphs and reasoning. This innovative approach ensures a robust and adaptable foundation for ongoing and future development.

Decentralized Collaborations on the Platform Architecture and Vertical Tech Stack

ICP Integration and Decentralized AI Marketplace Deployment

We are collaborating closely with Dfinity to leverage the strengths and capabilities of the SingularityNET Platform and the Internet Computer Protocol (ICP) to advance decentralized AI infrastructure and bring AI-based services to all dApps building on ICP. This initiative aligns with our shared mission of democratizing AI technology by enhancing Platform functionality and user experience. It also complements our partnerships with other entities, such as Input Output Global (IOHK) from the Cardano ecosystem and our work with HyperCycle to create our own unique Layer 0++ blockchain framework; this Dfinity collaboration exemplifies our commitment to a cross-chain approach to decentralized AI.

A central aspect of the collaboration with Dfinity is the joint development of a decentralized AI Marketplace hosted on the ICP network. The SingularityNET marketplace serves as a hub for accessing and interacting with AI services on the Platform. It offers extensive functionalities such as test requests for services, purchase of services, and exploring and understanding service capabilities. By decentralizing this crucial component, we move closer to achieving complete decentralization and fostering an even broader distribution of AI services and development activities.

To achieve this, we will introduce a universal template for building user-friendly AI service interfaces that can be used by both service providers and users to create UI interfaces with a broad range of functionalities, including web3 authorization, wallet connectors, and, as a result, payment services. This solution aims to significantly reduce the complexity of interface development and streamline the distribution and integration of AI services.

As part of this collaboration, we are considering the creation of pre-built templates for frequently used AI services. These templates would offer standardized UI elements and features tailored to particular service categories.

Building upon our commitment to expanding options and platforms for hosting AI services, we are actively exploring the potential of ICP to host AI services. This feasibility study aligns with our vision for modern AI development, emphasizing scalability, dynamic resource redistribution, and greater accessibility.

Concurrently, we are testing the possibility of hosting AI models and Platform services in ICP canisters. Our research aims to answer key questions such as types of AI models suitable for ICP hosting, performance considerations, and model size limitations.

HyperCycle: Steps Toward a Fully AI-Customized Decentralized Software Stack

HyperCycle is building the essential missing components required for the Internet of AI, where AI agents with complementary capabilities can seamlessly transact, enabling them to collectively tackle problems of ever-increasing size and complexity, empowered by agent-to-agent microtransactions for microservices with sub-second finality.

The TODA protocols provide the essentials. The ledgerless TODA/IP consensus protocol minimizes network traversal and performs only the minimum computation necessary to secure the network itself. Building on TODA/FP, Earth64 Sato-Servers ensure that their assets are secured independently of any blockchain ledger.

Beyond the core functionality, HyperCycle offers vast possibilities. SingularityNET envisions an AI marketplace where humans and machines can transact, advancing machine intelligence and paving the way for true AGI. Additionally, marketplaces for high-performance computing can leverage HyperCycle as a service, supporting advancements in various fields such as science, medicine, and technology.

The synergistic combination of AI services, distributed registry, solution architecture, node software client, and the ability to run a node on several computing devices open up significant opportunities. Building upon this foundation, we are exploring the integration of HyperCycle with our Platform, focusing on several promising avenues.

The first
