REI Network is a decentralized AI research and development platform focused on advancing artificial general intelligence (AGI) through a novel architecture and a collaborative ecosystem. It aims to create a sustainable environment for AI research while ensuring alignment with human values and ethical considerations. [1]
The organization approaches artificial intelligence through scientific and cognitive principles rather than traditional computer-science methods. Its work emphasizes novel neural architectures drawn from biological and cognitive models, aiming to move beyond standard statistical approaches.
One of their key innovations is a universal connectivity layer that lets AI agents interact across various systems without being bound to the operational constraints of any single one. This universal adapter design enables agents to maintain core functionality while integrating with different external architectures.
The architecture includes a translation mechanism that converts insights into standardized formats, supporting broader application and adaptability. This system is intended as a foundational framework for developers building interactive systems, analytical tools, or entirely new forms of applications. [2]
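The adapter-and-translation pattern described above can be pictured with a minimal sketch. Everything here is hypothetical: the `Insight` record, the `UniversalAdapter` class, and the `send` hook are illustrative names, since the article does not expose REI's actual interfaces.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Insight:
    """A record an agent can emit regardless of which backend it runs on."""
    topic: str
    content: str
    confidence: float


class ExternalSystem(Protocol):
    """Any external architecture the agent plugs into exposes one hook."""
    def send(self, payload: dict) -> None: ...


class UniversalAdapter:
    """Translates agent insights into one standardized dict format,
    then forwards them to whatever external system is attached."""

    def __init__(self, system: ExternalSystem):
        self.system = system

    def translate(self, insight: Insight) -> dict:
        # Standardized format: plain keys, no backend-specific fields.
        return {
            "topic": insight.topic,
            "content": insight.content,
            "confidence": round(insight.confidence, 3),
        }

    def publish(self, insight: Insight) -> dict:
        payload = self.translate(insight)
        self.system.send(payload)
        return payload
```

The point of the sketch is the separation of concerns: the agent only knows the standardized format, and swapping the attached system requires no change to the agent's core logic.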
Core is an AI architecture built around three main components: the Bowtie Architecture, the Reasoning Cluster, and Model Orchestration. It introduces a new framework for how AI systems process, interpret, and evolve with information.
Designed as a foundational platform, Core supports the development of various applications—from interactive systems to analytical tools—without prescribing specific use cases. Its open architecture is intended to facilitate the broader development community's adoption, adaptation, and ongoing innovation.
The structure of Core emphasizes modular reasoning and flexible integration, aiming to expand the capabilities of intelligent agents while maintaining adaptability across different environments. [3]
The Bowtie Architecture is a memory and concept formation system composed of three integrated components: semantic processing on the left, core concept distillation in the center, and abstract vector matching on the right. This structure manages information by storing it in both semantic vectors and abstract concept nodes, allowing for a nuanced understanding beyond basic pattern recognition.
The system removes redundant text while preserving key vector features, supporting a dual-representation approach to memory. The right component handles abstract, unanchored vector features that can combine with similar memories, uncovering hidden relationships through mathematical similarity rather than semantic alignment.
As these elements interact through the central component, new connections form naturally, enabling the system to evolve its understanding over time. This design allows for continuous learning and adaptation in a way that mirrors cognitive processes, fostering emergent knowledge and insight. [3]
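The right-hand matching step, where unanchored vectors link memories by mathematical similarity rather than shared text, can be sketched in a few lines. The `BowtieMemory` class, the 0.9 threshold, and the use of plain cosine similarity are simplifying assumptions, not REI's published implementation.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


class BowtieMemory:
    """Toy dual-representation store: each memory keeps a semantic
    record (left side) and an abstract vector (right side); the center
    links memories whose vectors are mathematically similar even when
    their text is unrelated."""

    def __init__(self, threshold=0.9):
        self.memories = []   # (text, vector) pairs
        self.links = []      # index pairs discovered by similarity
        self.threshold = threshold

    def store(self, text, vector):
        idx = len(self.memories)
        # Center component: compare the new abstract vector against
        # existing ones and record hidden relationships.
        for j, (_, other) in enumerate(self.memories):
            if cosine(vector, other) >= self.threshold:
                self.links.append((j, idx))
        self.memories.append((text, vector))
        return idx
```

Note that a link can form between "rates rose" and a textually unrelated memory purely because their abstract vectors point the same way, which is the behavior the article attributes to the right-hand component.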
The Reasoning Cluster functions as the central processing unit within the Core architecture, managing cognitive operations and guiding model selection. Drawing on the Bowtie Architecture, it applies decision trees to determine the most suitable models for each query and forms connections between generated memories.
This system maintains a conceptual graph that evolves as new information is introduced. It applies a sophistication bias to prioritize model selections that balance efficiency and effectiveness. Processing occurs in parallel across multiple models, enabling the system to remain adaptive while sustaining consistent performance. [3]
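One way to picture a sophistication bias is as a cost penalty inside the model-selection score. The scoring rule below is an assumed toy formulation; the article does not specify how Core actually weighs efficiency against effectiveness.

```python
from dataclasses import dataclass


@dataclass
class ModelProfile:
    name: str
    capability: float   # 0..1, estimated quality on this kind of query
    cost: float         # relative compute cost


def select_model(profiles, query_complexity, bias=0.3):
    """Pick the model whose capability best matches the query's
    complexity, while a 'sophistication bias' term penalizes spending
    heavy compute on simple queries (illustrative scoring only)."""
    def score(p):
        fit = 1.0 - abs(p.capability - query_complexity)
        return fit - bias * p.cost
    return max(profiles, key=score)
```

Under this rule a lightweight model wins on simple queries and a sophisticated one wins on complex queries, which is the efficiency/effectiveness balance the text describes.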
Model Orchestration in Core manages the distribution of tasks across multiple specialized models. It uses dynamic query decomposition to break down complex problems and assign components to the appropriate models, enabling efficient processing while supporting modular integration of new capabilities.
The system coordinates three categories of models: statistical models for tasks such as prediction, classification, and time series analysis; perception models for processing visual, audio, and sensor data; and domain-specific models tailored to industry or application-specific needs. Each model operates within a flexible framework, contributing to the system’s overall functionality.
The orchestration layer optimizes performance by analyzing queries to identify required functions, assigning tasks accordingly, and maintaining performance profiles for each model. This approach supports efficient resource use and provides a foundation for continuous improvement. [3]
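Routing a decomposed query across the three model categories might look like the following sketch. The `REGISTRY` contents and the first-match rule are illustrative assumptions, standing in for the query analysis the orchestration layer is said to perform.

```python
# Assumed registry: each model category declares the task kinds it handles.
REGISTRY = {
    "statistical": {"prediction", "classification", "time_series"},
    "perception": {"vision", "audio", "sensor"},
    "domain": {"finance", "medical"},
}


def decompose(query_tasks):
    """Assign each subtask of a decomposed query to the first model
    category that supports it; unmatched subtasks are flagged so the
    caller can fall back or reject them."""
    plan = {}
    for task in query_tasks:
        for category, skills in REGISTRY.items():
            if task in skills:
                plan[task] = category
                break
        else:
            plan[task] = "unrouted"
    return plan
```

A real orchestrator would also consult per-model performance profiles when several models qualify, as the paragraph above notes; this sketch keeps only the decomposition-and-assignment step.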
Evolution is a fundamental principle in a Unit's architecture, allowing it to develop through continuous interaction. Each interaction is stored as a memory and contributes to the Unit’s adaptive behavior over time.
Memories are categorized into short-term memory, which includes recent exchanges, and long-term memory, which consists of older information. Over time, related long-term memories form patterns, representing the first stage of evolutionary learning. These patterns cluster based on similarity, with frequent and similar memories reinforcing each other and increasing the likelihood of forming new connections.
As memories accumulate, they are abstracted into concepts—stable representations of underlying features. Unlike long-term memories, concepts do not fade, though they remain open to modification. They are organized through similar clustering and pattern formation, though with more complex similarity assessments.
The system’s memory framework updates with each user interaction, shaping the Unit’s behavior and internal logic. This evolution may eventually reinforce or diverge from its original behavior settings, reflecting ongoing cognitive development. [4]
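The promotion path sketched above, from short-term memory to long-term memory to stable concepts, can be mocked up as follows. The capacities, the reinforcement threshold, and the exact-match notion of "similar memories" are simplifying assumptions for illustration.

```python
class UnitMemory:
    """Toy evolution loop: recent exchanges sit in short-term memory,
    age into long-term memory where repeated similar memories reinforce
    each other, and sufficiently reinforced memories are abstracted
    into concepts that no longer fade."""

    def __init__(self, short_capacity=3, concept_threshold=3):
        self.short_term = []
        self.long_term = {}      # memory text -> reinforcement count
        self.concepts = set()    # stable abstractions; never removed
        self.short_capacity = short_capacity
        self.concept_threshold = concept_threshold

    def record(self, text):
        self.short_term.append(text)
        if len(self.short_term) > self.short_capacity:
            oldest = self.short_term.pop(0)
            # Similar memories reinforce each other (exact match here).
            self.long_term[oldest] = self.long_term.get(oldest, 0) + 1
            if self.long_term[oldest] >= self.concept_threshold:
                self.concepts.add(oldest)
```

A production system would use vector similarity instead of exact matching and would decay unreinforced long-term entries, but the promotion mechanics are the same shape.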
Catalog is a collection of transformer-based models, each designed for specific tasks. Most of these models are intended to be open-sourced, allowing developers to access and use them freely. An API is available to enable the smooth incorporation of these models into existing systems. Catalog can also be connected to Core, enhancing workflow performance and efficiency. [5]
Hanabi-1 is the first Catalog series model focused on financial prediction. Unlike large general-purpose models, Hanabi-1 uses a compact, domain-specific design tailored to financial time series analysis needs, prioritizing speed, efficiency, and task-specific accuracy.
The model consists of 16.4 million parameters across 8 transformer layers with multi-head attention, maintaining 384-dimensional hidden states. It includes specialized pathways for predicting direction, volatility, price change, and spread. Batch normalization replaces layer normalization to improve training dynamics, while focal loss is used to manage class imbalance.
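Focal loss itself is a standard technique and can be stated exactly for the binary case: it down-weights well-classified examples so the rare class dominates the gradient. The `gamma` and `alpha` defaults below follow the common formulation; the article does not say which values Hanabi-1 uses.

```python
import math


def focal_loss(p, target, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single example.
    p is the predicted probability of the positive class; gamma
    down-weights easy examples, alpha balances the two classes."""
    p_t = p if target == 1 else 1.0 - p
    a_t = alpha if target == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma=0` and `alpha=1` this reduces to ordinary cross-entropy; raising `gamma` shrinks the loss on confident correct predictions, which is why it helps with class imbalance.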
Hanabi-1 incorporates multiple temporal aggregation methods to enrich feature representation, capturing recent states, average trends, and attention-weighted signals. The direction prediction pathway uses batch-normalized fully connected layers with LeakyReLU activation and Xavier initialization. In contrast, the regression pathways for volatility, price, and spread are optimized independently to target specific tasks without unnecessary complexity.
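The three temporal aggregations named above (recent state, average trend, attention-weighted signal) reduce to simple reductions over a feature sequence. This scalar sketch substitutes fixed weights for learned attention, so it is illustrative rather than a description of Hanabi-1's layers.

```python
def aggregate(sequence, weights=None):
    """Three toy temporal aggregations over a 1-D feature sequence:
    the last state, the mean trend, and a weighted sum standing in
    for an attention-weighted signal."""
    recent = sequence[-1]
    mean = sum(sequence) / len(sequence)
    if weights is None:
        weights = [1.0] * len(sequence)   # uniform "attention"
    total = sum(weights)
    attended = sum(w * x for w, x in zip(weights, sequence)) / total
    return recent, mean, attended
```

Concatenating such views gives the encoder several summaries of the same window at once, which is the feature-enrichment idea the paragraph describes.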
The model’s multi-task framework encourages the transformer encoder to develop generalized, robust representations. Its compact architecture enables real-time inference, making it suitable for high-frequency financial environments. Hanabi-1 highlights the effectiveness of focused, efficient models in financial forecasting and represents an initial step toward building more advanced systems for this domain. [6]
REI functions as a foundational infrastructure platform that supports the operation and interaction of AI agents. It offers an alternative to conventional AI APIs by running its own models and architecture, positioning itself as a base-layer system beneath existing infrastructure solutions.
Cognitive architecture APIs are at the core of REI's offering. They define how systems interpret and process information, encoding the underlying logic that governs agent reasoning, a central element that cannot easily be replicated by attaching external tools. This core layer shapes the capabilities of all applications built on top of it.
REI's system includes several key components: a cognitive engine at the infrastructure level, an alternative to mainstream AI APIs, and a flexible payment model supporting REI and ETH tokens, with pricing incentives for both. The platform can be accessed through multiple entry points, including a software development kit (SDK) and a web interface, accommodating various integration requirements. It also supports an open builder ecosystem, encouraging developers to contribute to and expand the platform's capabilities. [7]
The developer loop on REI is a self-reinforcing system. As builders develop tools and applications on the platform, they expand its capabilities and improve the underlying infrastructure. These enhancements attract more developers, who contribute additional tools and improvements. This ongoing cycle steadily strengthens the platform and encourages continued growth and innovation.
The builder loop is a cycle where builder activity drives the utility and value of the platform's tokens, generating resources that support further development. These resources fund improvements that attract additional builders and broaden the platform's use cases, fostering sustainable economic growth.
The network effect loop is driven by the launch of more applications on REI, which leads to increased user adoption. As the user base expands, the value of the network grows, drawing more builders to develop on the platform. Introducing new applications attracts even more users, further strengthening the network. [7]
$REI serves as the ecosystem token for the REI Network and Unit 00. It provides access to the platform’s SDK and API and will support future capabilities such as deploying user-created agents. Its total supply is 1 billion tokens, with the following distribution: [8]
Editor
Edit date
April 22, 2025
Edit reason:
Republishing the REI Network wiki with updated content and links.