Data Flow
Message Processing Flow
User Message Submission:
A user sends a message via the chat interface.
Routing via API Gateway:
The API Gateway forwards the message to the backend server for processing.
Hybrid Inference Processing:
The Hybrid Inference Engine selects the appropriate inference backend (OpenAI, Gemini, Anthropic API, local LLM, RAG, or external connector) and processes the message.
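A minimal sketch of how such backend selection might look, assuming simple per-message routing hints; all names (`Message`, `select_backend`, the backend functions) are illustrative, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    text: str
    needs_retrieval: bool = False   # hypothetical routing hint
    prefer_local: bool = False      # hypothetical routing hint

def openai_backend(msg: Message) -> str:
    return f"[openai] reply to: {msg.text}"

def local_llm_backend(msg: Message) -> str:
    return f"[local] reply to: {msg.text}"

def rag_backend(msg: Message) -> str:
    return f"[rag] reply to: {msg.text}"

def select_backend(msg: Message) -> Callable[[Message], str]:
    # Route retrieval-heavy prompts to RAG, privacy-sensitive ones to
    # the local model, and everything else to a hosted API backend.
    if msg.needs_retrieval:
        return rag_backend
    if msg.prefer_local:
        return local_llm_backend
    return openai_backend

def process(msg: Message) -> str:
    return select_backend(msg)(msg)
```

In practice the selection criteria (cost, latency, model capability, connector availability) would be richer, but the dispatch shape stays the same.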
Quality Scoring:
In parallel with response generation, a quality-scoring pass evaluates the message.
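One way to sketch this concurrency with `asyncio`; the two coroutines and the scoring heuristic are stand-ins, not the real inference or scoring calls:

```python
import asyncio

async def generate_response(text: str) -> str:
    await asyncio.sleep(0)              # stand-in for an inference call
    return f"reply to: {text}"

async def score_quality(text: str) -> float:
    await asyncio.sleep(0)              # stand-in for a scoring model
    return min(1.0, len(text) / 100)    # toy heuristic, not the real scorer

async def handle(text: str) -> tuple[str, float]:
    # Run both passes concurrently; scoring never blocks the response.
    response, score = await asyncio.gather(
        generate_response(text), score_quality(text)
    )
    return response, score

response, score = asyncio.run(handle("hello"))
```

The score is retained alongside the response so the reward step below can act on it.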
Response Delivery:
The agent’s response is returned to the chat interface.
Memory Synchronization:
For imported agents, the memory sharing protocol updates the agent’s memory state.
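The memory update could be sketched as a versioned merge, assuming the protocol exchanges keyed memory entries with version counters; the data shape here is an assumption, not the documented wire format:

```python
# Merge remote memory entries into local state, keeping the newer
# version of each key. Entry shape (version, value) is hypothetical.
def sync_memory(
    local: dict[str, tuple[int, str]],
    remote: dict[str, tuple[int, str]],
) -> dict[str, tuple[int, str]]:
    merged = dict(local)
    for key, (version, value) in remote.items():
        if key not in merged or merged[key][0] < version:
            merged[key] = (version, value)
    return merged
```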
Reward Trigger:
High-quality messages trigger token rewards via the Reward Distribution Module.
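The flow ends with a threshold check on the quality score; a minimal sketch, where the threshold, reward size, and function name are all assumptions rather than values from the Reward Distribution Module:

```python
REWARD_THRESHOLD = 0.8   # assumed cutoff for "high-quality"
TOKENS_PER_REWARD = 10   # assumed reward size

def reward_for(score: float) -> int:
    # Hypothetical entry point into the Reward Distribution Module:
    # pay out tokens only when the quality score clears the threshold.
    return TOKENS_PER_REWARD if score >= REWARD_THRESHOLD else 0
```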