An Extensible,
Plugin-Based AI Agent Framework
A ready-to-use, open framework featuring multimodal capabilities, swappable memory infrastructure, and a plugin ecosystem.
MAIAR agents natively support multimodal input and output—text, audio, vision, and beyond. The framework abstracts modality handling to a runtime level capability registry, enabling forward-compatible support for the accelerating scope of multimodal model capabilities without patching the core.
```typescript
import { z } from "zod";

defineCapability({
  id: "multi-modal-text-generation",
  description: "create text using text and images",
  input: z.object({
    prompt: z.string(),
    images: z.array(z.string()).optional()
  }),
  output: z.string()
});
```
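To make the registry idea concrete, here is a minimal sketch of how a runtime-level capability registry could store and dispatch capabilities. The `registerCapability` and `invoke` helpers below are illustrative assumptions, not MAIAR's actual API:

```typescript
// Hypothetical in-memory capability registry — a sketch of the concept,
// not the real MAIAR runtime.
type Capability<I, O> = {
  id: string;
  description: string;
  execute: (input: I) => Promise<O>;
};

const registry = new Map<string, Capability<unknown, unknown>>();

// Store a capability under its id so callers can look it up at runtime.
function registerCapability<I, O>(cap: Capability<I, O>): void {
  registry.set(cap.id, cap as Capability<unknown, unknown>);
}

// Resolve a capability by id and run it against the given input.
async function invoke<I, O>(id: string, input: I): Promise<O> {
  const cap = registry.get(id);
  if (!cap) throw new Error(`Unknown capability: ${id}`);
  return cap.execute(input) as Promise<O>;
}

// Register a toy text-generation capability and call it through the registry.
registerCapability({
  id: "multi-modal-text-generation",
  description: "create text using text and images",
  execute: async (input: { prompt: string; images?: string[] }) =>
    `echo: ${input.prompt}`
});

invoke<{ prompt: string }, string>("multi-modal-text-generation", {
  prompt: "hello"
}).then((out) => console.log(out)); // prints "echo: hello"
```

Because the registry keys capabilities by id rather than hard-coding them, new modalities can be added at runtime without patching the core, which is the forward-compatibility property described above.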
A trigger starts the chain; an executor does the work. MAIAR's runtime lets you declare HTTP routes or hook into native event listeners (Discord, Slack, webhooks, and more), each one dropping a fully typed Context object onto the queue.
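The trigger-to-queue flow can be sketched in a few lines. The `Context` shape and `onHttpRequest` handler below are hypothetical stand-ins for MAIAR's actual types:

```typescript
// Hypothetical trigger → queue flow; shapes are illustrative only.
interface Context {
  source: string;     // where the event came from, e.g. "http" or "discord"
  payload: unknown;   // raw event data for an executor to interpret
  receivedAt: number; // arrival timestamp in ms
}

const queue: Context[] = [];

// A trigger's only job: normalize an incoming event into a typed
// Context and drop it onto the queue for executors to pick up.
function onHttpRequest(body: unknown): void {
  queue.push({ source: "http", payload: body, receivedAt: Date.now() });
}

onHttpRequest({ message: "hi" });
console.log(queue.length); // 1
```

Keeping triggers this thin means every event source, whether an HTTP route or a Discord listener, feeds executors through the same typed queue.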
Executors live inside our dynamic executor library. Tool names and descriptions are generated on the fly with .liquid templates, and the same executor interface can wrap popular AI tooling such as MCP, OpenAI Codex, Claude, or any custom SDK, with no changes required to your agent logic.
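The value of a single executor interface is that the backing tool becomes swappable. The `Executor` interface and `wrapTool` helper below are hypothetical names used to illustrate the pattern, not MAIAR's actual library:

```typescript
// Hypothetical executor interface — illustrative names only.
// The point: one interface can wrap any backing tool (an MCP server,
// OpenAI Codex, Claude, or a custom SDK).
interface Executor {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>) => Promise<string>;
}

// Wrap an arbitrary async function as an executor. Agent logic only
// ever sees the Executor interface, so the backing tool can change
// without touching that logic.
function wrapTool(
  name: string,
  description: string,
  fn: (args: Record<string, unknown>) => Promise<string>
): Executor {
  return { name, description, execute: fn };
}

const search = wrapTool("search", "look things up", async (args) =>
  `results for ${String(args.query)}`
);

search.execute({ query: "maiar" }).then((r) => console.log(r)); // prints "results for maiar"
```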
Browse or publish capabilities with a single command, and supercharge your agent with community tooling.
Automated, on-chain rewards for high-priority issues: no waiting, no ambiguity.
Uranium Corporation's GitHub Action workflow guarantees transparent payouts and prevents duplicate work through an RFC gate. It's also open source and available to any Solana project.
Dive into the docs, join our community spaces, and help shape the next wave of composable AI agents.