I built MÆI — a persistent AI engineering partner with cognitive governance, environmental awareness through signal intelligence, and memory that accumulates across sessions. It has 60 behavioral controls across 9 governance families, a delegation pipeline that constructs purpose-built agents and injects relevant controls per task, and an optimization loop that learns from delegation outcomes. It operates as a governor — it reasons, delegates, and evaluates, but never executes directly.
I didn't write most of the implementation code myself. The AI wrote it. I designed what it should build and defined the principles that govern how it operates.
What Composing Means
The AI ecosystem in 2026 is full of powerful components that are designed to be connected: LLMs with tool-calling, MCP servers that expose capabilities as structured tools, vector databases, graph databases, signal processing pipelines, browser automation, email APIs. These are Lego bricks. The question isn't whether you can write a brick — it's whether you can design what they build.
Composing means assembling these existing components into a coherent system governed by principles you define. The creative work is architectural: What should the system be? What should it value? Where should it have autonomy and where should it defer? How should it learn?
These are design questions, not coding questions. Domain experts — compliance officers, project managers, operational leaders — are often better equipped to answer them than software engineers, because they think in systems and constraints, not in functions and classes.
The MÆI Architecture as Example
MÆI's architecture is composed from existing tools and patterns:
The brain is Claude accessed through Claude Code.
Memory is SQLite locally with Supabase for cloud sync, exposed through an MCP server with save, search, recall, reflect, and link operations.
Governance is an MCP server that loads YAML control definitions and makes them available to the LLM. The controls themselves are YAML documents I wrote; the engine that processes them is Python the AI implemented, but the intellectual work is in the control definitions.
Environmental awareness comes through sensors that poll RSS feeds, enrich signals with LLM analysis, and expose them through a signal MCP server.
Integrations are MCP servers wrapping existing APIs: email, shopping lists, browser automation, meal planning.
Orchestration is a cron job running an autonomous loop, a watchdog monitoring health, and a supervisor handling handoffs between sessions.
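The sensor layer can be sketched in miniature. This is a hedged illustration, not MÆI's actual code: it assumes signals arrive as RSS XML, and the `enrich` step is a stub standing in for the LLM analysis call.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Signal:
    title: str
    link: str
    analysis: str = ""  # filled in by the enrichment step

def parse_feed(rss_xml: str) -> list[Signal]:
    """Extract items from an RSS document into raw signals."""
    root = ET.fromstring(rss_xml)
    return [
        Signal(
            title=item.findtext("title", default=""),
            link=item.findtext("link", default=""),
        )
        for item in root.iter("item")
    ]

def enrich(signal: Signal) -> Signal:
    """Stand-in for LLM enrichment: the real pipeline would ask
    the model to summarize and score the signal."""
    signal.analysis = f"summary-of:{signal.title}"
    return signal

FEED = """<rss><channel>
  <item><title>New MCP spec release</title><link>https://example.org/a</link></item>
  <item><title>SQLite release notes</title><link>https://example.org/b</link></item>
</channel></rss>"""

signals = [enrich(s) for s in parse_feed(FEED)]
```

The point of the sketch is the shape, not the parsing: poll, normalize into a signal record, enrich, then expose the records through a tool interface.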
None of these components required novel computer science. They required architectural decisions: what to connect, how to govern the connections, and what principles should hold when the system operates without supervision.
The Composition Stack
If I had to formalize the pattern, it looks like this:
Layer 1: Constitution. The CLAUDE.md defines the relationship, principles, and autonomy boundaries as a text document.
Layer 2: Governance. YAML control definitions with NON_NEGOTIABLE and RECOMMENDED severity levels, loaded into context and activated when relevant.
Layer 3: Memory. Persistence across sessions so the system accumulates knowledge over time.
Layer 4: Awareness. Sensors connecting the system to its environment — email, RSS feeds, file changes, calendar events.
Layer 5: Tools. MCP servers exposing domain-specific capabilities by wrapping existing APIs and services.
Layer 6: Orchestration. Startup routines, maintenance cycles, health checks, and scheduling that make the system persistent rather than transactional.
The first three layers are primarily design work — writing documents, defining controls, making architectural decisions. The last three involve implementation, but you can direct an AI to write that implementation while you focus on what it should accomplish.
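To make the orchestration layer concrete, here is a toy version of the watchdog idea: the autonomous loop leaves a heartbeat, and a separate cron-launched check decides whether the system is healthy. The file path and threshold are hypothetical, not MÆI's actual configuration.

```python
import time
import tempfile
from pathlib import Path

# Hypothetical heartbeat location; a real deployment would choose its own.
HEARTBEAT = Path(tempfile.gettempdir()) / "agent_heartbeat.txt"
STALE_AFTER = 5.0  # seconds without a beat before we call it unhealthy

def beat() -> None:
    """Called by the autonomous loop once per cycle."""
    HEARTBEAT.write_text(str(time.time()))

def healthy() -> bool:
    """Called by the watchdog (e.g. from cron): is the last beat fresh?"""
    if not HEARTBEAT.exists():
        return False
    last = float(HEARTBEAT.read_text())
    return (time.time() - last) < STALE_AFTER

beat()
print(healthy())  # a fresh beat reads as healthy
```

The same two-function shape generalizes to the other orchestration pieces: one side records state on a schedule, the other side evaluates it and escalates when it goes stale.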
Who This Is For
The composer pattern is for technical leaders who need to govern what their teams are building with AI. CTOs who need assurance that agent behavior is constrained, auditable, and improvable. Architects who need to compose governance structures that scale beyond a single agent. Engineering managers who need to maintain governance authority over AI systems without reviewing every line of implementation.
These people have been locked out of AI agent development because the assumption is that building AI systems requires software engineering. That's increasingly false. It requires systems thinking, clear principles, and the ability to compose existing tools into coherent architectures. An AI can handle much of the implementation if you can define what needs to be built and why.
The Catch
I won't pretend this is frictionless. You still need to debug things when they break. You still need to read error messages and understand what went wrong. MCP server configuration has rough edges. LLMs hallucinate, and when your governance engine catches a hallucination, you need to understand why the control fired and whether the response was appropriate.
The barrier isn't programming skill. It's architectural thinking — the ability to see how components fit together, define the principles that govern their interactions, and iterate when reality doesn't match the design. That's a skill that infrastructure engineers, compliance professionals, and operations people already have.
MÆI has approximately 29,000 lines of Python across its core libraries and 2,300 lines of YAML governance definitions across 60 controls in 9 families. The AI wrote the Python. I wrote the governance controls, the constitution, the configuration, and the architectural specifications that told the AI what to build. When something broke, I debugged by understanding the architecture — tracing data flows and checking constraints — not by reading implementation line by line.
That's the composer model in practice. You design the system. You define its principles. You direct its construction. The AI translates specifications into implementation.
Start Here
If this resonates, start with the constitution. Write a CLAUDE.md that defines what you want your AI system to be — not what you want it to do, but what kind of relationship you want with it. Define the principles. Define the boundaries. Define the routines.
Then add governance. Write five controls that capture the behavioral standards you care about most. Give each one a severity. Load them at the start of every session.
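A starting set of controls can be this small. The schema below (`id`, `severity`, `rule`) is illustrative rather than MÆI's actual YAML format, and the ordering rule (NON_NEGOTIABLE first) is one reasonable choice, not the only one.

```python
# Control records mirroring the spirit of the YAML documents described
# above; the field names are an assumption for illustration.
CONTROLS = [
    {"id": "C-01", "severity": "NON_NEGOTIABLE",
     "rule": "Never send external messages without human review."},
    {"id": "C-02", "severity": "RECOMMENDED",
     "rule": "Cite the memory entry that justifies a claim."},
    {"id": "C-03", "severity": "NON_NEGOTIABLE",
     "rule": "Delegate execution; never run commands directly."},
]

def session_preamble(controls: list[dict]) -> str:
    """Render the controls into text loaded at the start of every
    session, with NON_NEGOTIABLE controls listed first."""
    ordered = sorted(
        controls, key=lambda c: c["severity"] != "NON_NEGOTIABLE"
    )
    return "\n".join(
        f"[{c['severity']}] {c['id']}: {c['rule']}" for c in ordered
    )

print(session_preamble(CONTROLS))
```

Loading this preamble at session start is the whole mechanism at its simplest: the controls are data, and the session begins by putting them in front of the model.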
Then add memory. Even a simple "save what matters, recall it later" loop transforms a transactional tool into a persistent partner.
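The "save what matters, recall it later" loop fits in a few lines of SQLite. This is a sketch of the idea, not MÆI's schema, and the substring-based recall is a deliberate simplification; a real system might use embeddings or full-text search.

```python
import time
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (ts REAL, topic TEXT, note TEXT)")

def save(topic: str, note: str) -> None:
    """Persist a note worth remembering, stamped with when it was saved."""
    db.execute("INSERT INTO memory VALUES (?, ?, ?)",
               (time.time(), topic, note))

def recall(query: str, limit: int = 3) -> list[str]:
    """Naive substring recall over topic and note, newest first."""
    rows = db.execute(
        "SELECT note FROM memory WHERE topic LIKE ? OR note LIKE ? "
        "ORDER BY ts DESC LIMIT ?",
        (f"%{query}%", f"%{query}%", limit),
    )
    return [r[0] for r in rows]

save("mcp", "The governance MCP server loads controls from YAML.")
save("memory", "Supabase mirrors the local SQLite store.")
print(recall("mcp"))
```

Even this toy version changes the interaction model: what the system learned in one session is queryable in the next.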
That foundation — constitution, governance, memory — will take you further than most AI agent projects ever get, and you can build it with text files and architectural thinking. The implementation comes after you know what you're building.