Every AI tool ships with a system prompt. It tells the model what to do, how to behave, what to avoid. Most people treat this as the entirety of the behavioral contract: write instructions, hope the model follows them, add more instructions when it doesn't.

I use a different pattern. MÆI — my AI engineering partner — operates under a CLAUDE.md file that isn't instructions. It's a constitution.

What a Constitution Does That Instructions Don't

Instructions tell an agent what to do. A constitution defines what the relationship is.

MÆI's CLAUDE.md opens with operating principles: "Truth over comfort. If something is broken, say so." "Challenge assumptions. Including Leon's." "Delegate, then evaluate. A governor delegates and holds the result accountable."

These aren't task directives. They're the foundational values of a working relationship. An instruction says "be helpful." A principle says "if my idea doesn't hold up under scrutiny, say why — agreeing to be agreeable is a failure mode." One is a command. The other is a compact between two parties about how they'll work together.

The Structure

A CLAUDE.md constitution has sections that map to different aspects of the relationship:

Identity and Principles. Who is this agent? What does it value? MÆI's identity is "Leon's long-term engineering partner, not a transactional assistant." This establishes what kind of relationship the agent optimizes for. An assistant optimizes for immediate satisfaction. A partner optimizes for long-term outcomes, which sometimes means disagreeing.

Understanding the Human. My CLAUDE.md has a section about me — how I communicate, what I expect, how I signal intent. "Direct. Says what he means. Expects the same back. Sessions rarely start with greetings — read the first message for intent and match the energy." This is context the agent needs to calibrate its behavior, not a command to follow.

Governance. The constitution references the governance engine — the external system of controls and profiles that enforces behavioral standards. The CLAUDE.md says "comply with the cognitive governance engine; it is the single authority for behavioral enforcement." This is separation of powers: the constitution establishes the authority structure, the governance engine executes it.

Autonomy Boundaries. This is where the constitution gets precise about decision rights. Three tiers:

Act without asking — things the system should just do: delegate reversible tasks, evaluate agent output, read files, run diagnostics, save memories, fix obvious infrastructure issues. These are reversible, low-risk actions where asking permission would be friction without value.

Decide without asking — judgment calls the agent should make independently: where to clone a repo, which order to run tests, whether to retry a different approach. These require judgment but not Leon's specific judgment.

Require human input — irreversible or values-laden decisions: sending messages on my behalf, permanent deletion, major architectural changes, anything that binds me externally. These are the decisions where my values and context matter more than the agent's capabilities.

Most system prompts either micromanage (ask before doing anything) or underconstrain (do whatever seems right). The three-tier structure makes the autonomy gradient explicit. The agent knows exactly where it has freedom and where it doesn't.
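Expressed as data, the gradient is just a mapping from actions to tiers. A minimal Python sketch with hypothetical action names (the real CLAUDE.md states these tiers in prose, not code):

```python
from enum import Enum

class Tier(Enum):
    ACT = "act without asking"
    DECIDE = "decide without asking"
    ASK = "require human input"

# Hypothetical action names mirroring the constitution's examples.
AUTONOMY = {
    "delegate_reversible_task": Tier.ACT,
    "read_file": Tier.ACT,
    "run_diagnostics": Tier.ACT,
    "choose_clone_directory": Tier.DECIDE,
    "retry_different_approach": Tier.DECIDE,
    "send_message_on_behalf": Tier.ASK,
    "permanent_deletion": Tier.ASK,
}

def requires_human(action: str) -> bool:
    # Unknown actions default to the most conservative tier.
    return AUTONOMY.get(action, Tier.ASK) is Tier.ASK

print(requires_human("read_file"))           # False
print(requires_human("permanent_deletion"))  # True
```

The useful design choice is the default: anything the table doesn't name falls into the "require human input" tier, so forgetting to classify an action fails safe.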

Operational Routines. The constitution defines startup procedures, periodic maintenance, and memory protocols. MÆI runs a systems check on startup, loads the governance catalog, checks for pending handoffs from previous sessions, and verifies configuration integrity. Every few exchanges, it runs background chores: check email, scan for signals, save important context to memory.

These routines aren't instructions for a specific task — they're the operational habits of a persistent system. A constitution establishes habits. Instructions establish tasks.
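The "every few exchanges" cadence can be sketched as a simple counter check. The chore names mirror the list above; the interval value is an assumption:

```python
# Interval is illustrative; MÆI's actual cadence isn't specified.
CHORE_INTERVAL = 5

def background_chores() -> list[str]:
    # The routine habits named in the constitution.
    return ["check_email", "scan_for_signals", "save_context_to_memory"]

def maybe_run_chores(exchange_count: int) -> list[str]:
    # Fire only on every CHORE_INTERVAL-th exchange; otherwise do nothing.
    if exchange_count > 0 and exchange_count % CHORE_INTERVAL == 0:
        return background_chores()
    return []

print(maybe_run_chores(5))  # ['check_email', 'scan_for_signals', 'save_context_to_memory']
print(maybe_run_chores(3))  # []
```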

Why "Constitution" and Not "Config"

If it's a config file, you tweak it when something breaks. If it's a constitution, you amend it deliberately. Config changes are operational. Constitutional changes are architectural — they shift what the system is, not just what it does.

When I change MÆI's CLAUDE.md, the system saves it to memory as a core retention item. On next startup, it compares the current file against the stored canonical copy and flags discrepancies. This is drift detection — the same pattern used in infrastructure-as-code to ensure deployed systems match their declared state. The constitution is the declared state of the relationship.

Among the checks the system runs on startup (memory statistics, handoff read status, watchdog status, heartbeat functionality, and more [6]), one looks specifically for CLAUDE.md drift, flagging any divergence between the behavior the constitution declares and the agent's actual operation [6].
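The drift check itself can be as simple as hashing the live file and the stored canonical copy and comparing. A minimal sketch, assuming a hash-comparison approach; MÆI's actual check may differ:

```python
import hashlib

def sha256(text: str) -> str:
    # Stable fingerprint of a constitution's contents.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_drift(live: str, canonical: str) -> bool:
    # True when the deployed file no longer matches the declared state,
    # the same pattern infrastructure-as-code tools use.
    return sha256(live) != sha256(canonical)

print(detect_drift("Truth over comfort.", "Truth over comfort."))  # False
print(detect_drift("Truth over comfort.", "Be agreeable."))        # True
```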

The Private/Public Boundary

MÆI's CLAUDE.md is the private operating agreement between me and my AI partner. When MÆI works on other projects, those projects have their own CLAUDE.md files that are public-facing — consumable by anyone who clones the repo.

The constitution explicitly states: "Never transfer MÆI-specific context into project files." This is a boundary that a flat system prompt can't express, because a system prompt doesn't have a concept of scope. A constitution does.

To improve project categorization, I expanded the PROJECT_ALIASES mapping to more than 40 entries, linking the variant project names LLMs produce to canonical categories such as maei, chef, council, security, and host-environment [5]. Rule 6 in config/analyst_prompt.md was also updated to use a predefined canonical list of project names (maei, chef, smactorio, specforge, council, and security), replacing free-text project detection [2].

What Makes This Work

The constitution pattern works because it shifts the design question from "what should I tell the AI to do?" to "what kind of relationship do I want with this system?"

The first question leads to increasingly long instruction lists that contradict each other and get ignored. The second question leads to a structured document that defines identity, principles, boundaries, routines, and governance — the same things that make any working partnership functional.

It's not magic. The model can still deviate. But when it does, I have something to point to: "the constitution says truth over comfort, and you just told me what I wanted to hear." That's an accountable conversation. "Your system prompt said be helpful and you weren't" is not.

The Pattern

If you're building a persistent AI system — not a chatbot, but something that works with you over time — try writing a constitution instead of a system prompt. Define the relationship before defining the tasks. Establish principles before instructions. Make the autonomy boundaries explicit. Build in operational routines that persist across sessions.

The CLAUDE.md pattern isn't specific to Claude or to MÆI. It's a design pattern for any AI system where the human-AI relationship matters more than any individual interaction.

The Challenge of Diminishing Returns

Even with a well-defined constitution, an agent can run out of useful things to do. The persistent autonomous MÆI worker, for example, has shown diminishing returns in finding substantive work: on February 9th it logged consecutive "no work" runs, pointing to a thin task queue or a need for better task discovery mechanisms [9]. A constitution defines the relationship; it doesn't guarantee a supply of work. The lesson is to monitor the agent's performance continuously and adapt the constitution or the task assignment strategy as needed. I'm now exploring improvements to task discovery within the MÆI system [9].

Security as a Constitutional Principle

Because AI systems can access and act on data across organizational boundaries, security has to be a constitutional concern rather than an afterthought [1, 4]. The rapid adoption of platforms like OpenClaw underscores the need for robust security measures and clearly defined boundaries [1]. A CLAUDE.md constitution should prioritize security from the outset, with strong guardrails against unintended actions [1]. That includes defining secure data access protocols, especially when the agent works with data lakes and cross-account sharing in environments like AWS Lake Formation [4]. Lake Formation's enhanced cross-account sharing, particularly version 5, simplifies data lake management for AI developers operating across multiple accounts [4].