Canopy Architecture
[CONVICTION]
Canopy is the coordination, planning, scheduling, and proof layer for bounded work across the Mesocosm stack. It sits between high-level opportunity shaping and low-level execution placement. Most agent systems and workflow tools collapse ideation, planning, decomposition, execution, coordination, evidence collection, and proof into one opaque loop. That collapse causes familiar failures: freeform ideas get turned into execution too early, human feedback is lost across agent hops, locality and sovereignty are treated as deployment details instead of design constraints, and raw evidence leaks into control systems.
The central architectural principle:
Freeform reasoning is allowed. Runtime truth must be explicit, reviewable, and durable.
Where Canopy Sits in the Stack
[CONVICTION]
```mermaid
graph LR
    AR["Arena<br/>What should exist?<br/>Opportunity routing"] --> CA["Canopy<br/>How does work get<br/>planned and proved?"]
    CA --> DIR["Directory<br/>Who can do it?<br/>Capability matching"]
    DIR --> OG["OpenGrid<br/>Where does it run?<br/>Execution placement"]
    OG --> MY["Mycel<br/>Portable proofs<br/>Verification artifacts"]
    MY -.->|"proof feeds back<br/>into opportunity"| AR
    style AR fill:#e74c3c,color:#fff
    style CA fill:#27ae60,color:#fff
    style DIR fill:#3498db,color:#fff
    style OG fill:#f39c12,color:#fff
    style MY fill:#9b59b6,color:#fff
```
The Mesocosm ecosystem separates concerns cleanly across five systems:
| System | Responsibility |
|---|---|
| Arena | Routes opportunities -- what should exist? |
| Canopy | Routes work -- how does bounded work get planned, assigned, reviewed, proved? |
| Directory | Resolves who can do it -- participant and capability matching |
| OpenGrid | Routes execution -- which node runs the work, locality-aware placement |
| Mycel | Carries proof -- portable verification artifacts |
Canopy is not the low-level execution mesh, the robot runtime, the GPU scheduler, or the local telemetry store. It is a control and proof kernel. The distinction matters: sub-10ms physical control loops, raw audio/video processing, and mesh transport stay outside Canopy. Long-running coordination, review boundaries, and proof surfaces live inside it.
The Three Networks
[CONVICTION]
The architecture becomes clearer through the three-network model. The same challenge often spans all three, and Canopy coordinates across them without flattening them into one undifferentiated agent workflow.
Discovery network -- explore hypotheses, generate knowledge, produce methods and findings. Research programs, auto-research loops, experimentation around pedagogy or media formats.
Agency network -- decide, coordinate, negotiate, approve, and act. Local authority approvals, civic coordination, teacher and operator cells, product decision-making. The participants here are people and institutions, not compute nodes.
Infrastructure network -- provide compute, storage, deployment, sensors, connectivity, and runtime substrate. OpenGrid nodes, facility edge clusters, monitoring systems.
Core Object Hierarchy
[EVIDENCE]
```mermaid
graph TD
    ID["Idea / Discussion<br/>freeform, not runtime truth"] --> BD["ProductBriefDraft / ThesisDraft<br/>audience, north star, constraints"]
    BD --> PR["Program<br/>long-lived mission or initiative"]
    PR --> CH["Challenge<br/>bounded objective with proof surface"]
    CH --> WP["WorkPackage<br/>major bounded stream"]
    WP --> TK["Tasklet<br/>smallest executable unit"]
    ID -.->|"increasing<br/>boundedness"| TK
    style ID fill:#bdc3c7,color:#2c3e50
    style BD fill:#95a5a6,color:#fff
    style PR fill:#7f8c8d,color:#fff
    style CH fill:#e74c3c,color:#fff
    style WP fill:#c0392b,color:#fff
    style TK fill:#922b21,color:#fff
```
The object hierarchy is the backbone. Each level adds boundedness and proof surfaces:
Idea / Discussion -- not runtime truth. Chat, meeting notes, rough specs. Freeform and exploratory.
ProductBriefDraft / ThesisDraft -- sits above challenge level. Captures audience, north star, success criteria, constraints, intended wedge. More structured than ideation but not execution-ready.
Program -- a long-lived mission or initiative containing multiple challenges. "Build a voice-native education product." "Restore water health in a region." Programs are enduring.
Challenge -- a bounded objective with a clear proof surface. "Ship the first Bangalore pilot of the education app." "Restore a specific lake to a measurable threshold." Challenges must be bounded, executable, reviewable, and provable.
WorkPackage -- a major bounded stream within a challenge. Smaller than challenges, larger than individual steps.
Tasklet -- the smallest bounded executable unit. What maps cleanly to bounded execution.
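A minimal sketch of this hierarchy as nested control objects, each level adding a proof surface. The type and field names here are illustrative assumptions, not Canopy's actual schema:

```typescript
// Illustrative sketch only -- names and fields are assumptions, not Canopy's schema.
type ProofSurface = { claim: string; evidenceKinds: string[] };

interface Idea        { kind: "idea"; notes: string }                 // freeform, not runtime truth
interface BriefDraft  { kind: "brief"; audience: string; northStar: string; constraints: string[] }
interface Program     { kind: "program"; mission: string; challenges: Challenge[] }
interface Challenge   { kind: "challenge"; objective: string; proof: ProofSurface; workPackages: WorkPackage[] }
interface WorkPackage { kind: "work-package"; stream: string; tasklets: Tasklet[] }
interface Tasklet     { kind: "tasklet"; step: string; lane: "ai" | "human" | "human+ai" }
```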
The Four Kinds of Routing
[REFRAME]
One of the biggest sources of architectural confusion is the word "routing." Canopy distinguishes four distinct routing problems:
- Opportunity routing (Arena) -- which problem area or challenge space does this belong to?
- Work routing (Canopy) -- given a bounded work object, which lane should handle it? AI implementation, human authority, human+AI review, deployment, local operator.
- Participant matching (Directory) -- which actual person, team, institution, or node can satisfy the lane requirements? This is not mesh routing.
- Execution routing (OpenGrid) -- where exactly does the chosen execution happen in the runtime fabric? Which edge node, which district cluster, path failover.
Without this split, Canopy tries to become a mesh router, OpenGrid gets overloaded with semantic planning, and human authority matching gets confused with network path selection.
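To make the second kind concrete, here is a hedged sketch of work routing: choosing a lane for an already-bounded work object. The lane names mirror the list above; the policy fields on the work object are assumptions for illustration:

```typescript
// Sketch: lane selection for a bounded work object. Lane names follow the text;
// the policy flags are hypothetical.
type Lane = "ai-implementation" | "human-authority" | "human-ai-review" | "deployment" | "local-operator";

interface BoundedWork {
  requiresExternalApproval: boolean; // e.g. a civic authority must sign off
  isDeployment: boolean;             // placing or commissioning a runtime
  localityRestricted: boolean;       // must stay with a local operator
  requiresHumanReview: boolean;      // AI output needs a human reviewer
}

function routeWork(w: BoundedWork): Lane {
  if (w.requiresExternalApproval) return "human-authority";
  if (w.isDeployment) return "deployment";
  if (w.localityRestricted) return "local-operator";
  if (w.requiresHumanReview) return "human-ai-review";
  return "ai-implementation";
}
```

Nothing in this function knows about specific people or network paths; participant matching and execution routing happen downstream in Directory and OpenGrid.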
Planning State Machines
[CONVICTION]
Canopy uses several state machines rather than one giant prompt loop. Each boundary needs different review semantics, different proof surfaces, different kinds of questions.
The execution shape for planning is not "chat -> model writes truth directly." It is:
```
server-owned session state
  → pending action
  → client claims action
  → client handles bounded step
  → client emits structured runtime events
  → server reduces events into canonical state
  → next action is queued
```
This pattern matters because chat is a user interface, not the source of truth. The model should not directly mutate runtime state. Stalled or conflicting planning actions need reclaiming. The same session may be driven by a person, an AI agent, or a cell over time.
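A minimal sketch of that claim/emit/reduce shape, assuming the server owns a single reducer over canonical session state. Action, event, and field names are illustrative:

```typescript
// Sketch of the loop: clients claim actions and emit events; only the server-side
// reducer produces canonical state. All names here are assumptions.
type ActionStatus = "pending" | "claimed" | "done";

interface PendingAction { id: string; kind: string; status: ActionStatus; claimedBy?: string }
interface RuntimeEvent  { actionId: string; type: "artifact-proposed" | "step-failed"; payload: unknown }

interface SessionState {
  actions: PendingAction[];
  artifacts: unknown[]; // review artifacts awaiting human confirmation
}

// Clients never mutate SessionState directly; they only emit RuntimeEvents.
function reduce(state: SessionState, event: RuntimeEvent): SessionState {
  const actions = state.actions.map(a =>
    a.id === event.actionId ? { ...a, status: "done" as const } : a
  );
  const artifacts = event.type === "artifact-proposed"
    ? [...state.artifacts, event.payload]
    : state.artifacts;
  return { actions, artifacts }; // the scheduler queues the next action from this reduced state
}
```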
Each state machine emits a visible review artifact before confirmation -- brief review, challenge draft, work-package proposal, tasklet proposal. These artifacts make the planning state inspectable by humans, create stable checkpoints for revision and handoff, and separate proposal quality from runtime truth.
Knowledge, Memory, State, and Proof
[CONVICTION]
Canopy distinguishes four layers that most systems collapse:
| Layer | Examples | What It Is |
|---|---|---|
| Raw evidence | Transcripts, audio/video, field logs, telemetry | Should remain local or vault-backed |
| Compiled knowledge | Challenge summaries, program notes, lessons learned, operator briefs | Derived, queryable, accumulative |
| Operational state | Pending actions, queues, active runs, leases, review state | Authoritative runtime truth |
| Proof | Approval receipts, commissioning receipts, outcome proofs | Portable trust layer via Mycel |
Without this separation, raw evidence leaks into control systems, prose summaries get mistaken for canonical state, and proof becomes just another text note.
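One way to keep the layers from collapsing is to give each one a distinct record type, so raw evidence can only ever be referenced, never embedded. A sketch with assumed names:

```typescript
// Sketch: one record type per layer (names are assumptions).
interface EvidenceRef      { vaultId: string; sha256: string }      // raw bytes stay in the local vault
interface CompiledNote     { topic: string; summary: string; sources: EvidenceRef[] }
interface OperationalState { pendingActions: string[]; activeLeases: string[]; reviewQueue: string[] }
interface ProofReceipt     { claim: string; evidence: EvidenceRef[]; issuedBy: string; signature: string }
```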
Locality and Sovereignty
[CONVICTION]
Locality is a first-class planning constraint, not deferred to "ops later." When the challenge is "clean a lake in Bangalore," the system defaults toward local operators, local institutions, local evidence custody, and local compute -- widening outward only if policy allows. For physical AI deployments: personal node, facility edge, district cluster, federation. These are explicit envelopes, not accidental deployment outcomes.
This is the same principle the Mesocosm book derives from nature's architecture: distributed systems work because locality is a design constraint, not an optimization target.
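A hedged sketch of that widening behavior: start with the most local envelope and only expand as far as policy allows. The envelope names come from the text; the function and policy parameter are assumptions:

```typescript
// Sketch: candidate placements widen outward only to the policy-allowed envelope.
const ENVELOPES = ["personal-node", "facility-edge", "district-cluster", "federation"] as const;
type Envelope = typeof ENVELOPES[number];

function placementCandidates(maxAllowedByPolicy: Envelope): Envelope[] {
  // Most-local first; never propose anything wider than policy permits.
  return ENVELOPES.slice(0, ENVELOPES.indexOf(maxAllowedByPolicy) + 1);
}

// placementCandidates("district-cluster")
//   -> ["personal-node", "facility-edge", "district-cluster"]
```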
Packs and Adapters
[EVIDENCE]
Packs define meaning and constraints. Adapters connect bounded work to the outside world. Their separation avoids binding domain logic to one runtime or transport path.
Domain packs encode domain assumptions (education, civic operations, physical AI, software delivery). Policy packs encode rules (child-safety review required, external approval required, locality restriction). Proof packs encode what counts as sufficient proof (learning claim proof, operational health proof). Deployment packs encode runtime assumptions (personal node, OpenGrid edge, local facility).
Ingress adapters define how control objects enter (GitHub issue, local control file, session manifest). Execution adapters define how work runs (AI worker, human, human+AI, OpenGrid remote node). Evidence adapters normalize evidence types (transcript, telemetry, approval receipt).
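A sketch of how that separation might look in code, with packs as declarative constraint objects and adapters as small interfaces. All names below are assumptions:

```typescript
// Sketch: packs declare meaning and constraints; adapters declare I/O. Names assumed.
interface PolicyPack     { id: string; requiredReviews: string[]; localityRestriction?: string }
interface ProofPack      { id: string; isSufficient: (evidenceKinds: string[]) => boolean }
interface DeploymentPack { id: string; runtime: "personal-node" | "opengrid-edge" | "local-facility" }

interface IngressAdapter   { source: "github-issue" | "control-file" | "session-manifest"; toControlObject(raw: unknown): unknown }
interface ExecutionAdapter { lane: "ai" | "human" | "human+ai" | "opengrid-node"; run(work: unknown): Promise<unknown> }
interface EvidenceAdapter  { kind: "transcript" | "telemetry" | "approval-receipt"; normalize(raw: unknown): unknown }
```

Because domain logic lives in packs and transport lives in adapters, the same policy pack can apply whether work arrives as a GitHub issue or a session manifest.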
Use-Case Mappings
[STORY]
Product building (education app): ideation -> brief/thesis -> bounded challenge ("ship one pilot slice for named audience and geography") -> work-package proposal -> tasklet proposal -> execution -> proof of bounded pilot outcome. Canopy preserves planning boundaries and avoids mistaking the whole product program for one executable challenge.
Civic operations (clean a lake): program -> challenge with measurable threshold -> work packages including authority approvals as first-class objects -> participant matching with local operator and civic liaison cells -> execution with local custody -> proof not just of "was work performed" but "did the lake improve measurably."
Physical AI: program -> bounded deployment challenge -> deployment prep, runtime placement, validation, operator training, operational health monitoring -> commissioning proof separated from operational proof. Sub-10ms reflex loops stay outside Canopy while it supervises the long-running mission.
Discovery/research: brief with research thesis -> bounded hypothesis investigation -> literature compilation, experiment design, run, analyze, synthesize -> compiled knowledge that accumulates rather than being lost across runs -> findings as reviewable, provable outputs.
Three-Plane Separation
[CONVICTION]
The core architectural discipline across the entire Mesocosm stack: data stays local, control coordinates, proofs travel. Three planes with five specific crossing points where the real discipline lives:
Crossing 1: Structured evidence bundles (not raw data) feed into Canopy's execution context. Raw transcripts, sensor data, and telemetry stay in the local vault.
Crossing 2: Evidence adapter capabilities shape what route plans are possible -- you cannot require latency proof if you have no latency adapter. The data plane constrains the control plane.
Crossing 3: The vault boundary -- only hashes and references cross into verification. Raw evidence stays local. This is the privacy architecture.
Crossing 4: The verification gate is where the control plane produces proof plane objects. This is where bounded work becomes verified outcomes.
Crossing 5: Resolved outcomes become Mycel-portable receipts that can travel without carrying raw evidence. Proofs are self-contained attestations.
The recovery loop feeds degraded or failed work back from operator surfaces into scheduling -- it does not escape the control plane.
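Crossings 3 and 5 can be sketched concretely, assuming Node's built-in crypto module: only a hash and a vault reference cross into verification, and the receipt that travels carries references rather than raw evidence. Everything below is illustrative, not Canopy's actual API:

```typescript
import { createHash } from "node:crypto";

// Crossing 3 (sketch): raw bytes stay in the local vault; only a reference crosses.
interface VaultRef { vaultId: string; sha256: string }

function admitEvidence(rawBytes: Buffer, vaultId: string): VaultRef {
  const sha256 = createHash("sha256").update(rawBytes).digest("hex");
  // The raw bytes are persisted to the local vault here (not shown); only this reference leaves.
  return { vaultId, sha256 };
}

// Crossing 5 (sketch): a resolved outcome becomes a portable receipt that references,
// but never embeds, its evidence.
interface OutcomeReceipt { claim: string; evidence: VaultRef[]; verifiedAt: string }

function toReceipt(claim: string, evidence: VaultRef[]): OutcomeReceipt {
  return { claim, evidence, verifiedAt: new Date().toISOString() };
}
```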
Canopy is the narrow-waist coordination and proof kernel. It does not absorb the mesh, the vault, or the robot runtime. It sits between problem formation (Arena) and execution substrates (OpenGrid/human/AI lanes), keeping long-running programs coherent through bounded control objects, scheduled work packages, and accumulated proof.
The Canonical Architecture: Four Networks + Arena + Meso Valley
[EVIDENCE]
The canonical technical architecture positions Canopy within a larger system of four persistent networks, Arena as the cross-network challenge graph, and Meso Valley as the local colony model.
The four persistent networks:
| Network | Primary Function | How It Scales |
|---|---|---|
| Infrastructure | Proof grammar, runtime, facility-node substrate, federation | Shared protocols and operator-contributed nodes |
| Discovery | Distributed research, open designs, field science, federated learning | Global artifact sharing and local evidence custody |
| Capability | Human formation, education, operator training, proof-of-learning | AI-heavy scaffolding and proof-backed capability graphs |
| Capital/Ownership | Commons funding, challenge pools, asset vehicles, payout rules | Template structures and receipt-based cashflow |
Arena translates diffuse desire and institutional demand into structured challenge objects. It routes opportunities -- what should exist?
Meso Valley is the local colony model -- the district embodiment of all networks in a specific place, with operator schools and institutional anchors. It scales through replication via local operators and place-specific playbooks.
Core invariants: Global shared rails, local deployment. Evidence stays local; proofs travel. Open kernel, competitive edges. Polycentric mesh only where justified. Stewardship first, capture never.
Micronode types instantiate the infrastructure network: neighborhood compute racks (MIP-CMP), microschool nodes (MIP-EDU), microclinic nodes (MIP-HLT), microfactory cells (MIP-MFG, MIP-SCM), and microgrid/watershed nodes (MIP-ENV, MIP-ENR). Each specifies a physical boundary, device identities with calibration chains, installed MIPs, retention/disclosure rules, and ownership descriptors with payout policies.
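As a sketch, a micronode descriptor might carry those fields explicitly. The MIP codes come from the list above; every field name is an assumption, not a published schema:

```typescript
// Sketch of a micronode descriptor (field names are illustrative).
interface MicronodeDescriptor {
  nodeType: "MIP-CMP" | "MIP-EDU" | "MIP-HLT" | "MIP-MFG" | "MIP-SCM" | "MIP-ENV" | "MIP-ENR";
  physicalBoundary: string;                                    // e.g. facility footprint or site id
  devices: { deviceId: string; calibrationChain: string[] }[];
  installedMips: string[];
  retention: { retainDays: number; disclosure: "local-only" | "proofs-only" };
  ownership: { owner: string; payoutPolicy: string };
}
```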
Current State
[EVIDENCE]
Implemented: challenge/work-package/tasklet object hierarchy, operator/cell/lease concepts, intake session state machine, conversational intake CLI, challenge artifact confirmation, work-package proposal session state machine, scheduler and child-task queueing, knowledge-base compilation loop, trace/OTLP export, proof and recovery layers.
Target next: richer brief/thesis artifacts above challenge, AI-assisted challenge-to-work-package proposal handlers, AI-assisted work-package-to-tasklet sessions, durable participant/directory layer, richer operator/cell runtime state, UI review surfaces, persistent stores for planning sessions.
The Canopy-Shaped Hole
[CONVICTION]
In the Mycel-as-network framing ("Mycel is the network, everything else is a protocol or service inside it"), Canopy fills the gap between Arena (routes opportunities) and OpenGrid (routes execution). Without Canopy, the architecture can identify problems and place compute but has no answer for who actually supervises the bounded work in between.
"OpenGrid is how Mycel routes compute. Arena is how Mycel routes challenges. Canopy is how Mycel coordinates work and proves outcomes."
The factory operator says: "We're on Mycel. Our factory runs OpenGrid for inference, Canopy for work coordination and proof, and Arena for our supply chain challenges."
Discovery Network Exemplars
[FRONTIER]
Companies like Medra (autonomous science labs with Physical AI plus Scientific AI in continuous closed loops) represent the first real implementations of what the discovery network needs as a primitive. A Medra autonomous lab is essentially a protocol-native lab cell -- a micronode for science with physical boundary, control interfaces, evidence sources, and a facility runtime. The alignment pattern: Medra generates world-class raw evidence; Canopy coordinates federated research programs across multiple such labs; Mycel provides the portable proof grammar (proof of replication, proof of benchmark delta, proof of translation) that makes one lab's outputs composable with every other lab without centralizing raw data.
See Also
- Macrocosm Overview -- the venture Canopy serves
- Verification Infrastructure -- the broader verification thesis
- Four Protocol Layers -- attestation, discovery, coordination, settlement
- Microcosm Architecture -- the personal-scale equivalent
- Session-Native Architecture -- the session management Canopy coordinates