Microcosm Architecture
[CONVICTION]
The state engine is 70% of the system. It runs continuously, senses passively, and maps the attractor landscape. Every dojo session is training data. Every practice session is refinement data. Every real-life detection is validation data. The more you use the system, the better your personal verification AI becomes. The better your verification AI, the higher confidence your proofs carry.
The Five Kosha Sensing Layer
[CONVICTION]
The Vedantic Pancha Kosha (five sheaths) model provides the sensing architecture. Each layer has its own attractor landscape, transitions, and intervention channels. The layers are coupled -- a breath intervention (pranamaya) shifts emotional state (manomaya), which changes decision quality (vijnanamaya). The system tracks propagation across layers to distinguish genuine change (which propagates) from superficial change (which stays in one layer).
Breath is the root biomarker -- simultaneously input (you can change it), output (reflects unconscious state), and capacity measure (respiratory dynamics reveal regulation over time). See latent human capacities for the evidence chain.
| Signal | Source | Reveals | Kosha |
|---|---|---|---|
| Breath | Phone mic, watch, sensor | Primary state. All regulation signatures. | Pranamaya |
| HRV | Watch (SDNN) | Autonomic balance. Coherence. Capacity. | Pranamaya |
| Sleep | Watch + phone | Recovery. G restoration. Next-day capacity. | Annamaya |
| Movement | Accelerometer | Physical state. Exercise. Stillness. | Annamaya |
| Eating | Accelerometer pattern | Food-state correlations. No logging. | Annamaya |
| Voice | Mic in sessions | Emotional signatures. Topic-state map. | Manomaya |
| Self-report | Voice check-in | Subjective state. Meaning. What sensors miss. | Vijnanamaya+ |
Voice processing runs on-device wherever feasible. Voice features are weak signals with high uncertainty unless validated within-person through dojo grounding. The system avoids "emotion recognition" claims -- it learns YOUR personal voice-state correlations through labeled data, not generic models.
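The coupling rule above -- genuine change propagates upward through the sheaths -- can be sketched in a few lines. The record shape, names, and depth heuristic are illustrative assumptions, not the system's actual model:

```python
from dataclasses import dataclass
from enum import Enum

class Kosha(Enum):
    ANNAMAYA = "annamaya"        # physical body
    PRANAMAYA = "pranamaya"      # breath / energy
    MANOMAYA = "manomaya"        # emotional mind
    VIJNANAMAYA = "vijnanamaya"  # discernment / decision quality
    ANANDAMAYA = "anandamaya"    # being / bliss

@dataclass(frozen=True)
class SignalReading:
    source: str         # e.g. "watch_hrv" (illustrative name)
    kosha: Kosha        # sheath this signal primarily reveals
    value: float
    uncertainty: float  # measurement stddev

# Lower-to-upper ordering used by the propagation heuristic:
PROPAGATION_ORDER = [Kosha.ANNAMAYA, Kosha.PRANAMAYA, Kosha.MANOMAYA,
                     Kosha.VIJNANAMAYA, Kosha.ANANDAMAYA]

def propagation_depth(shifted: set) -> int:
    """Crude 'integration depth': count contiguous layers that shifted,
    starting from the lowest layer that moved."""
    depth, started = 0, False
    for kosha in PROPAGATION_ORDER:
        if kosha in shifted:
            started = True
            depth += 1
        elif started:
            break
    return depth
```

A breath intervention that also moved emotional state would score depth 2; one that stayed in pranamaya scores 1.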
Five Data Layers
[CONVICTION]
Five explicit layers, each inheriting uncertainty from below and adding its own:
| Layer | Contains | Confidence Source | Access |
|---|---|---|---|
| Signals | Raw sensor data (breath waveform, HRV samples, accelerometer) | Device attestation + calibration chain | Device only. Tier 0. Never leaves sensor. |
| Features | Derived measures (SDNN, sleep stages, breath rate, voice features) | Known measurement limits per device | Local server. Tier 1. |
| State | Probabilistic latent vector aligned to koshas and pillars | Model version + training provenance + N labeled sessions | Local server. Tier 1. Policy-governed. |
| Interpretation | Basin detection, perturbation hypotheses, portal windows, triggers | State confidence + landscape model maturity | Local server. Tier 1-2. |
| Household | Emergent dynamics, cross-person patterns, family-level triggers | Individual confidences + relational model maturity | Privacy-tiered per family member. |
The kosha model lives at the State layer. Attractor/basin detection lives at Interpretation. Household dynamics live above both. This separation prevents the system from treating uncertain sensor readings as ground truth about inner life.
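A minimal sketch of the inheritance rule, assuming layer confidences compound independently (the real engine would model correlated uncertainty; all numbers are made up):

```python
from dataclasses import dataclass, field

@dataclass
class LayerEstimate:
    layer: str            # "signals", "features", "state", ...
    value: object
    confidence: float     # in [0, 1]
    provenance: list = field(default_factory=list)

def derive(parent, layer, value, own_confidence):
    """A derived layer inherits the confidence of the layer below and
    compounds it with its own (independence assumed for the sketch)."""
    return LayerEstimate(
        layer=layer,
        value=value,
        confidence=parent.confidence * own_confidence,
        provenance=parent.provenance + [parent.layer],
    )

raw_hrv = [812, 798, 805, 840]  # RR intervals in ms (made-up data)
signals = LayerEstimate("signals", raw_hrv, 0.98)            # attestation
features = derive(signals, "features", {"sdnn": 48.0}, 0.90)
state = derive(features, "state", {"basin": "coherence"}, 0.75)
# state.confidence == 0.98 * 0.90 * 0.75 -- an interpretation can never
# claim more certainty than the sensors beneath it.
```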
The Tesla-Style Trigger System
[CONVICTION]
Monitors continuously. Acts only on interesting events. Silent 95% of the time.
- State transition: entering/leaving a known basin. Protect flow, offer reset for stress, capture novelty.
- Anomaly: unmapped state. Flag for labeling to expand dictionary.
- Portal window: saddle point where small perturbation catalyzes shift. Surface opportunity.
- Risk pattern: approaching harmful basin. Early intervention.
- Household trigger: family-level pattern. "All kids dysregulated -- likely correlates with parent state."
Every trigger logs what fired, its confidence, an explanation packet, the user response, and confirmation/denial. The system tracks precision/recall per trigger type and adjusts thresholds automatically. This calibration feedback loop makes the verification AI improve over time. NO-ESCALATION applies: a trigger fires once with a single haptic tap; if ignored, the system records the outcome and moves on.
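The calibration feedback loop might look like this sketch; the threshold, step size, and warm-up count are placeholder values, not the system's:

```python
class TriggerCalibrator:
    """Tracks confirmations/denials for one trigger type and nudges its
    firing threshold toward a target precision. NO-ESCALATION is the
    caller's job: fire at most once per event regardless of threshold."""

    def __init__(self, threshold=0.6, target_precision=0.8, step=0.02):
        self.threshold = threshold
        self.target = target_precision
        self.step = step
        self.confirmed = 0
        self.denied = 0

    def should_fire(self, confidence: float) -> bool:
        return confidence >= self.threshold

    def record_feedback(self, confirmed: bool) -> None:
        if confirmed:
            self.confirmed += 1
        else:
            self.denied += 1
        total = self.confirmed + self.denied
        if total < 10:            # wait for enough feedback first
            return
        precision = self.confirmed / total
        if precision < self.target:
            self.threshold = min(0.95, self.threshold + self.step)  # fire less
        else:
            self.threshold = max(0.30, self.threshold - self.step)  # fire more
```

A run of denied triggers raises the threshold, so a noisy trigger type quiets itself without any manual tuning.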
Dictionary Building
[EVIDENCE]
Three data types with different label quality and learning functions:
| Data Type | Source | Label Quality | Learning |
|---|---|---|---|
| Labeled | Dojo sessions (controlled) | High -- system knows activity + target | Builds dictionary |
| Partial | Practice sessions | Medium -- knows type, not experience | Refines dictionary |
| Unlabeled | Real life (watch sensing) | Low per point, massive volume | Tests + validates |
Dojo sessions are versioned measurement protocols: standardized scripts that can be re-run for recalibration. Versioning the protocol lets you track model drift -- your breath signature at Month 1 versus Month 6. Sessions map across all five koshas: breath (regulation signatures), body scan (somatic holding), voice (topic-state correlations), emotional range (stimulus response), Socratic inquiry (cognitive signatures), meditation (attention baselines), movement (physical capacity).
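A sketch of protocol versioning and a crude drift check; the protocol name, tolerance, and Month 1 versus Month 6 numbers are all made up:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class DojoProtocol:
    name: str       # e.g. "box-breath-4x4" (hypothetical)
    version: str    # bumped on any script change
    script: tuple   # ordered steps, re-runnable verbatim

protocol = DojoProtocol("box-breath-4x4", "v1",
                        ("inhale 4", "hold 4", "exhale 4", "hold 4"))

def drift(baseline, current, tolerance=0.15):
    """Flag drift when the mean feature under the SAME protocol version
    shifts more than `tolerance` relative to baseline."""
    b, c = mean(baseline), mean(current)
    return abs(c - b) / abs(b) > tolerance

month1 = [6.1, 5.9, 6.0]  # breaths/min under protocol v1, Month 1
month6 = [5.0, 4.9, 5.1]  # same protocol version, Month 6
drifted = drift(month1, month6)  # ~17% shift: time to recalibrate
```

Comparing only like-versioned runs is the point: a changed script would explain a shifted signature, so version bumps reset the baseline.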
The Five-Layer Probabilistic Verification Stack
[CONVICTION]
Five layers making fabrication progressively more expensive:
Layer 1 -- Device attestation: Watch has secure enclave. Breath data hardware-signed. Cannot fake physiological signal without device compromise.
Layer 2 -- Personal model: Trained on hundreds of YOUR labeled sessions. Knows the difference between deliberate slow breathing and a genuine parasympathetic shift. Generic spoofing fails against a personalized model.
Layer 3 -- Temporal consistency: Longitudinal landscape model tracks months. Fabricating a three-month capacity trajectory requires consistent spoofing across hundreds of datapoints across multiple modalities. Cost of lying scales with time.
Layer 4 -- Cross-layer integration: Genuine change propagates across koshas. Breath AND HRV AND voice AND self-report AND behavioral patterns all shift coherently. Faking five layers simultaneously is extremely expensive.
Layer 5 -- Perturbation response: System can actively test claims. Claim regulation improved? Present perturbation and measure response. Random verification prompts prevent rehearsed responses.
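Layer 4's coherence requirement can be illustrated with a toy direction-agreement score; the actual cross-layer model would be far richer than sign agreement:

```python
def coherence_score(deltas, expected_sign=1):
    """Fraction of modalities whose session delta agrees in direction.
    Genuine change shifts breath, HRV, voice, self-report, and behavior
    together; a fabricated claim typically moves only the channel the
    user can consciously control."""
    agreeing = sum(1 for d in deltas.values() if d * expected_sign > 0)
    return agreeing / len(deltas)

session = {"breath": +0.8, "hrv": +0.5, "voice": +0.3,
           "self_report": +0.9, "behavior": -0.1}
score = coherence_score(session)  # 4 of 5 modalities agree
```

A session where only self-report improved would score 0.2, flagging the claim for Layer 5's active perturbation test rather than accepting it.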
Anti-Gaming Architecture
[CONVICTION]
When VCR settlement incentivizes "looking improved," the primary adversary is the user themselves. Beyond the five-layer verification stack:
- Randomized active verification: unexpected micro-perturbations compared against claimed capacity
- Baseline integrity: dojo baselines cannot be retroactively modified; historical landscape model detects discontinuity
- Conservative acceptance: Proof of Capacity requires sustained trajectory across months, not acute session deltas
- Cross-person statistical anomaly detection: statistically implausible trajectories decrease proof confidence (anonymized, DP-protected)
- Separation of lifestyle and settlement: settlement incentives are optional and late-phase. Most users will never produce proofs for settlement.
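The anomaly check might discount confidence smoothly rather than reject outright -- a sketch, with the z-cap and cohort numbers chosen arbitrarily:

```python
from statistics import mean, stdev

def trajectory_plausibility(user_slope, cohort_slopes, z_cap=3.0):
    """Confidence multiplier in (0, 1]. Trajectories far outside the
    cohort distribution (anonymized and DP-protected in the real
    system) reduce proof confidence rather than accuse anyone."""
    mu, sigma = mean(cohort_slopes), stdev(cohort_slopes)
    z = abs(user_slope - mu) / sigma
    return 1.0 if z <= z_cap else z_cap / z

cohort = [0.08, 0.10, 0.12, 0.09, 0.11]  # made-up monthly capacity slopes
typical = trajectory_plausibility(0.10, cohort)  # within cohort: full weight
suspect = trajectory_plausibility(0.50, cohort)  # 5x cohort: heavy discount
```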
Mycel Integration: Proofs of Human Development
[CONVICTION]
Microcosm is the human development domain of the Mycel protocol. The state engine is a Mycel-compliant proof producer. Five proof types:
| Proof | What It Verifies | Evidence + Confidence |
|---|---|---|
| Proof of State | Your state at a moment | Device TPM + sensor calibration + model version + confidence bounds |
| Proof of Practice | You did a session; entry/exit state delta | State deltas with integration depth (how many kosha layers shifted) |
| Proof of Capacity | Longitudinal development over months | Basin depth, G trajectory, perturbation resilience. Requires sustained consistency. |
| Proof of Experiment | N-of-1 trial with pre-registered hypothesis | Pre-registration + state data + uncertainty bounds. CENT-compatible. |
| Proof of Care | A portal or practitioner contributed to verified state change | Attribution: this intervention preceded this delta with confidence X. |
Settlement weighting explicitly favors longitudinal capacity development over acute session deltas: Proof of Practice gets 0.1x weight, Proof of Capacity gets 5x, Proof of Experiment gets 2x, Proof of Care gets 1x. Portal creators are incentivized toward genuine development, not short-term feel-good.
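The stated weights make the incentive concrete. A toy scorer, assuming settlement reduces to a weighted sum over (proof type, confidence) pairs -- the actual settlement math lives in the Mycel protocol, not here:

```python
SETTLEMENT_WEIGHTS = {
    "proof_of_practice":   0.1,  # acute session deltas
    "proof_of_capacity":   5.0,  # sustained longitudinal development
    "proof_of_experiment": 2.0,  # pre-registered N-of-1
    "proof_of_care":       1.0,  # attributed intervention
}

def settlement_score(proofs):
    """Weighted sum over (proof_type, confidence) pairs."""
    return sum(SETTLEMENT_WEIGHTS[kind] * conf for kind, conf in proofs)

# Thirty high-confidence practice sessions are worth less than one
# solid capacity proof:
practice = settlement_score([("proof_of_practice", 0.9)] * 30)
capacity = settlement_score([("proof_of_capacity", 0.7)])
```

Under these weights a portal cannot farm settlement from feel-good sessions; only sustained, verified development pays.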
Privacy Tiers
[CONVICTION]
| Tier | What | Who Sees It |
|---|---|---|
| 0 | Raw signals (breath waveform, HRV samples, voice audio) | Device only. Never leaves sensor. |
| 1 | State estimates (pillar values, kosha vectors, basin detection) | Local server only. Policy-governed. |
| 2 | Session summaries (type, duration, state delta, integration depth) | Shared with portals per consent. Redacted. |
| 3 | Capacity proofs (longitudinal development claims) | ZK-wrapped for external parties. |
| 4 | Aggregate contributions (population evidence) | DP or ZK for research. No individual attribution. |
Zero-knowledge proofs apply strictly at external trust boundaries: tribe matching (prove embedding similarity without revealing either embedding), aggregate evidence (prove capacity improvement without revealing data), portal settlement (prove positive deltas without revealing session content), capacity claims for institutions (prove capacity exceeds threshold without revealing landscape).
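A sketch of the tier gate; the audience names and the consent rule are illustrative assumptions, not the protocol's vocabulary:

```python
from enum import IntEnum

class Tier(IntEnum):
    RAW = 0        # never leaves the sensor
    STATE = 1      # local server only
    SUMMARY = 2    # portals, per consent, redacted
    PROOF = 3      # ZK-wrapped for external parties
    AGGREGATE = 4  # DP/ZK research contributions

# Lowest tier each audience may ever receive (hypothetical names):
AUDIENCE_FLOOR = {"device": Tier.RAW, "local_server": Tier.STATE,
                  "portal": Tier.SUMMARY, "external": Tier.PROOF,
                  "research": Tier.AGGREGATE}

def may_release(tier, audience, consented):
    """Data flows only to audiences whose floor is at or below the
    data's tier; anything beyond the local server additionally
    requires an explicit consent object."""
    floor = AUDIENCE_FLOOR[audience]
    if tier < floor:
        return False  # e.g. raw signals never reach a portal
    return consented or floor <= Tier.STATE
```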
Household Architecture
[EVIDENCE]
Multi-person from the ground up:
- Individual profiles: each person has their own V, G, dictionary, streams, and practice history. Privacy between siblings. Age-gradient parent visibility.
- Household state: emergent from individual states. Family-level attractor dynamics tracked.
- SOUL.md: parent shapes child's learning container. Children have autonomy within it.
- Family experiments: "Outdoor-first mornings for a week" -- track all family members, compare to baseline. "Bedtime 30 minutes earlier" -- track sleep, morning state across household.
Verification Module (App)
[EVIDENCE]
The app's verification module runs after every session as an LLM-as-judge pipeline:
- Cognitive depth classification -- per-response depth level using rubric (retrieval / reasoning / insight / seed)
- Domain localization + depth -- concepts touched, depth per concept, misconceptions detected
- Trait signal extraction -- reasoning style, epistemic behavior, metacognitive calibration observations
- Archetype update -- fuse new observations with existing archetype, weighted by recency and confidence
- Proof emission -- package assessment into Mycel MIP-EDU proof envelope
The archetype stored in the vault is the exterior model of who this person is as a developing being -- reasoning style, pillar balance (curiosity/agency/regulation), cognitive depth profile, depth per domain, trait trajectory. Measured in kilobytes, not gigabytes. States and proofs, not raw data.
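The final emission step might look like this sketch. Every field name is illustrative -- the MIP-EDU envelope schema is defined by the Mycel protocol, not here -- but it shows the two properties the text requires: consent-gated emission, and hashes traveling instead of raw data:

```python
import hashlib
import json
import time

def emit_proof_envelope(assessment, model_version, consent_token):
    """Package an LLM-as-judge assessment for emission. Returns None
    without explicit consent, per the Proof Engine description."""
    if consent_token is None:
        return None  # no consent, no emission
    payload = json.dumps(assessment, sort_keys=True).encode()
    return {
        "schema": "MIP-EDU",             # assumed identifier
        "model_version": model_version,  # provenance for confidence
        "issued_at": int(time.time()),
        "payload_hash": hashlib.sha256(payload).hexdigest(),
        "consent": consent_token,
        # the raw assessment stays in the vault; only the hash travels
    }
```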
Server Components
[EVIDENCE]
State Engine (70%): per-person sensing pipeline with five-layer data model, uncertainty tracking at every layer, provenance on every estimate, trigger system with precision/recall feedback, Mycel proof production.
Policy + Consent Engine: purpose-limited consent objects (not toggles), per-person/per-component/per-portal, age-gradient privacy with automatic policy adjustment, immutable audit log.
Session Engine: modes (Mirror, Guru, Coach, Companion), dojo with versioned protocols, age-adaptive, multi-person family sessions, real-world wrapping.
Thought-Stream Store: domains with embeddings, auto-classification, cross-domain patterns.
Experimentation Engine: individual and household trials, pre-registration, CENT-compatible reporting, Mycel Proof of Experiment artifacts.
Proof Engine (Mycel Boundary): converts internal events into proof envelopes with selective disclosure, settlement weighting applied, anti-gaming checks pre-emission, only activated with explicit consent.
Build Sequence
[EVIDENCE]
Phase 0 -- Seed (Verification + Analytical Validation): one person, your hardware. State engine v0 (breath + HRV from Apple Watch to Mac Mini), dojo v0, triggers v0, mirror v0, policy v0. Gate: sensor pipeline verified, state model correlates with self-report.
Phase 1 -- Depth (Full Analytical Validation): streams, practice library, voice interface, OpenClaw bridge v0 (read-only, isolated, audited), first Mycel Proof of Practice. Gate: dictionary accuracy, trigger precision/recall.
Phase 2 -- Household (Clinical Validation Begins): partner + child, shared streams, Guru/Brih, SOUL.md, family experiments, age-adaptive interfaces. Gate: household pilot (2-5 families), adverse effects monitored.
Phase 3 -- Verification (Clinical Deepens): Proof of Capacity (longitudinal), Proof of Experiment (N-of-1), portal settlement v0 with anti-gaming, full OpenClaw integration.
Phase 4 -- Community: tribe v1 with ZK/PSI matching, multiple households, homeschool network, portal ecosystem with signed manifests and sandboxed execution.
Phase 5 -- Platform: open MIP-DEV protocol, portal library with weighted VCR settlement, federated verification (aggregate without individual exposure), OpenGrid integration with community-owned compute.
Validation Framework
[EVIDENCE]
Three stages passed sequentially (DiMe V3):
| Stage | Question | Method |
|---|---|---|
| Verification | Do the sensors work? Does the pipeline produce what it claims? | Device testing, signal comparison to reference, pipeline audit |
| Analytical Validation | Does the state engine track meaningful constructs? Do triggers fire accurately? | Dojo-labeled data vs model output, trigger precision/recall, self-report correlation |
| Clinical Validation | Does using the system actually improve capacity? Are there harms? | N-of-1 experiments, household pilots, longitudinal tracking, adverse effect monitoring |
Adverse effects tracked from Phase 0: orthosomnia (sleep tracking anxiety), metric obsession (compulsive checking), prompt pressure (mirror becoming burden), family surveillance drift (household dashboard becoming coercive).
The LLM as Fifth Kosha Sensor
[CONVICTION]
The architecture is not: Sensors -> State Engine -> LLM Translation -> User. It is:
Hardware sensors (annamaya + pranamaya) feed the State Engine alongside the LLM as Sensor (manomaya + vijnanamaya + anandamaya). The State Engine produces the landscape V_task. The LLM as Narrator reads the State Engine and communicates it to the User. The User speaks to the LLM as Labeler, which feeds back into the State Engine.
The LLM is on BOTH sides of the state engine. It feeds in (as a sensor for upper koshas plus a labeling engine) and reads out (as a narrator for the person). The state engine is the geometric core, but the LLM is how the upper koshas are perceived and how the whole landscape is communicated.
The landscape without the LLM is annamaya-only. If you strip away the LLM and train V_task on sensor data with sensor-derived labels, you get a topology of autonomic states -- what Whoop and Oura already approximate. Two physiologically identical HRV dips might be completely different experiences: one is creative tension (productive, near a vijnanamaya saddle point), the other anxiety rumination (destructive, trapped in a manomaya basin). Without the person's language to disambiguate, V_task merges them into one basin. The topology is wrong because the labeling was too shallow.
The LLM processes what the person says, how they say it, what topics they engage with, what connections they make. It is the sensor for the upper koshas that no hardware can reach. This is why it is required for everything -- even for the breath entrainment case, the landscape that the actuation navigates was built with language. The label that taught V_task "this is the coherence attractor" came from a dojo session where the LLM integrated verbal report with sensor data.
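The disambiguation argument can be made concrete with a deliberately crude sketch -- string labels standing in for the embedding fusion the real system would do:

```python
def label_basin(sensor_signature, verbal_label=None):
    """Without language, physiologically identical signatures collapse
    into one basin; a verbal report from a dojo session splits them.
    Illustrative only -- not the system's actual fusion mechanism."""
    base = f"hrv_dip:{sensor_signature}"
    if verbal_label is None:
        return base  # annamaya-only topology: one merged basin
    return f"{base}|{verbal_label}"

sig = ("hrv", -12.0)  # made-up signature: same dip in both sessions
tension = label_basin(sig, "creative tension, near a breakthrough")
rumination = label_basin(sig, "anxious rumination about work")
merged = label_basin(sig)  # both sessions fall into this without language
```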
This cross-domain pattern generalizes: in Macrocosm, the LLM translates "this soil microbiome is in attractor state X" into intervention hypotheses. In Mycel, it facilitates coordination between humans. In MorphoZero robotics, it could enable robots that explain what they are doing or learn from verbal instructions. The pattern is: the VG framework provides the geometric intelligence, the LLM provides the semantic interface, the combination is what makes the architecture usable by humans and applicable across domains.
See Also
- Microcosm Overview -- the venture thesis and four spaces
- Health -- the pancha kosha intervention stack
- Ascent Spectrum -- regulation to awakening progression
- Latent Human Capacities -- evidence for what Microcosm develops
- Technology as Training Wheels -- scaffolding that graduates
- Verification Infrastructure -- the broader verification thesis
- Canopy Architecture -- the coordination layer for bounded work