Biological Computing
[FRONTIER]
The thesis that existing biological systems -- unmodified forests, soil networks, root systems -- can serve as computational substrates for batch processing through the reservoir computing paradigm. The bottleneck is not physics or biology but the interface: we lack a standardized input/output layer between human-defined problems and living systems. AI dissolves this bottleneck by learning the input-output mapping autonomously.
The Reservoir Computing Argument
A reservoir computer needs three properties: high dimensionality, nonlinearity, and fading memory. The reservoir is fixed; only a simple linear readout is trained. A cubic meter of soil contains billions of microorganisms, kilometers of fungal hyphae, and complex electrochemical gradients -- a staggeringly high-dimensional state space exceeding what can be cheaply built in silicon. Mycelium networks exhibit small-world properties (high clustering coefficients, short path lengths) suited for reservoir computing. Simulated mycelium architectures achieved 97.09% accuracy on MNIST digit classification.
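To make the recipe concrete, here is a minimal sketch of the paradigm in conventional code: a fixed random reservoir, a nonlinear state update with fading memory, and a ridge-regression readout as the only trained component. The reservoir below is a random recurrent network standing in for soil or mycelium; all sizes, scalings, and names are illustrative, not any specific published implementation.

```python
import numpy as np

# Minimal echo state network: the reservoir is fixed and random;
# only the linear readout is fit. Sizes and scalings are illustrative.
rng = np.random.default_rng(0)
n_in, n_res = 1, 500

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))    # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W))) # spectral radius < 1 -> fading memory

def run_reservoir(u):
    """Drive the fixed reservoir with an input series u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)  # nonlinear, high-dimensional update
        states.append(x.copy())
    return np.array(states)

def fit_readout(states, targets, ridge=1e-6):
    """Train the only learned component: a linear (ridge-regularized) readout."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                           states.T @ targets)

# usage: W_out = fit_readout(run_reservoir(u_train), y_train)
#        y_pred = run_reservoir(u_test) @ W_out
```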
The standard objections collapse under examination. "Too slow" -- batch jobs tolerate delay by definition. Climate modeling, protein folding, optimization problems already take days or weeks. "Too noisy" -- reservoir computing is inherently noise-tolerant; noise increases effective dimensionality. "Not programmable" -- the whole point is that you do not program the reservoir. "Doesn't scale" -- a cubic meter of soil has more interacting nodes than most artificial neural networks.
Forest as Computer
[CONVICTION]
The radical formulation: can we use existing, unmodified ecosystems as computational substrates? A living tree is already a massive signal processing system. Its root network integrates information about moisture gradients, nutrient concentrations, microbial signaling, light, gravity, and temperature across a huge spatial extent. The mycorrhizal network connecting trees processes even more -- mediating resource allocation, transmitting stress signals, adjusting flows.
The compute is already running. What we lack is the interface.
Three components are needed, none requiring modification of the biological system. First, a non-invasive sensing layer -- electrodes in soil (not in organisms), impedance spectroscopy across root zones, chemical sensors for signaling molecules (jasmonic acid, salicylic acid, strigolactones). Second, a non-invasive input encoding layer -- localized watering, nutrient injection, light changes, temperature gradients, sound vibrations. These are stimuli the system already responds to. Third, an AI readout layer that learns the mapping from ecosystem state to computational output without needing to understand the system's internal dynamics.
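A sketch of what the third component might look like in practice, assuming trials of encoded stimuli paired with recorded sensor windows. The data layout, feature scheme, and ridge readout are hypothetical placeholders for illustration, not an existing pipeline; the point is that nothing here models roots, fungi, or chemistry.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical data layout: each trial applies one encoded input (e.g. a
# watering or light pattern) and records a window of multi-channel sensor data.
#   sensor_traces: (n_trials, n_channels, n_timesteps) bioelectric/impedance recordings
#   targets:       (n_trials,) the value the computation should output

def featurize(sensor_traces):
    """Flatten each trial's recorded ecosystem state into a feature vector.
    No model of the system's internal dynamics -- just the raw trajectory."""
    n_trials = sensor_traces.shape[0]
    return sensor_traces.reshape(n_trials, -1)

def train_readout(sensor_traces, targets, alpha=1.0):
    """The only trained component: a regularized linear readout."""
    model = Ridge(alpha=alpha)
    model.fit(featurize(sensor_traces), targets)
    return model

def query(model, sensor_traces):
    """Encode a new problem as stimuli, record the response, read out the answer."""
    return model.predict(featurize(sensor_traces))
```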
Nobody has connected these dots end to end. The Cyberforest Experiment (Italian Institute of Technology) instrumented living spruce trees in the Paneveggio forest with non-invasive electrodes, recording bioelectric potentials from xylem and phloem. It found that bioelectrical signals from different trees can be precisely synchronized, and that the forest can be viewed as a collective array whose correlation structure is naturally tuned. This is exactly the high-dimensional, nonlinear, temporally correlated state space a reservoir needs -- the work has simply not been framed as computation.

The Economics
If biological reservoir computing works even modestly, the scaling economics are radically different from silicon. The "compute infrastructure" is self-maintaining, self-powering, carbon-sequestering, and already exists everywhere. You do not build it -- you instrument it. No fabs, no mining, no extreme temperatures, no global supply chains.
Electricity as Detour
[REFRAME]
The broader frame: industrial technology may be an elaborate workaround for not understanding biology deeply enough. The thermodynamic case is grounded in physics: silicon dissipates roughly 10 billion times the Landauer limit per bit operation, mostly fighting thermal noise and the von Neumann bottleneck (shuttling data between memory and processor). Biology has no such separation -- memory and processing are the same molecular event. Molecular machines operate through near-reversible steps driven by Brownian ratchets, achieving 1,000-10,000x lower energy per operation than silicon.
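A back-of-envelope check on that factor, assuming a round system-level figure of tens of picojoules per bit operation for conventional silicon (compute plus data movement); only the Landauer term below is a physical constant, the silicon number is an assumption chosen for scale.

```python
import math

# Landauer limit: minimum dissipation to erase one bit at temperature T.
k_B = 1.380649e-23                   # Boltzmann constant, J/K
T = 300.0                            # room temperature, K
E_landauer = k_B * T * math.log(2)   # ~2.87e-21 J per bit

# Assumption: tens of pJ per bit operation at the system level for
# conventional silicon, including data movement between memory and processor.
E_silicon = 3e-11                    # J per bit operation (assumed round number)

print(f"Landauer limit: {E_landauer:.2e} J/bit")
print(f"Gap vs. silicon: {E_silicon / E_landauer:.1e}x")  # on the order of 1e10
```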
The entire tech stack -- electricity, semiconductors, telecommunications, digital computing -- may be a detour because we could not read and write the biological substrate that was already doing everything we wanted. Not a wrong turn -- the long way around. And AI may be the mirror that shows us what was always there: the species remembering itself.
See substrate-thesis for the full development, including the domain audit across communication, memory, compute, manufacturing, and travel; the colonization risk; the Negentropic Civilization Index as Kardashev replacement; and the interface-first moonshot architecture.
State of Research
Mycelium reservoir computing is producing real results. A 2025 bioRxiv preprint created morphologically tunable mycelium chips infused with PEDOT:PSS, demonstrated nonlinear high-dimensional state trajectories, and performed NARMA-10 prediction -- the first biodegradable reservoir computing platform. Ohio State researchers showed fungal memristors operating at 5,850 Hz with 90% accuracy as RAM, which can be dehydrated for storage and rehydrated without losing their programmed states.
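For reference, NARMA-10 is a standard reservoir computing benchmark: a tenth-order nonlinear autoregressive series that the readout must reproduce from the input stream. A minimal generator in the commonly used formulation follows (the preprint's exact setup may differ):

```python
import numpy as np

def narma10(n_steps, seed=0):
    """Generate the NARMA-10 benchmark series in its common formulation:
    y[t+1] = 0.3*y[t] + 0.05*y[t]*sum(y[t-9:t+1]) + 1.5*u[t-9]*u[t] + 0.1,
    with inputs u drawn uniformly from [0, 0.5]."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, n_steps)
    y = np.zeros(n_steps)
    for t in range(9, n_steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t]
                    + 0.1)
    return u, y

# A reservoir -- silicon, mycelium, or forest -- is scored by how well a linear
# readout of its state predicts y[t] from the input history u.
```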
The Harvested Reservoir Computing paradigm (2025, Nature Scientific Reports) proposes harvesting intrinsic dynamics of complex real-world systems as natural computational resources. The framework explicitly applies to biological systems. What is missing is someone who treats an unmodified, in-situ ecosystem as a computational reservoir and demonstrates a standard benchmark on it.
Connection to Mesocosm
This concept connects the nature-as-infrastructure thesis to the OpenGrid compute fabric. The three-layer compute vision: (1) OpenGrid distributed silicon (conventional hardware, radical ownership), (2) VGPhi running on conventional silicon (small MLP landscapes, sub-millisecond, edge-deployable), (3) biological compute on carbon-based substrates (7-15+ year horizon -- the substrate IS the computation). The architecture is naturally suited to neuromorphic and analog hardware because it is built from small energy-based models with gradient navigation.
Related
- natures-architecture -- the distributed infrastructure already running
- electrical-ecology -- the electromagnetic continuum biology uses for computation
- biological-superiority -- quantitative performance comparison
- gml-interface -- GML as the read/write interface for biological substrates
- technology-as-training-wheels -- scaffolding that graduates
- exterior-intelligence -- intelligence in the landscape, not the agent
- substrate-thesis -- full development of the substrate detour argument
- intelligence-as-reception -- the consciousness model supporting the substrate shift