What a Neuron Teaches Us About Computation's Limits
Modern neuroscience faces a profound blind spot. When we formalize biological systems through computational models, we don't simply translate biology into another language—we collapse a multi-dimensional concept into a single viewing angle, imposing constraints that eliminate the very properties that make living systems alive.
Three Modes of Knowing: Concept, Description, Computation
The Concept
We have knowledge of the neuron as a concept—an understanding that exists before we try to express it in any particular form. This conceptual knowledge is vast and interconnected. We know the neuron through:
- Its electrical dynamics (action potentials in milliseconds)
- Its biochemical cascades (plasticity over seconds to minutes)
- Its structural plasticity (growth and pruning over hours to days)
- Its metabolic demands (ATP consumption, lactate from astrocytes)
- Its developmental history (migration, differentiation)
- Its evolutionary lineage (500 million years of conservation)
- Its relationships (with other neurons, glia, the soma, the environment)
- Its failures (what happens when energy runs out)
This conceptual understanding doesn't prioritize one aspect over another. It holds multiple perspectives simultaneously—electrical, metabolic, structural, temporal, relational—like viewing a crystal from all angles at once.
The Descriptive Expression
When we express this concept descriptively—in natural language, in biological narrative—we preserve much of that richness. We can shift angles mid-description:
"An action potential races down the axon at 100 meters per second, arriving at the presynaptic terminal in milliseconds. Calcium channels open, ions rush in, forming micro-domains of high concentration that trigger vesicle fusion. But whether fusion actually occurs depends on metabolic state—did the astrocyte supply enough lactate? Were the vesicles already primed by protein synthesis that happened six hours ago in the distant soma? The synapse operates simultaneously on three timescales: microsecond electrical signals, second-long biochemical cascades, and hour-long structural changes. These timescales don't wait for each other—they overlap, interfere, and create effects that feed back across temporal boundaries."
Notice what this description allows:
- Multiple viewpoints: We can look from the electrical perspective, then metabolic, then structural
- Ambiguous causation: Does calcium cause release, or does metabolic state enable calcium to trigger release?
- Temporal flexibility: We move freely between microseconds and hours without forcing synchronization
- Emergent concepts: "Micro-domains" and "metabolic state" aren't predefined—they emerge from the description
- Contextual meaning: What counts as "cause" depends on which timescale we're examining
The descriptive mode is multifaceted. Each reading can emphasize different aspects. An electrophysiologist and a metabolic biologist can both recognize their concerns in the same description.
The Computational Expression
Now watch what happens when we move to computational formalization:
```python
class Neuron:
    def __init__(self):
        self.voltage = -70.0                 # membrane potential (mV)
        self.calcium = 0.0                   # intracellular calcium (arbitrary units)
        self.synaptic_weights = [1.0] * 100  # one weight per input

    def update(self, inputs, dt=0.001):
        # Electrical dynamics
        self.voltage += sum(i * w for i, w in zip(inputs, self.synaptic_weights)) * dt
        # Spike generation
        if self.voltage > -55.0:
            self.calcium += 10.0
            self.voltage = -70.0
            return 1  # spike
        # Decay
        self.calcium *= 0.9
        return 0
```
This computational expression imposes a single viewing angle with three rigid constraints that I call "The Three Locks."
The Three Locks of Computational Formalization
Lock 1: Fixed Causation (No Ambiguity)
In the concept/description: The neuron exists in a web of mutual influences. Does calcium cause vesicle release? Or does metabolic availability enable calcium to trigger release? Or does the spatial arrangement of vesicles near calcium channels determine whether calcium matters? Or does the history of recent activity prime the system to respond? All these causal perspectives coexist without contradiction.
In the computation: Causation must flow in one direction only:
```python
if self.calcium > threshold:
    release_vesicles()
```
There's no room for "the vesicle wasn't ready" or "the ATP wasn't available" unless we add those as additional, pre-specified conditions. The computational model forces us to choose: calcium is either THE cause or it isn't. The causal ambiguity—which might reflect biological reality—is eliminated.
What we lose: The possibility that causation itself is contextual. That calcium "causes" release in a well-fed synapse but merely "correlates with" release in a metabolically stressed one. That the question "what causes what?" doesn't have a single answer independent of timescale and context.
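To make the cost concrete, here is a minimal sketch (the function, factor names, and thresholds are all hypothetical) of what restoring even one layer of context demands: every enabling condition must be enumerated in advance as another fixed branch.

```python
def release_vesicles(calcium, atp, lactate_supplied, threshold=1.0):
    """Hypothetical release gate: each 'enabling' factor we want to
    honor must be hand-written as an explicit, fixed condition."""
    if calcium <= threshold:
        return False          # calcium as 'the cause' -- but only if...
    if atp < 0.5:
        return False          # ...metabolism permits...
    if not lactate_supplied:
        return False          # ...and the astrocyte cooperated.
    return True

# The causal story is now frozen: calcium, ATP, and lactate, in that
# order, with these exact cutoffs. Any factor we did not anticipate
# simply does not exist for this model.
print(release_vesicles(calcium=2.0, atp=0.9, lactate_supplied=True))   # True
print(release_vesicles(calcium=2.0, atp=0.2, lactate_supplied=True))   # False
```

Each branch we add restores one enabling condition, but also fixes its rank in the causal order forever.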
Lock 2: Synchronicity (Global Time)
In the concept/description: The neuron exists on multiple timescales simultaneously and non-hierarchically:
- Fast (0.1-1000 milliseconds): Action potentials fire, neurotransmitters cross synapses
- Medium (1 second to 10 minutes): Biochemical cascades modify synaptic strength, plasticity adjusts weights
- Slow (hours to days): Structural changes reshape architecture, gene expression alters properties
These aren't synchronized—they're overlapping waves with no common clock. A single spike at time t simultaneously triggers an immediate electrical effect (1ms later), initiates a biochemical cascade (30 seconds later), and influences gene expression (6 hours later). This creates causality smearing: effects ripple across temporal boundaries that don't respect each other's existence.
In the computation: We must impose a unified time step:
```python
def update(self, dt=0.001):  # 1 ms timestep
    self.update_voltage(dt)
    self.update_calcium(dt)
    self.update_plasticity(dt)
    self.update_structure(dt)
```
Even if we use differential equations for continuous time, we're still assuming all processes can be coordinated on a single temporal axis. The soma's hour-long protein dynamics and the synapse's microsecond calcium dynamics must be forced into the same temporal reference frame, synchronized by a global parameter dt.
What we lose: The possibility that temporal incoherence between subsystems is functional. That the synapse doesn't "wait" for protein synthesis because they operate in incommensurate time domains, and this incompatibility itself enables behaviors. The neuron doesn't have a master clock—different parts literally experience time differently.
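Even the standard workaround, sub-stepping the fast process inside each slow step, illustrates the lock rather than escaping it. The sketch below (all dynamics and rates invented for illustration) still coordinates both processes on one master axis t.

```python
# A common multirate scheme: the fast variable takes many small steps
# inside each slow step. Both processes remain slaved to one global
# clock t -- the 'incommensurate time domains' have been made
# commensurate by construction.

def simulate(t_end=1.0, dt_slow=0.1, substeps=100):
    fast, slow = 0.0, 0.0
    t = 0.0
    while t < t_end:
        dt_fast = dt_slow / substeps
        for _ in range(substeps):          # fast process: dt_slow/substeps
            fast += (slow - fast) * dt_fast
        slow += (1.0 - slow) * dt_slow     # slow process: dt_slow
        t += dt_slow                       # the single, shared timeline
    return fast, slow

fast, slow = simulate()
print(fast, slow)  # both defined only relative to the same t
```

The nesting ratio `substeps` is itself a pre-specified coupling: we have decided, before the run, exactly how the two timescales interleave.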
Lock 3: Stable Concepts (No Emergent Entities)
In the concept/description: New entities can emerge from the description. We might start talking about "calcium micro-domains" and discover they have properties that matter. The interaction between spine head volume and receptor density creates a relationship we call "synaptic strength." The coordination between astrocyte lactate supply and synaptic activity creates what we call "tripartite synapses." These concepts emerge as we explore the biology from different angles.
In the computation: All entities must be declared before the simulation runs:
```python
class Neuron:
    def __init__(self):
        self.voltage = -70.0
        self.calcium = 0.0
        self.weights = []
        # Must declare everything upfront
```
You cannot have a "micro-domain" spontaneously become relevant halfway through the simulation unless you programmed that possibility in advance. The computational framework is closed under its own operations—it can only manipulate the concepts it started with.
What we lose: The ability to discover new organizing principles. If biology uses emergent structures we didn't anticipate—if the metabolic negotiation between astrocyte and neuron creates a third entity with its own dynamics—our computational model will never reveal it, because it can only show us what we already specified.
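A quick sketch of this closed-world point: even in a language as dynamic as Python, where attributes can be attached at runtime, nothing can ever matter to the model unless some rule naming it was written before the run. (The class, thresholds, and the "micro_domain" attribute here are all invented for illustration.)

```python
class Synapse:
    def __init__(self):
        self.strength = 1.0

    def update(self, calcium):
        # This rule can only mention entities named at write time.
        # We may bolt on a new attribute mid-run...
        if calcium > 5.0 and not hasattr(self, "micro_domain"):
            self.micro_domain = calcium   # does it "emerge"? Not really:
        # ...it only matters because this very line, written in advance,
        # says it does. The framework is closed under its own vocabulary.
        if getattr(self, "micro_domain", 0.0) > 8.0:
            self.strength *= 1.1

s = Synapse()
s.update(calcium=9.0)
print(s.strength)
```

Dynamic attribute creation changes where a concept is stored, not whether the model can discover it: the discovery was scripted.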
The Deeper Problem: Evolving Coupling Rules
But the Three Locks only scratch the surface of computation's limitation. There's a deeper problem that leads to infinite regress.
Even if we tried to model the neuron at multiple timescales, we still must specify how these timescales couple to each other. We need rules like:
- How do fast spikes trigger medium-term plasticity? (Even posing this question concedes Lock 2, since it forces the two timescales onto a common frame.)
- How do medium-term changes modulate fast responses?
- How do slow structural modifications affect both?
In computational models, we hard-code these coupling rules. We write functions that define exactly how Layer 1 (fast) influences Layer 2 (medium), how Layer 2 feeds back to Layer 1.
But here's what the neuron actually does: the coupling rules themselves adapt.
Consider metaplasticity, a well-documented phenomenon:
- Initially: LTP threshold = 10 Hz
- After chronic low activity: LTP threshold = 5 Hz
- After chronic high activity: LTP threshold = 20 Hz
What changed? Not just the synaptic weights (medium timescale), but the rule for how spike frequency (fast timescale) triggers weight changes (medium timescale). The coupling function itself evolved.
This means we need meta-rules: rules that govern how the coupling rules change.
But wait—what governs those meta-rules?
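A sliding-threshold rule in the spirit of metaplasticity (the constants and averaging form below are invented, loosely echoing BCM-style models) shows exactly where the regress bites: the coupling rule adapts, but the rule for adapting it is, once again, hard-coded.

```python
class MetaplasticSynapse:
    """Toy sliding-threshold plasticity. Loosely BCM-flavored;
    all constants are illustrative, not fitted to any data."""
    def __init__(self):
        self.weight = 1.0
        self.ltp_threshold = 10.0   # Hz -- the fast->medium coupling rule
        self.avg_rate = 10.0        # running estimate of recent activity

    def observe(self, rate_hz, tau=0.1):
        # Level 4: the coupling rule itself adapts to activity history...
        self.avg_rate += tau * (rate_hz - self.avg_rate)
        self.ltp_threshold = self.avg_rate  # chronic low activity -> lower threshold
        # ...but THIS adaptation rule (tau, the averaging form) is fixed.
        # Who adapts it? That would be Level 5.
        if rate_hz > self.ltp_threshold:
            self.weight *= 1.05     # LTP

syn = MetaplasticSynapse()
for _ in range(50):
    syn.observe(rate_hz=5.0)        # chronic low activity
print(syn.ltp_threshold)            # the threshold has slid down toward 5 Hz
```

The threshold moves, so the fast-to-medium coupling genuinely evolves, yet `tau` and the exponential-averaging form are frozen meta-rules awaiting their own governor.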
The Infinite Regress
Level 1: Model fast processes (voltage, spikes)
Level 2: Model medium processes that adapt based on Level 1 (plasticity)
Level 3: Model slow processes that adapt based on Level 2 (structural changes)
Level 4: Model meta-processes that adapt the coupling between Levels 1-3 (metaplasticity)
But what governs the coupling at Level 4? We need Level 5 to specify that.
And what governs Level 5? We need Level 6.
And so on, infinitely.
This isn't a technical problem waiting for better algorithms. It's a fundamental conceptual limitation: computation requires pre-specified timescales and coupling rules, but the neuron continuously evolves both its operational timescales and the rules that couple them.
From a computational perspective, it's as if the neuron is writing programs that write programs that write programs...
The Neuron Example: Watching the Complete Collapse
Let's trace exactly how the Three Locks and the Infinite Regress eliminate what makes the neuron alive:
Conceptual Knowledge (Before Expression)
We understand the neuron as simultaneously an electrical device, a biochemical factory, a structural entity that grows and shrinks, a metabolic agent that negotiates with astrocytes, and a historical being shaped by evolution and experience. These aren't separate neurons—they're the same neuron viewed from different angles. The neuron operates across multiple timescales without coordinating them, generates its own organizational principles without pre-specification, and contextually determines what causes what.
Descriptive Expression (Multifaceted)
"When an action potential invades the terminal, voltage-gated calcium channels open. But the terminal's readiness to release depends on metabolic state—ATP levels, astrocyte lactate supply, recent activity history. The calcium forms spatial micro-domains that only nearby vesicles experience, but whether those vesicles are docked depends on protein synthesis that happened hours ago. The release probability is simultaneously deterministic (given all factors) and stochastic (given the subset of factors we're tracking). Meanwhile, this very act of release is changing the plasticity rules that will govern future releases—the coupling between fast electrical events and medium-term biochemical changes is itself being modified by slow structural processes, which are being shaped by gene expression programs that respond to the pattern of electrical activity over days."
This description lets us hold multiple truths simultaneously. It moves fluidly between timescales without forcing them into sync. Causation flows from multiple sources depending on which angle we adopt. New entities (micro-domains, metabolic state, activity patterns) emerge naturally from the narrative.
Computational Expression (Single Angle, Forced Locks, Inevitable Regress)
```python
class Neuron:
    def __init__(self):
        # Lock 3: Must pre-specify all entities
        self.voltage = -70.0
        self.calcium = 0.0
        self.weights = [1.0] * 100
        self.plasticity_threshold = 10.0
        self.structure_level = 1.0

    def update(self, inputs, dt=0.001):
        # Lock 2: Everything updates on the same unified time
        # Lock 1: Fixed causation chain

        # Fast: Electrical dynamics
        self.voltage += sum(i * w for i, w in zip(inputs, self.weights)) * dt
        if self.voltage > -55.0:
            self.calcium += 10.0
            spike = True
            self.voltage = -70.0
        else:
            spike = False

        # Medium: Plasticity (but how does fast couple to medium?)
        if spike and self.calcium > self.plasticity_threshold:
            self.weights = [w * 1.01 for w in self.weights]  # strengthen

        # Slow: Structure (but how does medium couple to slow?)
        if sum(self.weights) > 150:
            self.structure_level += 0.001

        # Meta: Metaplasticity (but how do we govern THIS coupling?)
        if self.structure_level > 1.5:
            self.plasticity_threshold = 15.0  # Level 4

        # Meta-meta: What governs metaplasticity changes? Need Level 5...
        # Meta-meta-meta: What governs Level 5? Need Level 6...
        # ... infinite regress

        self.calcium *= 0.9
        return spike
```
We've chosen one angle: the neuron as a spike generator with modifiable weights.
Lock 1 forces us to say calcium CAUSES the weight change (the `self.calcium > self.plasticity_threshold` conditional), not "calcium enables release if metabolic conditions permit."
Lock 2 forces everything onto a single dt timeline (the `dt=0.001` parameter), eliminating the genuine temporal incoherence where microsecond and hour-long processes don't coordinate.
Lock 3 means we can only work with the variables pre-declared in `__init__`. If a new organizing principle emerges, if the relationship between metabolic state and electrical activity creates something we haven't named, we'll never see it.
And the Regress appears the moment we ask: "Who decides how fast processes couple to medium processes?" We write that coupling by hand (the plasticity conditional). "Who decides how medium couples to slow?" We write that too (the structure conditional). "Who decides when those coupling rules themselves change?" We write that as well (the metaplasticity conditional). But who decides when those meta-coupling rules change? We're trapped.
The computational form has collapsed the conceptual richness into a single, fixed perspective that cannot accommodate the neuron's essential properties: contextual causation, temporal incoherence, emergent organization, and evolving coupling rules.
How the Neuron Escapes the Trap
The neuron doesn't solve the infinite regress problem through logic or computation. It escapes through three complementary mechanisms:
1. Physical Embodiment
Biological constraints naturally limit how timescales emerge and couple. Molecular diffusion rates, protein synthesis times, membrane capacitance—these provide physical boundaries that don't need to be pre-specified. They emerge from what we call "the physics of living matter." The neuron doesn't compute its timing; it is its timing, embodied in physical processes.
2. Evolutionary Pre-Wiring
Some coupling rules are genetically encoded, providing stopping points in the regress. These aren't arbitrary choices—they're couplings sculpted by 500 million years of evolution. The basal ganglia architecture persists across fish, amphibians, reptiles, birds, and mammals because certain coupling patterns work. Evolution changes these couplings at even longer timescales, but that's material for another essay.
3. Environmental Feedback
The environment closes the loop. Success and failure in actual behavior provide the ultimate evaluation criterion, allowing the neuron to discover effective coupling rules without needing infinite meta-levels. The organism either survives or doesn't—there's no regress in death.
The neuron uses all three simultaneously. It's not running a computation—it's a physically constrained system embedded in an environment, with evolutionary heritage, that discovers its own organizational principles through interaction.
Why This Matters: Computation's Proper Domain
This analysis reveals why computation works brilliantly in some domains and fails fundamentally in others:
Computation Succeeds When:
- Systems operate on a single dominant timescale (or clearly separated timescales)
- Coupling rules remain fixed during the process being modeled
- Components can be pre-specified before the model runs
- Causation flows in definable directions
- We're analyzing systems, not systems that analyze themselves
Examples: Classical physics, structural engineering, electronic circuits, orbital mechanics, weather prediction (mostly)
Computation Fails When:
- Components operate at multiple timescales that interact continuously and non-hierarchically
- Coupling rules themselves evolve during operation
- New entities and relationships emerge that weren't pre-specified
- Causation is contextual and ambiguous
- The system generates its own frames of reference
- We're dealing with autonomous creativity
Examples: Living neurons, genuine intelligence, embryological development, evolutionary innovation, consciousness
The Concept of Autonomous Creativity
What unites the neuron's behavior—and what computation cannot capture—is autonomous creativity: the capacity to generate genuinely novel organizational principles, not just novel combinations of existing elements.
The neuron doesn't just process inputs according to fixed rules. It continuously reorganizes its own processing architecture based on its history, current state, and environment. It creates new coupling rules, new timescale relationships, new responses that couldn't be predicted from its prior state.
This creative capacity operates across multiple timescales simultaneously, with the coupling between timescales itself being part of what's created. From a computational perspective, the neuron appears to refuse the very constraints that would make it computable:
- It refuses fixed causation (Lock 1)
- It refuses temporal synchronization (Lock 2)
- It refuses conceptual closure (Lock 3)
- It refuses pre-specified coupling rules (Infinite Regress)
Computation cannot refuse an instruction, but the neuron can. A computational model will execute `if calcium > threshold: release()` whenever the condition is met. But a real neuron can refuse: if it lacks ATP, if the astrocyte hasn't supplied lactate, if the ion gradients haven't recovered. This isn't a bug; it's a metabolic veto. The neuron says: "I can't afford this operation right now."
This ability to refuse based on context, history, and metabolic economics is precisely what makes the neuron alive and what makes it incomputable.
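The irony is easy to demonstrate: to let a model "refuse," we must pre-author the refusal as yet another fixed condition, which is exactly what the real neuron never has to do. A hypothetical sketch (the function, the ATP bookkeeping, and all constants are invented):

```python
def try_release(calcium, atp, threshold=1.0, atp_cost=0.3):
    """Hypothetical 'metabolic veto': the model only refuses because
    we scripted this exact refusal in advance."""
    if calcium <= threshold:
        return False, atp              # no trigger
    if atp < atp_cost:
        return False, atp              # the pre-scripted 'veto'
    return True, atp - atp_cost        # release, and pay the cost

# A well-fed synapse releases; a depleted one 'refuses' -- but only
# along the single axis (ATP) we thought to encode.
print(try_release(calcium=2.0, atp=1.0))   # (True, 0.7)
print(try_release(calcium=2.0, atp=0.1))   # (False, 0.1)
```

The model's "no" is just another "if"; the neuron's "no" draws on context, history, and an economics no one wrote down.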
Beyond Computation: The Path Forward
Recognizing computation's limitation isn't a rejection of formal reasoning or mathematical precision. It's recognition that we need different conceptual frameworks for different kinds of concepts.
For physics and engineering—where components are fixed, timescales are stable, and coupling is predetermined—computation is perfect and should remain our primary tool.
For the neuron and other living systems—where autonomous creativity generates new components, new timescales, and new coupling rules—we need fundamentally different approaches. Not better computation, but frameworks that can accommodate:
- Processes that generate their own timescales rather than operating on pre-specified clocks
- Systems that discover their own coupling rules rather than executing predetermined ones
- Creativity that produces genuinely novel organizational principles
- Autonomous generation of meaning and relevance rather than externally imposed frames
- The ability to refuse based on context and metabolic economics
This is where approaches like Geneosophy become essential. Rather than trying to understand life and intelligence through the computational framework, we need frameworks that can comprehend the generative processes from which computational models themselves arise—the primordial creative capacity that makes all formalization possible but which cannot itself be formalized computationally.
Conclusion: The Neuron's Lesson
The neuron teaches us that formalization isn't a neutral lens—it's a filter. When we formalize biology computationally, we don't reveal neural truth; we impose a single viewing angle while excluding all others.
We lock causation into fixed arrows (Lock 1), synchronize incommensurate timescales (Lock 2), close the system to emergent entities (Lock 3), and fall into infinite regress when we try to specify how our pre-specified levels couple (The Regress).
The neuron operates outside these locks. It lives in causal ambiguity, temporal incoherence, organizational emergence, and evolving coupling rules. It can refuse. It creates.
The question isn't "How can we compute the neuron better?" It's "What kind of conceptual framework can comprehend a process that continuously creates its own computational principles?"
That's the question we must answer if we want to understand not just the neuron, but life and intelligence themselves. The infinite regress we encounter when formalizing living systems computationally isn't a bug to be fixed—it's a signpost pointing beyond computation itself, toward the recognition that some aspects of reality require fundamentally different conceptual tools.
The neuron isn't a biological implementation of a computational principle. It's an expression of autonomous creativity that makes computation possible—along with everything else. To understand it, we must investigate that generative source directly, not through computational proxies that inevitably miss what matters most.