Abstract
Can learning emerge from adaptive equilibrium points rather than synaptic weights? This project tests a novel hypothesis, elastic homeostasis: neurons maintain stable firing patterns through local regulation, but sufficiently large perturbations can permanently shift them to new equilibrium states.
Unlike strict homeostasis (which always returns to the original state) or pure plasticity (which changes freely), elastic homeostasis proposes that small perturbations are corrected back to the current equilibrium, while large perturbations push the neuron past a threshold into a new stable state. Learning emerges as the accumulation of these equilibrium shifts.
The Core Question
Traditional neural networks learn through global error signals (backpropagation), while biological neurons rely on local chemical signals. Most bio-inspired models focus on synaptic plasticity (Hebbian learning, STDP). This project asks: Can learning emerge from adaptive equilibrium points rather than adaptive connection weights?
The Mechanism
Built on the Izhikevich spiking neuron model, adding homeostatic parameter adaptation:
Standard Model: Four parameters (a, b, c, d) control neuron firing behavior
Elastic Extension: Parameters slowly adapt based on deviation from target membrane potential
When membrane potential v > target → Parameters shift to reduce excitability
When v < target → Parameters shift to increase excitability
Slow adaptation rate (ε = 10⁻⁵) filters out rapid transients; sustained inputs shift the equilibrium
Implementation & Results
Scale: 100,000-neuron simulation with spatial organization (20×20×20 grid)
Success: Single-neuron regulation works - parameters adapt to maintain target activity
Challenge: Network-level learning failed to emerge in behavioral tasks (Pong test)
Key Limitation Identified
Post-implementation analysis revealed a fundamental issue: the dynamics are linear, which prevents true equilibrium shifts. The current formulation lacks the non-linear terms needed for genuine bistability. Without non-linear restoring forces creating distinct attractor basins, the system maps inputs to parameters linearly instead of exhibiting multiple stable equilibria.
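The limitation can be made concrete with a one-dimensional toy model (an illustration of the analysis, not the project's code): a linear restoring force has a single equilibrium that every perturbation returns to, while a cubic force has two attractor basins separated by an unstable point.

```python
# Toy comparison: linear vs. bistable one-dimensional dynamics,
# integrated with Euler steps. Constants are illustrative assumptions.

def settle(force, x0, steps=10000, dt=0.01):
    """Integrate dx/dt = force(x) until the state (approximately) settles."""
    x = x0
    for _ in range(steps):
        x += dt * force(x)
    return x

linear = lambda x: -(x - 0.5)     # single equilibrium at x = 0.5
bistable = lambda x: x - x**3     # stable equilibria at x = ±1, barrier at 0

# Linear: any perturbation, however large, returns to the same point.
print(settle(linear, 0.6), settle(linear, 5.0))      # both ≈ 0.5

# Bistable: initial conditions on opposite sides of the barrier are
# captured by different attractors - a genuine equilibrium shift.
print(settle(bistable, 0.1), settle(bistable, -0.1))  # ≈ +1.0 and -1.0
```

In the linear case "memory" of a perturbation is impossible by construction, which is exactly why the current formulation cannot accumulate equilibrium shifts.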
What Worked
- Conceptual proof: Local homeostatic rules can be implemented computationally
- Filtering behavior: Slow adaptation successfully filters rapid transients
- Scalability: Successfully simulated 100k neurons with spatial organization
What Didn't
- No emergent task learning in behavioral tests
- Runaway excitation in networks
- Linear dynamics - no true attractor basins
- Timescale mismatches between adaptation and task demands
Future Directions
The hypothesis remains plausible but needs mathematical reformulation. Next steps:
- Non-linear dynamics: Polynomial/exponential restoring forces to create genuine attractor basins
- Potential function formulation: Define energy landscape with local minima
- Explicit threshold gating: Clear transitions between homeostatic modes
- Simpler test cases: 2-3 neuron systems before scaling to networks
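The "explicit threshold gating" direction above might look like the following minimal sketch on a scalar state. All names and constants (`THRESHOLD`, `K_RESTORE`, `K_SHIFT`) are made up for illustration: small deviations are corrected back to the current setpoint, while deviations past the threshold drag the setpoint itself to a new value.

```python
# Hypothetical threshold-gated elastic homeostasis on a scalar state.
# Constants are illustrative assumptions, not values from the project.

THRESHOLD = 5.0   # deviation beyond which the equilibrium itself shifts
K_RESTORE = 0.1   # restoring rate toward the current setpoint
K_SHIFT = 0.01    # rate at which the setpoint tracks large deviations

def update(x, setpoint, dt=1.0):
    error = x - setpoint
    if abs(error) <= THRESHOLD:
        x -= dt * K_RESTORE * error          # homeostatic correction
    else:
        setpoint += dt * K_SHIFT * error     # elastic shift of the equilibrium
        x -= dt * K_RESTORE * (x - setpoint)
    return x, setpoint

x, setpoint = 0.0, 0.0
x += 3.0                        # small perturbation: corrected back
for _ in range(200):
    x, setpoint = update(x, setpoint)
small_shift = setpoint          # setpoint unchanged

x += 20.0                       # large perturbation: crosses the threshold
for _ in range(200):
    x, setpoint = update(x, setpoint)
# setpoint has moved toward the perturbed state: a new stable equilibrium
```

This keeps the two regimes of the hypothesis explicit (correct vs. shift) instead of relying on a single linear map, and it is simple enough to analyze in the 2-3 neuron test cases proposed above.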
Educational Value
This project demonstrates: novel hypothesis formation, computational neuroscience implementation, recognizing theoretical flaws, and planning refinements. The mixed results highlight important lessons about the relationship between mathematical formulation and conceptual ideas.
Status
September 2024: Initial implementation complete
Current: Not actively developed - might be worth revisiting with reformulated approach
Repository: github.com/andrejtetkic/NeuroEvolution