Researchers in Japan have shown that living brain cells can learn to produce precise, repeatable patterns of activity, a task normally associated with artificial neural networks. Their work, described in the paper “Online supervised learning of temporal patterns in biological neural networks under feedback control” and summarized in “Living brain cells enable machine learning computations,” represents a step toward computing systems that blend biological and artificial components. The achievement is not simply that neurons fired in interesting ways, but that they were guided to generate specific time-varying signals on command, including rhythms and even chaotic patterns. This required the researchers to bring together cell biology, microengineering, and machine learning in a single closed-loop system.[1]
The foundation of the work is the biological neural network. A biological neural network, or BNN, is a group of living neurons that communicate through tiny electrical pulses. These pulses are the basic language of the brain. When neurons connect to one another, their pulses influence the timing and strength of each other’s activity. Even when no outside input is provided, neurons tend to fire spontaneously. This spontaneous activity is not random. It reflects the internal wiring of the network and the natural tendency of neurons to interact. In the brain, this background activity helps support memory, learning, and prediction. In a dish, it provides a constantly shifting landscape of electrical patterns that can potentially be shaped into something useful.
Artificial neural networks, or ANNs, were originally inspired by these biological systems, although they operate in software or hardware rather than in living tissue. An ANN is made of simple mathematical units that take numbers in, transform them, and pass numbers out. By adjusting the strengths of the connections between units, ANNs learn to perform tasks such as recognizing images or predicting sequences. For tasks involving time, such as speech or motor control, a type of ANN called a recurrent neural network is used. These networks have loops that allow past activity to influence future activity, giving them the ability to generate or predict time-dependent patterns. Although ANNs are simplified compared to real neurons, they borrow the idea that complex behavior can emerge from many simple units interacting.
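As a toy sketch of these ideas (arbitrary sizes and random weights, not anything from the paper), a feedforward layer is just a weighted sum of inputs passed through a nonlinearity, and a recurrent layer adds the loop that lets past activity shape the present:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, W, b):
    """One ANN layer: a weighted sum of inputs passed through a nonlinearity."""
    return np.tanh(W @ x + b)

# Toy feedforward network: 3 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
y = dense(dense(np.array([0.5, -1.0, 2.0]), W1, b1), W2, b2)

# A recurrent layer adds a loop: the hidden state h carries past activity
# forward, which is what lets such networks handle time-dependent patterns.
W_h, W_x = rng.normal(size=(4, 4)), rng.normal(size=(4, 1))
h = np.zeros(4)
for u in [0.1, 0.5, -0.2]:                  # a short input sequence
    h = np.tanh(W_h @ h + W_x @ np.array([u]))
```

Training such a network means adjusting the weight matrices to reduce an error, which is exactly the knob the later sections turn.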
A related class of models, spiking neural networks or SNNs, moves closer to biology by using spikes rather than continuous values. In SNNs, the timing of spikes carries information, much like in real neurons. This makes them attractive for neuromorphic hardware, which aims to build energy efficient systems that compute more like brains. SNNs can generate complex rhythms and even chaotic patterns when trained appropriately. The question that motivated the Japanese research team was whether similar training methods could be applied to actual living neurons.
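A minimal leaky integrate-and-fire neuron, the usual building block of SNNs, shows how the timing of discrete spikes, rather than a continuous value, carries the signal (all constants here are illustrative, not taken from the paper):

```python
import numpy as np

def lif_spikes(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: membrane voltage leaks toward the
    input current; crossing threshold emits a spike and resets the voltage."""
    v, spikes = 0.0, []
    for t, I in enumerate(current):
        v += dt / tau * (-v + I)     # leaky integration of the input
        if v >= v_thresh:
            spikes.append(t)         # the spike *time* is the information
            v = v_reset
    return spikes

times = lif_spikes(np.full(200, 1.5))  # constant drive -> regular spike train
```

With a constant input the spike train is perfectly regular; modulating the input modulates the spike timing, which is the currency both SNNs and living neurons compute with.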
One method that has been proven effective in artificial systems is reservoir computing. In this approach, a recurrent network with fixed internal connections serves as a reservoir that transforms input signals into a rich internal state. Only the output layer is trained, which simplifies learning. A useful analogy is a bowl of water. If you tap the surface in different ways, the ripples interact and create complex patterns. By placing sensors in the water and learning how to interpret the ripples, you can use the bowl as a kind of computer. You do not redesign the water itself. You simply learn how to read it. In reservoir computing, the recurrent network plays the role of the water, and the readout layer plays the role of the sensors.
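The bowl-of-water analogy can be made concrete with a minimal echo-state-style sketch (sizes and constants are arbitrary illustrations, not values from the paper): the internal weights are fixed and only a linear readout would ever be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                                     # reservoir size (arbitrary)
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # fixed internal connections
W_in = rng.normal(0, 1, (N, 1))             # fixed input connections

def reservoir_step(x, u, leak=0.1):
    """One update of the fixed reservoir: the 'water' is never redesigned."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

x = np.zeros((N, 1))
states = []
for t in range(500):
    u = np.array([[np.sin(0.1 * t)]])       # the input 'taps' on the surface
    x = reservoir_step(x, u)
    states.append(x.copy())

# Reading the ripples: a linear readout w_out would map the state to an
# output, y = w_out.T @ x, and only w_out is ever learned.
```

The point of the architecture is visible in the code: the loop never touches `W` or `W_in`, so all the learning burden falls on the cheap linear readout.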
To train the readout layer in real time, researchers often use a method called First-Order Reduced and Controlled Error, or FORCE learning. FORCE learning continuously compares the network’s output to a target signal and adjusts the output weights to reduce the error. It is like teaching someone to clap along with a rhythm by giving immediate feedback. The adjustments happen quickly and incrementally, allowing the system to adapt as it generates the signal. FORCE learning has been used to train artificial networks to produce sine waves, square waves, and even chaotic trajectories. The open question was whether a living network, with its biological variability and spontaneous activity, could be trained in the same way.
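A compact way to see FORCE at work is a toy rate network in the style of Sussillo and Abbott's original formulation, trained to produce a sine wave (all sizes and constants are illustrative; this is the artificial-network version, not the paper's biological setup). The weight update is recursive least squares applied at every time step while the output is fed back into the network:

```python
import numpy as np

rng = np.random.default_rng(1)

N, dt = 300, 0.1
g = 1.5                                        # gain giving rich internal dynamics
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))  # fixed recurrent weights
w_fb = rng.uniform(-1, 1, N)                   # output fed back into the network
w = np.zeros(N)                                # readout weights: the only trained part
P = np.eye(N)                                  # running inverse-correlation estimate

x = 0.5 * rng.normal(size=N)
r = np.tanh(x)
z = 0.0
errs = []
for t in range(3000):
    x += dt * (-x + J @ r + w_fb * z)          # network dynamics with output feedback
    r = np.tanh(x)
    z = w @ r                                  # current output
    f = np.sin(2 * np.pi * t * dt / 6.0)       # target: a slow sine wave
    e = z - f                                  # immediate error
    Pr = P @ r                                 # recursive least squares (RLS):
    k = Pr / (1.0 + r @ Pr)                    #   gain vector
    P -= np.outer(k, Pr)                       #   shrink P along the seen direction
    w -= e * k                                 #   nudge readout to cancel the error
    errs.append(abs(e))
```

The “immediate feedback” character of FORCE is the `w -= e * k` line executing on every step while the signal is being generated, rather than after a batch of data.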
The researchers addressed this question by constructing biological neural networks from rat cortical neurons grown on a high-density microelectrode array. This array contained thousands of electrodes that could record spikes and deliver electrical stimulation. To shape the connectivity of the neurons, the team placed a microfluidic film on top of the array. This film contained small wells and narrow channels that guided how neurons grew and connected. The goal was to create modular networks rather than a single dense mass of cells. Modular networks behave like a collection of interacting groups rather than a single synchronized block. This matters because a synchronized block does not provide the rich internal dynamics needed for reservoir computing. A modular network, by contrast, is more like a set of instruments in an orchestra, each contributing its own patterns.
The system operated in a closed loop. The electrodes recorded spikes, which were filtered into smooth signals representing the reservoir state. A linear decoder transformed this state into an output. The output was compared to a target waveform, and FORCE learning adjusted the decoder weights to reduce the error. The output also determined the electrical stimulation sent back into the network, which influenced the next round of activity. This loop ran several times per second, fast enough for the network to adapt in real time.
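The loop described above can be sketched end to end, with a crude random stand-in for the living network (the `simulated_bnn` function, the channel count, and every constant are hypothetical; the real system records and stimulates actual neurons through the electrode array):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # recorded channels (hypothetical size)

def simulated_bnn(state, stim):
    """Stand-in for the living network: activity depends on its own past
    activity and on the stimulation it receives."""
    drive = 0.9 * state + 0.3 * stim + 0.1 * rng.normal(size=N)
    return np.clip(drive, 0.0, None)     # non-negative "firing rates"

def lowpass(prev, spikes, alpha=0.2):
    """Filter spiking activity into the smooth reservoir-state signal."""
    return (1 - alpha) * prev + alpha * spikes

w = np.zeros(N)                          # linear decoder (the trained part)
P = np.eye(N)                            # RLS inverse-correlation estimate
state, r = rng.random(N), np.zeros(N)
for t in range(2000):
    r = lowpass(r, state)                # 1. record spikes, filter to a smooth state
    z = w @ r                            # 2. decode an output from the state
    f = np.sin(2 * np.pi * t / 100)      # 3. target waveform
    e = z - f                            # 4. compare output to target
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k                           # 5. FORCE-style decoder update
    stim = np.clip(np.full(N, z), -5, 5) # 6. bounded stimulation from the output
    state = simulated_bnn(state, stim)   # 7. network responds; the loop repeats
```

The essential property is that step 6 closes the loop: the decoded output shapes the stimulation, which shapes the next reservoir state, so the network and the decoder adapt to each other in real time.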
Using this setup, the researchers trained biological networks to generate a variety of time series. They produced sine waves with periods ranging from a few seconds to half a minute, as well as triangle waves and square waves. They also trained the networks to approximate the Lorenz attractor, a classic example of chaotic dynamics. During training, the patterned networks closely followed the target signals. After training, when weight updates were turned off but feedback stimulation continued, the networks often maintained the learned oscillations, although with some drift. The ability to reproduce both smooth rhythms and chaotic patterns suggests that living networks can serve as flexible computational reservoirs when guided appropriately.
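For readers unfamiliar with the chaotic target, the Lorenz system can be integrated in a few lines to produce the kind of time series the networks were trained to approximate (standard textbook parameters; this is a generic sketch of how such a target is built, not the paper's code):

```python
import numpy as np

def lorenz_series(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler integration of the Lorenz system, a classic chaotic attractor."""
    x, y, z = 1.0, 1.0, 1.0
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

target = lorenz_series(5000)
# e.g. use one component, rescaled to a convenient range, as the target signal
f = target[:, 0] / np.abs(target[:, 0]).max()
```

Because nearby Lorenz trajectories diverge exponentially, reproducing this signal is a much harder test than a sine wave: the network must capture the shape of the attractor, not just a repeating rhythm.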
The findings suggest that biological neural networks can serve as physical reservoirs with unique properties. Unlike artificial reservoirs, living networks are plastic. They change over time. This plasticity can be a challenge for stability, but it may also allow the system to adapt in ways that artificial networks cannot. The researchers found that networks with higher firing rates and lower correlations performed better, and that networks with more complex internal dynamics supported more accurate learning. These observations point to design principles for future biological computing systems.
The research team acknowledges several limitations and proposes directions for improvement. One challenge is stability after training. Performance often degraded once weight updates stopped. Variants of FORCE learning might improve robustness. Another challenge is feedback delay. The closed-loop cycle limited the system’s ability to reproduce very fast changes or sharp corners in signals. Reducing latency through specialized hardware or predictive compensation could help. The team aims to improve stability and reduce delays in future versions of the system.
The potential benefits of this work extend beyond computing. Because the neurons are real, the system could be used to study drug responses or model neurological disorders. It could also inform brain-machine interfaces, where shaping and decoding neural activity in real time is essential. The work also contributes to neuromorphic computing, which seeks to build energy efficient systems inspired by the brain. Living neurons operate with tiny amounts of power compared to conventional chips, suggesting that biological reservoirs might one day complement artificial ones in specialized applications.
The study demonstrates that living neurons can be guided to perform a supervised learning task, generating predictable temporal patterns through a combination of structured connectivity and real time feedback. It shows that ideas developed for artificial networks can be applied to biological ones, and that the natural dynamics of living tissue can be harnessed for computation. As the field advances, biological and artificial systems may increasingly complement one another, offering new ways to build machines that learn, adapt, and compute with the fluidity of living matter.
This article is shared at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information (CTI) via a notification/Tier I analysis service (RedXray) or an analysis service (CTAC). For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com
Weekly Cyber Intelligence Briefings:
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5207428251321676122
[1] https://six3ro.substack.com/p/living-neurons-learn-to-compute-brain