Purkinje cells (image credit: Robert Luck, European Center for Angioscience, Medical Faculty Mannheim, Heidelberg University)

An outline for Brain Plasticity: It is not just synapses

Gabriele Scheler
Apr 10, 2021


Memory has been extensively studied in neurobiology under the topics of consolidation and reconsolidation, extinction learning, retrieval, and memory loss. Information retained in biological memory needs to be accessed, rearranged, transformed, used, stored away, etc. To begin to discuss these complex concepts on a theoretical basis, we need to adopt a new paradigm beyond LTP/LTD (long-term potentiation or depression; “neurons that fire together wire together”). The current theoretical landscape is inadequate. Exactly why LTP/LTD and the concept of synaptic plasticity fail is not our focus here; we rather want to sketch out an alternative. There are two main reasons to step back, reset, and analyze neural plasticity in its full complexity:

  1. It is important to organize knowledge. Outdated theories that do not cover the known facts well are harmful; better theories provide guidance for new experiments.
  2. Theories covering results from neurobiology allow us to build better cognitive models. These have applications in medicine as well as in artificial intelligence.

When reviewing and structuring known facts about plasticity in (animal) brains, it turns out that the material is not only vast but also highly unstructured, and systematic experiments are rare. High-level, abstract hypotheses induced from the material are needed, outlining how existing experimental data fit into the paradigm.

We suggest leading with the hypothesis that a neuronal cell operates with a central storage element, located in the nucleus, and a periphery where properties are expressed and where information processing is performed in concert with the surrounding neural tissue. Adaptations occur transiently at the periphery but are largely permanent in central storage. Signals are filtered before they reach central storage, and transient peripheral adaptivity is continually re-adjusted based on central storage information.
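To make this picture concrete, here is a minimal toy sketch in Python. The class name, update rules and thresholds are our own illustrative assumptions, not claims about the underlying biology: a unit with a slowly changing central store, a fast transient periphery, a filter gating what gets consolidated, and a continual pull of the periphery back toward the store.

```python
import numpy as np

class TwoCompartmentNeuron:
    """Toy unit: permanent "central store" plus transient "periphery" (illustrative only)."""

    def __init__(self, n_params, consolidation_threshold=0.5, decay=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.central = rng.normal(size=n_params)   # slow, near-permanent storage ("nucleus")
        self.peripheral = self.central.copy()      # fast, transient expression ("periphery")
        self.threshold = consolidation_threshold   # filter gating what reaches the central store
        self.decay = decay                         # pull of the periphery back toward central storage

    def process(self, signal, fast_lr=0.5, slow_lr=0.01):
        # 1) Transient peripheral adaptation: the periphery follows the signal quickly.
        self.peripheral += fast_lr * (signal - self.peripheral)
        # 2) Filtering: only large deviations from the central store are consolidated, and slowly.
        deviation = self.peripheral - self.central
        gate = np.abs(deviation) > self.threshold
        self.central[gate] += slow_lr * deviation[gate]
        # 3) Re-adjustment: the periphery is continually pulled back toward the central store.
        self.peripheral -= self.decay * deviation
        return self.peripheral  # what the surrounding network "sees"

neuron = TwoCompartmentNeuron(n_params=4)
for _ in range(50):
    neuron.process(signal=np.array([2.0, 0.0, 0.0, 0.0]))
# The first central parameter drifts slowly toward 2.0 (a small, persistent change),
# while the peripheral values change quickly and are continually re-anchored.
```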

A neuron or ensemble of neurons is regarded as a self-organizing system that tends toward an optimal state, where “optimal” is defined by an optimization function with respect to its environment. This optimization may also be defined as an “embedding”, where large numbers of parameters together define the properties of the neuron or ensemble within the context of a neuronal network. If such an optimization is imposed on a neuron, it needs to adjust its parameters and “self-organize” in order to fulfill the optimization function, since its environment is constantly changing. This environment includes network-internal as well as external data input (e.g. sensory input). The optimization happens on several time scales, such that temporary adaptation blends into learning (memory). This temporal structuring provides the background for filtering and integrating information to construct memories.
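As a rough illustration of adaptation on several time scales (a toy construction with arbitrarily chosen time constants, not a model from the experimental literature), one can chain leaky traces so that a brief input produces a large but short-lived change in the fast trace and a small but persistent change in the slow one:

```python
import numpy as np

def run_cascade(inputs, taus=(1.0, 10.0, 100.0), dt=1.0):
    """Each trace relaxes toward the one before it; the first follows the input."""
    rates = dt / np.asarray(taus)
    traces = np.zeros(len(taus))
    history = []
    for x in inputs:
        targets = np.concatenate(([x], traces[:-1]))   # trace i tracks trace i-1
        traces = traces + rates * (targets - traces)
        history.append(traces.copy())
    return np.array(history)

# A brief burst of input followed by silence:
inputs = np.concatenate([np.ones(20), np.zeros(200)])
history = run_cascade(inputs)
print(history[25], history[-1])  # shortly after the burst vs. at the end:
# the fast trace has already decayed, while the slowest trace retains a small residue.
```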

The main ideas concern self-programming, i.e. neurons as self-programming modules that learn from their environment. An ANN can be seen as a collection of self-programming functions that use data to set synaptic weights and, sometimes, activation functions. Biological neurons separate the central storage element from the processing periphery. Central storage is a combination of genetic/developmental cellular identity and filtered data processing.

This requires an evolutionary learning procedure that creates genetic identities of neurons, and separate brain areas with different types of neurons. Additionally, it requires an ontogenetic learning procedure that continually processes internal and external data and is subject to optimization as outlined above.

Neurons may be seen as cooperating, self-optimizing units, similar to agents in multi-agent models. From this we can derive the idea of stacking self-programming units. We may also gain ideas about dynamic modularization within a large network: dynamic modules and dynamic assignment of neurons to modules, as in the sketch below.
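One way to picture dynamic assignment of neurons to modules is the following sketch (the correlation threshold and the greedy grouping rule are arbitrary illustrative choices): units are regrouped according to how similar their recent activity has been, and the grouping can be recomputed as activity changes.

```python
import numpy as np

def assign_modules(activity, threshold=0.8):
    """activity: (n_units, n_timesteps). Greedily group units whose activity
    correlates above `threshold` with a module's founding member."""
    corr = np.corrcoef(activity)
    modules, assigned = [], set()
    for i in range(activity.shape[0]):
        if i in assigned:
            continue
        members = [i] + [j for j in range(activity.shape[0])
                         if j not in assigned and j != i and corr[i, j] > threshold]
        assigned.update(members)
        modules.append(members)
    return modules

# Example: two latent signals drive two groups of units; a third group is pure noise.
rng = np.random.default_rng(1)
latents = rng.normal(size=(2, 500))
activity = np.vstack([latents[0] + 0.1 * rng.normal(size=(3, 500)),
                      latents[1] + 0.1 * rng.normal(size=(3, 500)),
                      rng.normal(size=(2, 500))])
print(assign_modules(activity))  # units 0-2 and 3-5 form modules; units 6 and 7 stay separate
```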

The key lesson is that there is no need to ignore vast areas of neurobiology that do not fit the existing LTP/LTD paradigm; it is more useful to search for a new paradigm. Currently, machine learning is often equated with adjusting parameters, but natural learning and memorization are concerned with building and maintaining structures (the hard problem), while setting parameters is a secondary and easier issue to solve. Creating structures is at least partly handled by evolutionary-scale mechanisms, which build a functioning brain that requires only minimal and highly restricted data to develop.

While adjusting synaptic strength can perform function approximation in various forms, such as perceptual categorization, the whole set of parameters available in cellular plasticity opens up the possibility of building more complex cognitive skills from individual elements.

Information storage in the nucleus is of central importance. It has high permanence, it controls the periphery, and it is shielded very effectively from peripheral signals, which are filtered through complex and evolutionarily highly conserved mechanisms. Filtering and integration are two tasks performed by the temporal structuring of neural plasticity mechanisms. They crucially rely on cellular mechanisms and cellular identities that exist at birth. Neuronal identity and the microstructure of brain areas have developed over evolutionary time scales. This is a process very different from optimizing parameters in response to data input. We need to build highly structured units from diverse neurons; only then can we apply machine learning techniques to fine-tune them with new data.
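As a cartoon of this division of labor (structure first, parameter fitting second; every name and number in the example is a made-up assumption), one can fix the wiring of a small unit in advance and fit only a handful of gain parameters to data:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Developmental" structure: a fixed wiring pattern and fixed weights (4 outputs, 8 inputs).
mask = np.array([[1, 1, 0, 0, 0, 0, 1, 0],
                 [0, 0, 1, 1, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1, 1, 0, 0],
                 [0, 1, 0, 1, 0, 0, 1, 1]], dtype=float)
weights = mask * rng.normal(size=mask.shape)

# Ontogenetic fine-tuning: only per-output gains are fit to new data.
gains = np.ones(mask.shape[0])
X = rng.normal(size=(100, mask.shape[1]))
y = X @ (2.0 * weights).T                 # toy targets consistent with the fixed wiring

hidden = X @ weights.T
for _ in range(2000):                     # plain gradient descent on the gains only
    grad = np.mean(2.0 * (hidden * gains - y) * hidden, axis=0)
    gains -= 0.01 * grad
print(gains)                              # gains approach 2.0; the wiring itself never changed
```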


Gabriele Scheler

Computer scientist and AI researcher turned neuroscientist, supporting a non-profit foundation, Carl Correns Foundation for Mathematical Biology.