On the role of Theta oscillations in the hippocampus 

by Paul Verschure

To celebrate the new year and the publication of the paper “Theta oscillations optimize a speed-precision trade-off in phase coding neurons” by specs-lab.com members Adrian Amil, Albert Albesa González, and myself, I thought it would be helpful to provide some context on where the specific question we answer about the role of Theta oscillations in the hippocampus came from. The roots of this paper lie in the Distributed Adaptive Control theory of mind and brain (DAC), which has been at the center of the theoretical and empirical work in my research group SPECS over the last 30-odd years. Robot models of DAC, realized with Thomas Voegtlin, predicted in 1998 that real-world optimal decision-making operates on conjunctive sensory-action representations. These representations are consolidated in memory as sequences that conserve the serial order of sensory-motor events. In other words, you store events in the sequence in which you encounter them and link them to the goals that you achieve. The basic idea goes back to the intellectual tradition of decomposing behavior into reflexes and to the early connectionist and behaviorist thinking of Pavlov, Thorndike, Watson, and others. Yet, to have an idea is one thing. Understanding its implications and establishing it as a biological and psychological fact is another challenge, one that entails obtaining evidence to support the idea.

The notion of sequential memory that DAC proposed can be mapped directly onto the hippocampus, a brain structure critical for episodic memory. The question was whether this structure constructs the conjunctive perception-action representations DAC predicted. My dearly missed friend, the late John Lisman, proposed that this could be the case, as the input to the hippocampus from the entorhinal cortex is organized as a dual perception and action stream originating in the lateral and medial entorhinal cortex (LEC and MEC, respectively).

With these ideas in mind, we have investigated various aspects of the hippocampus using theoretical models. Along this path were some discoveries of note, such as the first demonstration, by Alexis Guanella in 2006, that the Nobel prize-winning grid cells of the medial entorhinal cortex can be explained by a continuous attractor model implementing a twisted torus topology, a result that has been consistently confirmed experimentally ever since. Consistent with John’s hypothesis, this model showed that the change in grid cell activity can be seen as an action vector, covering the action part of the conjunctive representation. With John Lisman and Cesar Renno Costa (2010), we explained continuous rate remapping in CA3 in terms of the integration of conjunctive “what” and “where” pathways originating in the lateral and medial entorhinal cortex, confirming the prediction made in 1998. In the same collaboration, we “rescued” the continuous attractor model of CA3 memory (2014): if the firing rate of hippocampal neurons changes continuously with changes in the environment, this seems to violate the discrete transitions an attractor memory should display, and we showed how the two can be reconciled. More recently (2021), with Diogo Santos-Pata, Ivan Soltesz, Anna Mura, Adrian Amil, and others, we captured these principles in a comprehensive theoretical model of the entorhinal cortex and hippocampus, building on the notion of a variational autoencoder. This model showed that the self-regulation of learning, and thus epistemic autonomy, can emerge through the backpropagation of error signals generated by an entorhinal comparator, proposed earlier by Gyorgy Buzsaki, and propagated backward into the hippocampus by a network of inhibitory neurons that were known to exist but whose function was still a mystery. Hippocampal learning thus appears to follow gradient descent via countercurrent inhibition.
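For readers who want a feel for what a continuous attractor on a twisted torus looks like, here is a minimal sketch in Python. It is not the published Guanella model; the connectivity, update rule, and all parameter values are simplified and purely illustrative. It is only meant to show the core idea: local excitation wrapped on a twisted torus lets an activity bump form, and a velocity (“action”) input shifts that bump, so each unit’s firing traces a hexagonal, grid-like pattern as the bump wraps around.

```python
import numpy as np

# Minimal, illustrative sketch of a continuous attractor on a twisted torus.
# NOT the published model: parameters and dynamics are simplified for clarity.

N = 20                                              # neurons per side of the sheet
xs, ys = np.meshgrid(np.arange(N) / N,              # cell coordinates on a sheet of
                     np.arange(N) / N * np.sqrt(3) / 2)  # width 1, height sqrt(3)/2
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Twisted-torus identifications: wrapping the sheet with these offsets is what
# gives the resulting firing fields their hexagonal symmetry.
shifts = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0],
                   [0.5,  np.sqrt(3) / 2], [-0.5,  np.sqrt(3) / 2],
                   [0.5, -np.sqrt(3) / 2], [-0.5, -np.sqrt(3) / 2]])

def twisted_torus_dist(d):
    """Shortest length of displacement vectors d under the torus wrapping."""
    cand = np.linalg.norm(d[:, None, :] + shifts[None, :, :], axis=-1)
    return cand.min(axis=1)

def recurrent_weights(velocity, gain=0.05, intensity=0.3, sigma=0.24, inhibition=0.05):
    """Local excitation on the twisted torus, offset by the velocity input
    (the 'action vector'), minus uniform global inhibition."""
    d = coords[:, None, :] - coords[None, :, :] - gain * velocity
    dist = twisted_torus_dist(d.reshape(-1, 2)).reshape(N * N, N * N)
    return intensity * np.exp(-dist ** 2 / sigma ** 2) - inhibition

rng = np.random.default_rng(0)
a = rng.random(N * N)                               # initial random activity
W = recurrent_weights(velocity=np.array([1.0, 0.0]))  # constant rightward motion

for _ in range(200):
    b = a + a @ W                                   # recurrent drive, biased by velocity
    b = np.clip(b, 0.0, None)                       # firing rates cannot be negative
    a = b / (b.mean() + 1e-9)                       # divisive normalization stabilizes a bump

# 'a' now holds a localized activity bump that drifts with the velocity input;
# as it wraps around the twisted torus, single units fire on a hexagonal grid.
print(a.reshape(N, N).round(1))
```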

Having a model of the learning dynamics of the hippocampus derived from the autoencoder, the question became how the content of episodic memory is constructed. With Reto Wyss and Peter Konig, we showed that place cells can emerge from bounded visual invariance (2003), which, in turn, is driven by only two computational principles: sparsity and decorrelation (2006). Combining these principles, which drive perceptual learning in the ventral stream feeding the LEC channel of the hippocampus, with the autoencoder (2024) revealed that place cells can arise from discretizing and tiling sensory input spaces. This process creates time-independent representations of experience, transcending the need for temporal correlations arising from position- and path-dependent sampling of the environment. Earlier, I speculated that this independence from real-world dynamics, or virtualization, is a hallmark of consciousness and volition, freeing the agent from the signals originating in the external world and creating representations that serve the internal models of the mind.
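As a toy illustration of the discretize-and-tile idea, the sketch below (again, not the published model; all names and parameter values are invented for illustration) uses plain winner-take-all competitive learning. Sparsity (one winner per input) and decorrelation (units compete and are pushed apart) are already enough to make a handful of units tile a continuous sensory space with localized, place-field-like tuning.

```python
import numpy as np

# Toy sketch: winner-take-all competitive learning tiles a continuous 2-D
# "sensory" space with localized units, i.e. place-field-like tuning emerging
# from sparsity and decorrelation alone. Illustrative only.

rng = np.random.default_rng(1)
n_units = 16
prototypes = rng.random((n_units, 2))          # initial random tuning centers
lr = 0.05                                      # learning rate

for _ in range(20000):
    x = rng.random(2)                          # a sampled sensory observation
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    # Only the winning unit moves toward the input: a sparse, local update.
    prototypes[winner] += lr * (x - prototypes[winner])

# The prototypes now tile the unit square; a unit's "firing rate" for input x
# can be read out as a Gaussian of the distance to its prototype.
def firing_rates(x, width=0.15):
    return np.exp(-np.sum((prototypes - x) ** 2, axis=1) / width ** 2)

print(firing_rates(np.array([0.3, 0.7])).round(2))
```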

 

An interesting footnote is that the 2003-2006 ventral stream deep-learning model preceded the neural network AI revolution by about 10 years. This model could learn invariant representations by watching TV shows (Reto tested it on “24”), but we were not satisfied with its assumption of a global learning objective and gradient. The brain does not have such a luxury. We were after a model that explains the brain’s learning dynamics, not “brain-inspired” image classifiers, so we moved on. The TICS 2021 paper outlines what such a biologically grounded learning system might look like, while the DAC theory proposes how such a memory system is embedded in the overall brain architecture and what its function might be, i.e., virtualization.

So far, we have looked at the learning dynamics of the hippocampus and its cortical inputs, building on models of the LEC, MEC, and the hippocampus itself, but where are the oscillations? More specifically, Theta oscillations. In 2002, Reto Wyss, Peter Konig, and I proposed that cortical networks could encode information in the phase relationships between neuronal spikes, a scheme we called the temporal population code (TPC). The TPC model also contributed to the ventral stream and LEC models described above and stimulated us to rethink neuronal coding. Indeed, with Andre Luvizotto and Cesar Renno Costa, we showed (2012) that TPC-based models can be integrated into coding hierarchies similar to the deep-learning ventral stream model of 2006. We just didn’t figure out the learning dynamics (yet). In particular, TPC resonated with John Lisman’s concept of a Theta-Gamma code, advanced in the early 1990s, which proposes that slow oscillations in the Theta range (3-8 Hz) act as an orchestrator of fast neural activity encoding memory items in the Gamma range (>30 Hz). We have been able to test these ideas in human epilepsy patients. With Daniel Pacheco, Riccardo Zucca, Nicolai Axmacher, Rodrigo Rocamora, and others, we showed in 2021 that when humans navigate a virtual reality environment, oscillations in the Theta band (3-8 Hz) emerge that correlate with the quality of their memory. This correlation between voluntary action and the amplitude of Theta oscillations is consistent with the animal literature. We also showed new and unique features of Theta oscillations, such as a distinct organization of encoding, recall, and even semantics in the phase of the Theta oscillation. In experiments with the same team and Diogo Santos-Pata, we also showed that the Theta frequency increases in proportion to information content (will you ever get this properly published, Diogo?). Theta appears fundamental to the organization of memory, as John Lisman recognized, which leads to the obvious question: why Theta? Is Theta some universal constant in the way evolution has constructed the brain? Now, Adrian Amil and Albert Albesa González have, through a very elegant and sophisticated analysis, answered this question: Theta oscillations optimize a speed-precision trade-off in memory encoding. Given the biophysics of the brain, the dynamics of system-environment interaction, and the resulting information needs, the speed-precision trade-off is best solved by sampling the hippocampus’s LEC and MEC input streams in the Theta range.
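To make the Theta-Gamma arithmetic concrete, here is a back-of-the-envelope sketch (the frequency values are illustrative and are not taken from our paper): with Theta at around 6 Hz and Gamma at around 40 Hz, roughly six to seven Gamma cycles nest within one Theta cycle, so a short sequence of items can be read out from the Theta phase at which each item’s cells fire.

```python
import numpy as np

# Back-of-the-envelope sketch of the Theta-Gamma code (values illustrative):
# items occupy successive Gamma cycles nested within one Theta cycle, so the
# Theta phase of a spike identifies an item's position in the sequence.

theta_hz = 6.0        # within the 3-8 Hz Theta band
gamma_hz = 40.0       # lower end of the Gamma band (>30 Hz)

slots_per_theta = int(gamma_hz / theta_hz)   # ~6-7 Gamma slots per Theta cycle
print("gamma slots per theta cycle:", slots_per_theta)

# Encode a short sequence: item k fires in Gamma slot k of the Theta cycle.
items = ["A", "B", "C", "D", "E"]
theta_period = 1.0 / theta_hz
gamma_period = 1.0 / gamma_hz
spike_times = {item: k * gamma_period for k, item in enumerate(items)}

# Decode by reading the Theta phase of each spike.
for item, t in spike_times.items():
    phase = 2 * np.pi * (t % theta_period) / theta_period
    slot = int(round(phase / (2 * np.pi) * gamma_hz / theta_hz))
    print(f"item {item}: theta phase {phase:.2f} rad -> sequence position {slot}")
```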

In a recent paper, “The unbearable slowness of being: Why do we live at 10 bits/s?”, Jieyu Zheng and Markus Meister wonder why the brain would operate at such a slow information rate. Our paper provides part of the answer: Theta oscillations are one of the “constraints that deconstrain” brain function, a concept linking control theory and biology that was elegantly developed by John Doyle and Marie Csete. For the rest, consult the paper and let us know what you think about it.

 

For an audio summary, here is a Google Gemini-generated podcast that does a decent job introducing the article.