Special Session

Srinivas C Turaga (HHMI Janelia Research Campus)
Simulating the brain and body of the fruit fly
We now have connectomes of the entire fruit fly nervous system at single-neuron resolution. How do we translate these new measurements into an understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method that makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a new whole-body physics simulation of the fruit fly that can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind. Together, these methods and models point the way towards a whole-animal simulation of the fruit fly capable of predicting the neural circuit mechanisms underlying all fly behavior.
[1] Lappalainen JK, Tschopp FD, Prakhya S, McGill M, Nern A, Shinomiya K, Takemura S-y, Gruntman E, Macke JH, and Turaga SC. Connectome-constrained networks predict neural activity across the fly visual system. Nature, 2024.
https://www.nature.com/articles/s41586-024-07939-3
[2] Vaxenburg R, Siwanowicz I, Merel J, Robie AA, Morrow C, Novati G, Stefanidi Z, Card GM, Reiser MB, Botvinick MM, Branson KM, Tassa Y†, and Turaga SC†. Whole-body simulation of realistic fruit fly locomotion with deep reinforcement learning. Nature, 2025.
https://www.nature.com/articles/s41586-025-09029-4
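As a rough, hypothetical sketch of the connectome-constrained idea in [1] (not the authors' code): the network's sparsity pattern and synaptic signs are frozen to measured connectivity, and only nonnegative synaptic scales are trained so that simulated responses can be compared with recordings. All sizes and names below are illustrative.

```python
import numpy as np

# Hypothetical toy: connectivity mask A and synaptic signs s are treated as
# measured from a connectome and frozen; only positive scales are trainable.
rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)   # measured sparsity pattern
s = np.where(rng.random(n) < 0.7, 1.0, -1.0)   # presynaptic sign (E or I)
log_w = np.zeros((n, n))                       # the only trainable parameters

def weights(log_w):
    # Column j carries the sign of presynaptic neuron j; zeros stay zero.
    return A * s[None, :] * np.exp(log_w)

def step(r, x, log_w, dt=0.1, tau=1.0):
    """One Euler step of a rate network: tau dr/dt = -r + relu(W r + x)."""
    W = weights(log_w)
    return r + (dt / tau) * (-r + np.maximum(0.0, W @ r + x))

# In the logic of ref. [1], log_w would be tuned so simulated visual responses
# solve a task (e.g. motion estimation); per-neuron activity is then compared
# against calcium imaging of the same identified cell types.
r = np.zeros(n)
for _ in range(100):
    r = step(r, x=rng.random(n), log_w=log_w)
```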

Xiaoyin Chen (Allen Institute)
Uncovering circuit wiring rules and variations using barcoded connectomics
Neurons with diverse morphology and gene expression are wired into complex neural circuits. These circuits are specialized across species to enable behaviors best suited to each animal’s ecological niche. Uncovering the wiring rules of diverse neuron types and how these rules adapt across species provides a foundation for understanding how human brains support distinctly human abilities, such as complex cognition and social reasoning. However, mapping and comparing circuit wiring across species at single-cell resolution remains a tremendous challenge. My lab addresses this challenge by developing in situ sequencing and barcoded connectomics tools. These tools are scalable to brain-wide interrogation across populations, can be deployed by individual labs, and can reveal molecular and circuit variations with unprecedented detail. In this talk, I will provide an overview of the principles of barcoded connectomics, followed by three case studies in which we use these techniques to reveal how the visual cortex and thalamus develop and adapt from rodents to primates.
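To make the barcoding principle concrete, here is a toy, hypothetical sketch in the spirit of barcode-based projection mapping (MAPseq/BARseq-style logic): each neuron carries a unique RNA barcode, and barcode reads recovered in distant target regions are matched back to somata to read out projections. Real pipelines involve molecule counting and explicit sequencing-error models; everything below is illustrative.

```python
# Toy sketch of barcode-based projection mapping (MAPseq/BARseq-style logic).

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Barcodes recovered at somata (one unique barcode per neuron) ...
soma = {"n1": "ACGTACGT", "n2": "TTGGCCAA", "n3": "GACTGACT"}
# ... and barcode reads recovered from axons in two target regions.
targets = {
    "thalamus": ["ACGTACGA", "GACTGACT"],   # first read has one sequencing error
    "striatum": ["TTGGCCAA"],
}

# Assign each target read to the closest soma barcode within an error
# tolerance; the result is a neuron-by-region projection map.
projections = {n: set() for n in soma}
for region, reads in targets.items():
    for read in reads:
        best = min(soma, key=lambda n: hamming(soma[n], read))
        if hamming(soma[best], read) <= 1:   # tolerate one mismatch
            projections[best].add(region)

print(projections)  # {'n1': {'thalamus'}, 'n2': {'striatum'}, 'n3': {'thalamus'}}
```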

Gisella Vetere (ESPCI Paris)
Dissecting a Fear Memory Engram
How does the brain process external information and encode it into lasting memories? A key question in neuroscience is which cells represent a memory, when they are engaged, and how they contribute to memory expression. In my laboratory, we investigate the formation and consolidation of associative memories within neural networks.
By visualizing and tagging neurons based on their calcium influx with unparalleled temporal precision, we identified distinct, non-overlapping dorsal CA1 neuronal ensembles that are differentially active during associative fear memory acquisition. We dissected the learning process into key temporal phases, correlating neuronal activity with salient stimuli and specific behavioral events. Our findings reveal that neurons activated during distinct acquisition periods are not only involved in learning but are also sufficient to drive memory expression. Importantly, we identified the core engram cells essential for memory formation and recall.
The study I will present provides novel insights into the neural basis of memory and advances our understanding of how experiences are encoded in the brain.
Special Topic Session

Michael Breakspear (The University of Newcastle)
The Hippocampus as a Latent Diffusion Engine for the Brain
The hippocampus is a cognitive hub whose functions are disrupted in most major neurodegenerative dementias. Despite substantial knowledge of its anatomy, physiology, and circuitry, a unifying account that links hippocampal computations, functions and clinical manifestations is lacking. Drawing on recent advances in generative artificial intelligence and systems neuroscience, we conceptualize the hippocampus as a latent diffusion engine that compresses sensory, internal, and cognitive inputs into low-dimensional representations and then regenerates percepts, episodic memories, and imagined scenes for cortical integration. Travelling waves of cortico-hippocampal oscillations, aligned with large-scale functional gradients, schedule and structure this generative process, organizing distributed neural activity into coherent, semantically grounded constructs. Specifically, we propose a stochastic latent oscillatory diffusion (SLOD) framework that maps specific hippocampal–cortical computations onto biological substrates and dynamical processes. The computational architecture mirrors the dominant text-to-image generative AI algorithms (specifically latent diffusion models), adapted and translated into the biological embedding of the cortex and hippocampus. Finally, we demonstrate how major neurodegenerative dementias – including the Alzheimer’s disease spectrum, dementia with Lewy bodies, and the frontotemporal dementia spectrum – can be interpreted as selective breakdowns of the anatomical and computational components of this proposed architecture.
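For readers unfamiliar with latent diffusion models, the following minimal sketch shows the computational loop the SLOD framework draws its analogy from: encode input into a low-dimensional latent, corrupt it with scheduled noise, and iteratively denoise to regenerate a sample. This is a generic illustration of the AI algorithm, not the proposed biological model; the learned noise estimator is omitted and all values are illustrative.

```python
import numpy as np

# Generic latent-diffusion loop (cf. text-to-image LDMs): compress to a
# latent z0, corrupt with scheduled noise, then iteratively denoise.
rng = np.random.default_rng(1)
T = 100
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)                     # cumulative product \bar{alpha}_t

def encode(x):
    """Stand-in for a learned encoder to a low-dimensional latent space."""
    return x[:8]

def forward_noise(z0, t):
    """Closed-form forward process: z_t = sqrt(abar_t) z0 + sqrt(1 - abar_t) eps."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(abar[t]) * z0 + np.sqrt(1.0 - abar[t]) * eps

def denoise_step(z, t, eps_hat):
    """One reverse step given a noise estimate eps_hat (a learned net in LDMs)."""
    z = (z - betas[t] / np.sqrt(1.0 - abar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t > 0:
        z = z + np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z

# In the SLOD analogy, hippocampal-cortical loops would supply the encoder,
# the schedule (travelling waves), and the learned denoiser.
z = forward_noise(encode(rng.standard_normal(32)), T - 1)
```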
Topic Session

Ai Koizumi (Sony Computer Science Laboratories, Inc.)
Reconceptualizing Fear Memory as a Dynamic System: Time, Body, and Interpersonal Context
Fear memory is typically studied as a static process within individual human brains. However, real traumatic experiences unfold over time, reshape defensive bodily responses, and often occur in the presence of others. In this talk, I present three lines of work that reconceptualize fear memory as a dynamic system. First, we show how fear memories reorganize across days following temporally structured aversive episodes (Nature Communications, 2024). Second, we demonstrate that fear expression is reflected in whole-body movement dynamics (iScience, 2024), suggesting that bodily dynamics provide a tractable window into underlying fear memory states and may offer novel entry points for intervention. Third, I introduce ongoing work examining how interpersonal interactions shape fear regulation. Together, these studies expand fear memory across time, body, and social context, framing it as a multiscale dynamic process rather than a static trace.

Gerald Pao (Okinawa Institute of Science and Technology)
Manifold Learning to Help with Your Next Experiment
Recent advances in large-scale recordings of neurons and glia have created new opportunities for big data analysis in neuroscience. Many machine learning methods can decode behavior from neural activity or reconstruct sensory stimuli from brain signals. However, common approaches such as PCA, UMAP, diffusion maps, VAEs, reservoir computing, and neural network decoders often produce latent variables that do not correspond clearly to real neurons or brain regions. These “black box” models make it difficult to design experiments that test and validate the inferred mechanisms. As a result, much of neuroscience data science lacks direct experimental testability.
Here, we present a set of manifold learning algorithms grounded in topology, differential geometry, and dynamical systems theory. These methods decode behavior from neural recordings across many data types and spatial scales, from single neurons to fMRI. Importantly, the results are directly interpretable from the geometry of the data. The identified variables are real observables—neurons, glial cells, or brain areas—making them natural targets for experimental manipulation.
Our approach identifies which neurons or brain regions contain information about specific behaviors and sensory inputs, without introducing latent variables. The methods can also detect whether important variables are missing and estimate how many additional observations may be required. Furthermore, the algorithms can be organized into networks of interacting manifolds that simulate realistic behavioral dynamics based on recorded neural activity.
These tools provide experimentally actionable hypotheses, enabling neuroscientists to move directly from data-driven discovery to targeted experimental validation.
Keywords: computational neuroscience, Takens theorem, time series embedding, explainable AI, manifold learning, Manifold Dimensional Expansion (MDE), Causal Compression (CC), Generative Manifold Networks (GMN)
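As an illustration of two of the keywords above (Takens theorem, time series embedding), the sketch below shows delay embedding and a simplex-style cross-mapping skill measure in the spirit of the interpretable, observable-based methods described. It is a generic textbook construction, not the speaker's algorithms (MDE, CC, GMN).

```python
import numpy as np

# Delay embedding per Takens' theorem: reconstruct an attractor from a single
# observable x(t) as vectors (x_t, x_{t-tau}, ..., x_{t-(E-1)tau}).
def delay_embed(x, E=3, tau=1):
    N = len(x) - (E - 1) * tau
    cols = [x[(E - 1 - k) * tau : (E - 1 - k) * tau + N] for k in range(E)]
    return np.column_stack(cols)              # row t: (x_t, x_{t-tau}, ...)

def cross_map_skill(x, y, E=3, tau=1, k=None):
    """Predict y from x's reconstructed manifold (CCM/simplex-style).
    High skill suggests y is dynamically encoded in the observable x."""
    Mx = delay_embed(np.asarray(x), E, tau)
    y_al = np.asarray(y)[(E - 1) * tau:]      # align y with embedding rows
    k = k or E + 1                            # simplex uses E+1 neighbors
    preds = np.empty(len(Mx))
    for i, v in enumerate(Mx):
        d = np.linalg.norm(Mx - v, axis=1)
        d[i] = np.inf                         # exclude the point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.dot(w, y_al[nn]) / w.sum()
    return np.corrcoef(preds, y_al)[0, 1]     # cross-map skill (correlation)

t = np.linspace(0.0, 60.0, 1500)
print(cross_map_skill(np.sin(t), np.sin(t + 0.5)))   # near 1 for coupled signals
```

Because the embedding coordinates are lagged copies of a measured observable, high skill points directly at a recordable neuron or region as an experimental target, which is the interpretability property the abstract emphasizes.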

Shogo Ohmae (Chinese Institute for Brain Research, Beijing (CIBR))
Brain–AI Convergence: Predictive World Models as a Basis for Multifunctionality
The neocortex and cerebellum are involved in diverse cognitive functions including language, despite exhibiting remarkably homogeneous circuit architectures across functional domains. This suggests that the brain’s multifunctionality may be realized through learning-driven differentiation of functions and internal representations. Interestingly, recent general-purpose AI has also shown that a single architecture can learn to perform a wide range of tasks. From the perspective of brain-AI parallels and their convergent evolution, we investigated the computational principles underlying the brain’s multifunctionality.
First, at the functional level, we constructed an artificial neural circuit reflecting the biological features of the cerebellum and found that when trained on next-word prediction (a known cerebellar function), the circuit spontaneously acquired syntactic processing, a distinct cerebellar function. This parallels how language AI develops advanced language understanding from next-word prediction. Second, at the internal representation level, we investigated whether representations analogous to AI’s seq2vec (i.e., compressing sequence information into a single vector) exist in the brain. We found that cerebellar granule-cell population activity carried sufficient information to decode motor event sequences with high accuracy, suggesting the presence of seq2vec-like sequence representations. Furthermore, simulations with the cerebellar artificial neural circuit demonstrated that such sequence representations can be formed by next-event prediction learning alone. Third, at the computational theory level, our cross-domain brain-AI comparison points to a shared scheme of predictive-world-model-based multifunctionality (prediction, abstraction, and generation) in the neocortex, the cerebellum, and AI.
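As a minimal, hypothetical illustration of the seq2vec idea above (not the cerebellar circuit model used in the study): a recurrent network folds an event sequence into its final hidden state, so the whole sequence is represented as a single population vector.

```python
import numpy as np

# Toy "seq2vec": a fixed random RNN folds an event sequence into its final
# hidden state. In the study, next-event prediction learning (omitted here)
# is what shapes such representations; this only shows the format.
rng = np.random.default_rng(2)
d_in, d_hid = 5, 32                           # event alphabet size, state size
W_in = 0.3 * rng.standard_normal((d_hid, d_in))
W_rec = 0.1 * rng.standard_normal((d_hid, d_hid))

def seq2vec(events):
    """Compress a sequence of event ids into one d_hid-dimensional vector."""
    h = np.zeros(d_hid)
    for e in events:
        x = np.eye(d_in)[e]                   # one-hot event code
        h = np.tanh(W_in @ x + W_rec @ h)     # recurrent update
    return h                                  # the sequence-as-a-vector code

# Different orderings of the same events yield different vectors, so sequence
# identity is decodable from the population state, as in the granule-cell data.
v1, v2 = seq2vec([0, 1, 2]), seq2vec([2, 1, 0])
print(np.linalg.norm(v1 - v2) > 0.0)          # True: temporal order is encoded
```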
Together, these results suggest that biological evolution of the brain and engineering optimization of AI have converged on similar predictive-world-model-based computational principles, providing insights into the essence of brain intelligence.

Chie Hieida (Nara Institute of Science and Technology)
Toward Empathetic Robots: Modeling Emotional Mechanisms and Concept Formation
Emotions play a central role in human intelligence, yet their computational mechanisms and conceptual foundations remain largely unresolved. In this talk, I introduce our research addressing this issue from two perspectives: mechanistic simulation and data-driven modeling grounded in human experience. Both studies were developed in accordance with the theory of constructed emotion. In the first study, we developed a deep-learning-based model that reproduces emotion differentiation within a task simulating interactions between a caregiver and a child. Simulation results revealed the emergence of differentiated emotional states within the internal representations of the proposed model.
In the second study, we modeled the formation of subjective emotion concepts by integrating visual stimuli, physiological signals, and linguistic information collected from multiple participants exposed to emotion-evoking stimuli. Using a multilayered multimodal latent Dirichlet allocation (LDA) model, we demonstrated that the latent categories learned by the model correspond to human subjective emotion categories. Through these studies, I aim to develop robots that possess emotions and ultimately achieve empathy.
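A simplified sketch of the multimodal-LDA principle, assuming each modality is discretized into count features: concatenate the modalities into one bag-of-words per trial and let topics play the role of emotion categories. The study uses a multilayered multimodal LDA; scikit-learn's flat LatentDirichletAllocation is only a stand-in, and all feature names are invented.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# One "document" per trial: concatenated count features from three modalities.
rng = np.random.default_rng(3)
n_trials = 60
vision = rng.poisson(2.0, (n_trials, 10))   # e.g. quantized image features
physio = rng.poisson(1.0, (n_trials, 6))    # e.g. binned heart-rate/SCR codes
words  = rng.poisson(0.5, (n_trials, 20))   # e.g. emotion-word counts

X = np.hstack([vision, physio, words])
lda = LatentDirichletAllocation(n_components=4, random_state=0).fit(X)
theta = lda.transform(X)                    # trial-wise mixtures over topics
print(theta[0])  # per-trial posterior over latent "emotion" categories
```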

Hideaki Shimazaki (Kyoto University)
Population coding under the scaling law of high-dimensional noise
Neural population activity exhibits scale-invariant noise covariance with a high-dimensional eigenspectrum that follows a power law, a phenomenon observed universally across brain regions and animal species. However, its implications for information coding remain unclear. In this talk, we clarify the role of noise covariance scaling in population coding and demonstrate that neural populations in mouse primary visual cortex (V1) can transmit information without bound as population size increases. To this end, we establish a theoretical framework that specifies the scaling conditions of noise covariance that determine whether information is bounded or unbounded. Applying this theory to stimulus-evoked activity of neurons in mouse V1, we show that noise components that scale linearly with population size—those capable of limiting information—are not aligned with the signal direction and therefore do not limit stimulus information. Our results demonstrate that the universal scaling laws observed in neural noise covariance can provide a foundation for elucidating the brain’s information-processing capacity.
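A toy numerical illustration of the bounded-vs-unbounded distinction, assuming the standard linear Fisher information I = f'^T C^{-1} f' for signal derivative f' and noise covariance C: when the noise component whose variance grows linearly with population size N is aligned with f', information saturates; when it is orthogonal, information keeps growing. This construction is generic and hypothetical, not the analysis pipeline applied to the V1 recordings.

```python
import numpy as np

# Toy linear Fisher information I(N) = f'^T C^{-1} f' for growing populations.
# The noise covariance has one eigenvalue growing linearly with N; information
# saturates only when that component is aligned with the signal direction f'.
rng = np.random.default_rng(4)

def fisher(N, aligned, eps2=0.1):
    fp = np.ones(N)                                  # signal derivative f'(s)
    if aligned:
        u = fp / np.linalg.norm(fp)
    else:                                            # unit vector orthogonal to f'
        u = np.linalg.qr(np.column_stack([fp, rng.standard_normal(N)]))[0][:, 1]
    C = np.eye(N) + eps2 * N * np.outer(u, u)        # one noise mode scales like N
    return fp @ np.linalg.solve(C, fp)

for N in (50, 200, 800):
    print(N, round(fisher(N, aligned=True), 2), round(fisher(N, aligned=False), 2))
# Aligned: saturates near 1/eps2 = 10 (information-limiting correlations).
# Orthogonal: grows ~linearly with N, i.e. information is unbounded.
```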