Special Session
Terence D. Sanger (CHOC, UCI School of Engineering, UCI School of Medicine)
[Website]
The Encrypted Brain: Mechanisms of Efficient Neural Hashcodes for Motor Control
Humans accumulate as many as 75 trillion distinct experiences over a lifetime, any of which can be immediately recognized as familiar. I propose that such high-capacity, single-shot learning is possible through the use of neural hashcodes. I extend previous work by showing rapid decoding algorithms for stored hashcodes. I also propose a new locality-sensitive hashcode method, in which inputs that occur closely spaced in time are mapped to similar binary hashcodes. I demonstrate efficient image coding, as well as the use of hashcodes in reinforcement learning algorithms. I show how the hashcode model can be mapped onto the known anatomy of the basal ganglia and thalamus, and I suggest how failure of this mechanism could explain some features of dystonia.
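As a generic point of reference for the hashing idea, the sketch below implements a textbook random-projection (SimHash-style) binary hashcode in which nearby inputs receive codes with a small Hamming distance. The dimensions and thresholding scheme are assumptions for illustration; this is a baseline, not the speaker's temporal or neurally implemented hashing method.

```python
import numpy as np

# Generic random-projection (SimHash) hashcode: one bit per random hyperplane.
# Illustrative baseline only; not the temporal hashcode scheme of the talk.
rng = np.random.default_rng(0)
d, n_bits = 128, 64
planes = rng.normal(size=(n_bits, d))             # random hyperplanes

def hashcode(x):
    """Binary code: which side of each hyperplane the input falls on."""
    return (planes @ x > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

x = rng.normal(size=d)
x_near = x + 0.05 * rng.normal(size=d)            # slightly perturbed input
x_far = rng.normal(size=d)                        # unrelated input

print("near input:", hamming(hashcode(x), hashcode(x_near)), "bits differ")
print("far input :", hamming(hashcode(x), hashcode(x_far)), "bits differ")
```

Nearby inputs flip only the few bits whose hyperplanes they straddle, while unrelated inputs differ in roughly half the bits, which is the locality-sensitivity property the abstract relies on.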
金井良太 (株式会社アラヤ)
Ryota Kanai (ARAYA inc.)
[Website]
脳の機能統合プラットフォームとしての意識:人工意識と脳同士の直接コミュニケーションへの示唆
Consciousness as a platform for integrating brain functions: implications for artificial consciousness and brain-to-brain communications
In this talk, I will consider potential links between consciousness and intelligence and present our viewpoint on how to translate current theories of consciousness into deep learning architectures. We argue that such an effort facilitates the interpretation of high-level concepts in theoretical consciousness research in terms of more concretely implementable, computational concepts (Juliani et al., 2022). Based on our analysis of the potential functions of consciousness (Kanai et al., 2019; Langdon et al., 2022), we argue that consciousness evolved as a platform for general-purpose intelligence: the ability to combine extant functions in a flexible manner. As an example of translating a theory into an artificial intelligence architecture, we present a re-interpretation of global workspace theory in which the global workspace is regarded as a shared latent space connecting multimodal specialized modules (VanRullen & Kanai, 2021). With this reformulation, we argue that the shared latent space can be used to implement general-purpose intelligence. We will further discuss the implications of the shared latent space for the development of brain-to-brain communication technologies.
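To make the shared-latent-space reading of the global workspace concrete, the toy sketch below couples two modality-specific encoder/decoder pairs through a single latent space, so that a code obtained from either modality can be decoded into the other. The dimensions, loss terms, and synthetic data are assumptions chosen for brevity; this is not the architecture of VanRullen & Kanai (2021), only a minimal illustration of the idea.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim_a, dim_b, dim_z = 32, 48, 8                   # two "modalities" and a shared latent space

enc_a, dec_a = nn.Linear(dim_a, dim_z), nn.Linear(dim_z, dim_a)
enc_b, dec_b = nn.Linear(dim_b, dim_z), nn.Linear(dim_z, dim_b)
opt = torch.optim.Adam([*enc_a.parameters(), *dec_a.parameters(),
                        *enc_b.parameters(), *dec_b.parameters()], lr=1e-2)

# Paired toy data: both modalities are driven by the same latent cause.
cause = torch.randn(256, dim_z)
xa = cause @ torch.randn(dim_z, dim_a)
xb = cause @ torch.randn(dim_z, dim_b)

mse = nn.functional.mse_loss
for _ in range(500):
    za, zb = enc_a(xa), enc_b(xb)
    loss = (mse(dec_a(za), xa) + mse(dec_b(zb), xb)    # within-modality reconstruction
            + mse(dec_b(za), xb) + mse(dec_a(zb), xa)  # cross-modal translation
            + mse(za, zb))                             # align the shared latent space
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", float(loss))
```

The cross-modal terms are what make the latent space function as a workspace: information encoded from one module becomes usable by the other.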
References:
1. Juliani, A., Arulkumaran, K., Sasai, S., & Kanai, R. (2022). On the link between conscious function and general intelligence in humans and machines. Transactions on Machine Learning Research (TMLR), https://openreview.net/forum?id=LTyqvLEv5b.
2. Juliani, A., Kanai, R. & Sasai, S. (2022). The perceiver architecture is a functional global workspace. Proceedings of the Annual Meeting of the Cognitive Science Society, 44, 955-961. https://escholarship.org/uc/item/2g55b9xx
3. Langdon, A., Botvinick, M., Nakahara, H., Tanaka, K., Matsumoto, M., & Kanai, R. (2022). Meta-learning, social cognition and consciousness in brains and machines. Neural Networks, 145, 80-89. https://doi.org/10.1016/j.neunet.2021.10.004
4. VanRullen, R., & Kanai, R. (2021). Deep learning and the global workspace theory. Trends in Neurosciences, 44(9), 692-704. https://doi.org/10.1016/j.tins.2021.04.005
5. Chang, A. Y. C., Biehl, M., Yu, Y., & Kanai, R. (2020). Information closure theory of consciousness. Frontiers in Psychology, 11, 1504. https://doi.org/10.3389/fpsyg.2020.01504
6. Kanai, R., Chang, A., Yu, Y., Magrans de Abril, I., Biehl, M., & Guttenberg, N. (2019). Information generation as a functional basis of consciousness. Neuroscience of Consciousness, 1, niz016, https://doi.org/10.1093/nc/niz016.
Read Montague (Fralin Biomedical Research Institute, Department of Physics, Virginia Tech)
[Website]
Neuromodulation by monoamine signaling in the conscious human brain: a machine learning approach
The monoamines dopamine, serotonin, and noradrenaline constitute a collection of neuromodulatory systems thought to be involved in the control of mood, learning, reward processing, attention, and a host of other important cognitive functions. We are currently living through a kind of renaissance of methods for tracking these systems in model organisms during behavior, but similar tools for use in conscious humans have been lacking. In this talk, I will present the outcome of our decade-long development of machine learning approaches to monoamine detection in human subjects. These methods extend standard voltammetric techniques used in model organisms such as rodents, and I will demonstrate two different settings in which they have been successfully deployed in human subjects: (1) during deep brain stimulation (DBS) electrode implantation, and (2) during depth-electrode recordings used to monitor human subjects in epilepsy monitoring units. The latter approach piggybacks our methodology on electrodes implanted for the clinical purpose of seizure monitoring, turning these depth electrodes into sources of sub-second neuromodulator dynamics. I will discuss several cognitive paradigms used during such recordings and paint a picture of how modifications of these methods could open the door to widespread, routine use of sub-second neurochemistry monitoring in humans.
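As a schematic of the machine-learning step in such monoamine estimation, the sketch below calibrates a generic penalized regression from voltammetric sweeps to concentrations using synthetic data. The template shape, noise level, and ElasticNet settings are all assumptions for illustration; the electrodes, features, and models actually used in the human recordings are those described in the speaker's publications.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_sweeps, n_samples = 500, 1000                       # calibration sweeps x samples per sweep
template = np.sin(np.linspace(0, np.pi, n_samples))   # stand-in for an oxidation-current peak
conc = rng.uniform(0, 1, n_sweeps)                    # "known" calibration concentrations
sweeps = conc[:, None] * template + rng.normal(0, 0.05, (n_sweeps, n_samples))

# Penalized regression from sweep waveform to concentration.
model = ElasticNet(alpha=1e-3, l1_ratio=0.5)
model.fit(sweeps, conc)

# Estimate concentration for a new, unseen sweep.
test_sweep = 0.7 * template + rng.normal(0, 0.05, n_samples)
print("estimated concentration:", model.predict(test_sweep[None, :])[0])
```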
Topic Session
鈴木啓介(北海道大学)
Keisuke Suzuki (Hokkaido University)
[Website]
幻覚の現象学的性質の計算論的メカニズム
Computational mechanisms for the perceptual phenomenology of visual hallucinations
Hallucinations are perceptions that have no physical counterpart. They are typically reported in clinical conditions (e.g. Lewy body dementia, psychosis), but they can also occur in healthy people under certain circumstances (e.g. hallucinogenic drugs, hypnosis, sensory deprivation). Although all hallucinations share this core characteristic, there are substantial phenomenological differences between hallucinations with different aetiologies. We are exploring the computational mechanisms underlying the phenomenological characteristics of visual hallucinations, both by simulating their perceptual properties with deep neural networks and by interviewing patients who experience them. In particular, we found higher degrees of perceptual veridicality in neurologically induced hallucinations (i.e. things look more real) than in drug-induced hallucinations, suggesting that perceptual reality monitoring plays a key role in differentiating them. Recently, the generative adversarial network (GAN), a machine learning architecture, has been proposed as a potential computational mechanism for this ability to discriminate what is real (i.e. external stimuli) from what is fake (i.e. imaginary). In this lecture, I will discuss the possibility of implementing a GAN-like network in the brain and how it could explain the different perceptual phenomenology of, for example, hallucinations, imagery, and depersonalization.
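To illustrate the reality-monitoring intuition behind the GAN proposal, the toy sketch below trains only a discriminator to separate stand-in "external" samples from "internally generated" ones and then reads its output as a graded reality score. The data distributions and network sizes are assumptions; this is not the virtual-reality or deep-network methodology of the cited studies.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16
external = torch.randn(512, dim)                  # stand-in externally caused signals
internal = 0.6 * torch.randn(512, dim) + 0.5      # stand-in internally generated signals

disc = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(300):
    logits = disc(torch.cat([external, internal]))
    labels = torch.cat([torch.ones(512, 1), torch.zeros(512, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# Read the discriminator output as a "reality" score for a new sample.
# An internally generated sample that happens to resemble external input would
# fool the monitor, a hallucination-like failure mode.
probe = torch.randn(1, dim)
print("P(real) =", torch.sigmoid(disc(probe)).item())
```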
References:
1. Gershman SJ (2019) The Generative Adversarial Brain. Front. Artif. Intell. 2:18. doi: 10.3389/frai.2019.00018
2. Seth AK, Suzuki K and Critchley HD (2012) An interoceptive predictive coding model of conscious presence. Front. Psychology 2:395. doi: 10.3389/fpsyg.2011.00395
3. Suzuki, K., Roseboom, W., Schwartzman, D.J. et al. A Deep-Dream Virtual Reality Platform for Studying Altered Perceptual Phenomenology. Sci Rep 7, 15982 (2017). https://doi.org/10.1038/s41598-017-16316-2
4. Suzuki, K., Mariola, A., Schwartzman, D. J., & Seth, A. (2022, May 19). Using extended reality to study the experience of presence. https://doi.org/10.31234/osf.io/uysjw
5. Lau, H. (2022) Chap. 6 ‘A Centrist Manifesto’ and Chap. 7 ‘Are We Alone?’, In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience, Oxford University Press, USA.
6. Dijkstra, N., Kok, P., & Fleming, S. M. (2022). Perceptual reality monitoring: Neural mechanisms dissociating imagination from reality. Neuroscience & Biobehavioral Reviews, 135.
萩原賢太 (アレン研究所 脳動態部門)
Kenta Hagihara (Allen Institute for Neural Dynamics)
[Website]
ニューロモジュレータの強化学習における役割の解明にむけて
The amygdala, dopamine, and multiplexed neuromodulators in reinforcement learning
The amygdala and prefrontal cortical areas have been strongly implicated in associative learning for decades. However, how these distributed circuits work in concert has remained unclear. Our recent work identified intercalated amygdala neurons (ITCs) as forming a unique mutually inhibitory circuit motif that dynamically orchestrates these distributed circuits (Hagihara et al., 2021). Furthermore, we found that ITCs are under strong dopaminergic control. I will discuss our preliminary results suggesting that dopaminergic signaling in ITCs is indispensable for fear extinction and beyond.
In the latter part of my talk, I would like to share and discuss our concerted effort towards understanding multiple neuromodulators in decision making, reinforcement learning, and working memory at the newly established Allen Institute for Neural Dynamics.
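For readers unfamiliar with the mutual-inhibition motif mentioned in the first paragraph, the toy rate model below shows its basic behaviour: two clusters suppress each other, the more strongly driven cluster wins, and a gain term standing in for dopaminergic modulation can flip the winner. All parameters are hypothetical; this is not a model of the ITC circuit itself.

```python
import numpy as np

def simulate(drive_a, drive_b, gain_a=1.0, w=8.0, tau=0.02, dt=1e-3, T=2.0):
    """Two mutually inhibitory rate units; returns their steady-state rates."""
    f = lambda x: np.clip(x, 0.0, None)            # rectified-linear rate function
    ra = rb = 0.0
    for _ in range(int(T / dt)):
        ra += dt / tau * (-ra + f(gain_a * drive_a - w * rb))
        rb += dt / tau * (-rb + f(drive_b - w * ra))
    return ra, rb

print(simulate(1.0, 1.2))                # cluster B wins the competition
print(simulate(1.0, 1.2, gain_a=1.5))    # boosting A's gain flips the winner
```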
森田 賢治(東京大学大学院教育学研究科身体教育学コース)
Kenji Morita (Physical and Health Education, Graduate School of Education, The University of Tokyo)
[Website]
異なる状態・行動表現を用いた正および負誤差からの学習:機能と不具合
Opponent learning with different representations: function and dysfunction
Animals can exhibit both goal-directed and habitual behaviors, which are analogous to model-based (MB) and model-free (MF) reinforcement learning (RL), respectively. MF-RL is suggested to be implemented in the brain such that dopamine (DA) represents the reward prediction error (RPE) and DA-dependent cortico-striatal plasticity implements the RPE-based value update. How MB-RL is implemented remains more elusive, but recent work suggests that certain apparently MB-like behaviors can be realized by representing states/actions through the future occupancies of their successor states/actions (the successor representation, SR), coupled with RPE-based value updates. MB-like control can thus be implemented similarly to MF-RL, with the difference lying in whether states/actions are represented by the SR or by an individual (punctate) representation (IR), or equivalents thereof.
The basal ganglia (BG) contain two major pathways, the D1 and D2 pathways, which have been suggested to be crucial for learning from positive and negative feedback, respectively. Since SR and IR may be served by different cortical populations that project unevenly to the two BG pathways, SR-based (MB-like) and IR-based (MF) controllers may learn differently from positive and negative RPEs in the brain. We [1] explored this possibility through simulations of reward navigation tasks in which the reward location changed dynamically. The results suggest that a combination of SR-based learning mainly from positive RPEs and IR-based learning mainly from negative RPEs (named appetitive SR & aversive IR) is advantageous in certain dynamic environments. Such a combination actually appears consistent with several anatomical and physiological findings, including activations indicative of SR in limbic/visual cortices and preferential connections from limbic/visual cortices to the D1 pathway.
The appetitive SR & aversive IR agent, while advantageous in certain dynamic environments, deviates from normative RL and so may perform poorly in other environments. Indeed, using a recently proposed environmental model that describes the potential development of an obsession-compulsion (OC) cycle (Sakai et al., 2022, Cell Reports 40:111275), we [2] found that the appetitive SR & aversive IR agent could develop a maladaptive OC cycle, similarly to the agent with long/short eligibility traces for positive/negative RPEs in Sakai et al. This was as we expected, because the SR is similar to an eligibility trace (its “forward view”); together with the advantage of the appetitive SR & aversive IR combination mentioned above, this potentially explains why even healthy people tended to exhibit shorter eligibility traces for negative RPEs in Sakai et al. We further showed that fitting the behavior of the appetitive SR & aversive IR agent in the two-stage decision task resulted in smaller weights for MB control than for an SR-only agent, thereby potentially integrating the work by Sakai et al. with the long-standing suggestion that obsessive-compulsive disorder is associated with impaired MB control. We are also exploring whether dimension reduction in representation is a key factor (as in our model of addiction [3]).
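A minimal sketch of the general “appetitive SR & aversive IR” idea is given below: two linear value approximators, one using SR features and one using punctate (IR) features, share a single RPE but use different learning rates for positive versus negative RPEs. The chain task, random-walk policy, and learning rates are assumptions for illustration; this is not the simulation code of [1] or [2].

```python
import numpy as np

n_states, gamma = 10, 0.9
rng = np.random.default_rng(0)

# Random-walk transitions on a 1-D chain with reflecting ends; reward at the right end.
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, max(s - 1, 0)] += 0.5
    T[s, min(s + 1, n_states - 1)] += 0.5
reward = np.zeros(n_states); reward[-1] = 1.0

SR = np.linalg.inv(np.eye(n_states) - gamma * T)   # successor representation (under this policy)
IR = np.eye(n_states)                              # individual / punctate representation

w_sr = np.zeros(n_states)                          # SR-based (appetitive) system
w_ir = np.zeros(n_states)                          # IR-based (aversive) system
value = lambda s: SR[s] @ w_sr + IR[s] @ w_ir      # the two systems jointly determine value

for _ in range(300):                               # episodes
    s = 0
    for _ in range(50):                            # steps per episode
        s_next = rng.choice(n_states, p=T[s])
        rpe = reward[s_next] + gamma * value(s_next) - value(s)
        if rpe > 0:                                # SR system learns mainly from positive RPEs
            w_sr += 0.05 * rpe * SR[s]
            w_ir += 0.01 * rpe * IR[s]
        else:                                      # IR system learns mainly from negative RPEs
            w_sr += 0.01 * rpe * SR[s]
            w_ir += 0.05 * rpe * IR[s]
        s = s_next

print(np.round([value(s) for s in range(n_states)], 2))
```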
References:
1. Morita, Shimomura, & Kawaguchi. Opponent learning with different representations in the cortico-basal ganglia circuits. bioRxiv 2021.10.29.466375
2. Sato†, Shimomura†, & Morita (†: equal contribution). Opponent learning with different representations in the cortico-basal ganglia pathways can develop obsession-compulsion cycle. bioRxiv 2022.10.25.513649
3. Shimomura†, Kato†, & Morita (†: equal contribution). Rigid reduced successor representation as a potential mechanism for addiction. Eur J Neurosci 53(11):3768-3790 (2021).
栗川知己 (関西医科大学物理教室)
Tomoki Kurikawa (Department of Physics, Kansai Medical University)
[Website]
認知機能を支える神経活動の集団運動ダイナミクス、および、その役割
Collective dynamics of neural activities underlying cognitive functions and their computational role
Recent in vivo recordings in animals provide the activity of large numbers of neurons, i.e., high-dimensional neural dynamics, during the performance of cognitive functions such as working memory and decision making. These high-dimensional dynamics often show collective motion constrained to a lower-dimensional space rather than random motion. One of the biggest questions in neuroscience is how such collective dynamics underlie cognitive functions. In this presentation, we will give a general review of the collective-dynamics approach based on training recurrent neural networks (RNNs).
We take two types of RNN approach. (1) Training RNNs on neural activity recorded electrophysiologically provides insights into the relationship between neural dynamics and cognitive functions: dynamical-systems characteristics of the neural dynamics, such as fixed points and separatrices, connect the dynamics to the functions. (2) Simple RNNs with simple learning rules suggest general principles behind the observed properties of neural dynamics in cognitive functions; here we focus on chaos and sequential patterns in the dynamics. Based on these results, we discuss the computational role of collective dynamics in cognitive functions.
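As an example of the dynamical-systems characterisation mentioned in point (1), the sketch below numerically locates fixed points of a small random rate RNN and checks their local stability via the Jacobian. The network is hypothetical rather than one of the trained or data-constrained models in the references.

```python
import numpy as np
from scipy.optimize import fsolve

# Rate RNN  dh/dt = -h + tanh(W h); fixed points satisfy tanh(W h) = h.
rng = np.random.default_rng(1)
N = 50
W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))      # random recurrent weights

def residual(h):
    return np.tanh(W @ h) - h

fixed_points = []
for _ in range(20):                                # many random initial guesses
    h0 = rng.normal(0.0, 1.0, N)
    h_star, _, ok, _ = fsolve(residual, h0, full_output=True)
    if ok == 1 and not any(np.allclose(h_star, f, atol=1e-3) for f in fixed_points):
        fixed_points.append(h_star)

# Local stability: eigenvalues of the Jacobian  diag(1 - tanh^2(W h*)) W - I.
for h_star in fixed_points:
    J = (1.0 - np.tanh(W @ h_star) ** 2)[:, None] * W - np.eye(N)
    print("max Re(eig) =", np.max(np.linalg.eigvals(J).real))
```

Fixed points with all eigenvalues in the left half-plane are attractors (candidate memory states); those with a few unstable directions often sit on the separatrices that organise decision dynamics.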
References:
1. T Kurikawa and K Kaneko, Embedding responses in spontaneous neural activity shaped through sequential learning, PLoS computational biology 9 (3), e1002943, 2013
2. T Kurikawa, et al., Neuronal stability in medial frontal cortex sets individual variability in decision-making, Nature Neuroscience 21 (12), 1764-1773, 2018
3. T Kurikawa, O Barak, K Kaneko, Repeated sequential learning increases memory capacity via effective decorrelation in a recurrent neural network, Physical Review Research 2 (2), 023307, 2020
4. T Kurikawa and K Kaneko, Multiple-timescale neural networks: generation of history-dependent sequences and inference through autonomous bifurcations. Frontiers in Computational Neuroscience 15, 2021
磯村拓哉 (理化学研究所脳神経科学研究センター)
Takuya Isomura (RIKEN Center for Brain Science)
[Website]
すべての神経回路はベイズマシンである
Every neural network is a Bayesian machine
Imagine neural networks that minimise arbitrary cost functions. What functions or characteristics would their dynamics exhibit? One might think that they have too much freedom for anything general to be said. Yet according to the complete class theorem, any system that minimises a cost function can be viewed as performing Bayesian inference. In light of this notion, we show that any neural network whose activity and plasticity minimise a common cost function can be cast as performing (variational) Bayesian inference [1-3]. We establish a formal equivalence between canonical neural networks, which have a certain biological plausibility, and a particular class of partially observable Markov decision processes (POMDPs) by establishing one-to-one correspondences between the components of the neural network cost function and those of variational free energy.
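For reference, the variational free energy referred to here takes its standard form, with $q(s)$ the approximate posterior over hidden states and $p(o, s)$ the generative model; the specific one-to-one mapping of its components onto neural-network quantities is the contribution of [1, 2] and is not reproduced here:

$$
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s)\right]}_{\text{complexity}}
  - \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}}.
$$

Minimising $F$ with respect to $q(s)$ therefore balances staying close to the prior against accurately explaining the observations $o$.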
Crucially, this equivalence enables the identification of a natural map from neuronal activity data to a specific generative model (a hypothesis about the external milieu) under which a biological system operates. As an example, we fitted stimulus-evoked responses of in vitro networks comprising cortical cells from rat embryos with a canonical neural network and applied this reverse-engineering technique to identify an apt generative model for the given empirical data. We then show the predictive validity of the free-energy principle by demonstrating that variational free energy minimisation under this particular POMDP quantitatively predicts the self-organisation of the neuronal networks, in terms of both their responses and their plasticity [4]. This provides a formal avenue for the experimental application and validation of the free-energy principle.
The virtue of this reverse engineering is that, given initial empirical data, it systematically identifies the hypothesis that the biological system employs to infer the external milieu and offers quantitative predictions about the system's subsequent self-organisation.
References:
1. Isomura T & Friston K J. Reverse-engineering neural networks to characterize their cost functions. Neural Comput. 32, 2085-2121 (2020).
2. Isomura T, Shimazaki H & Friston K J. Canonical neural networks perform active inference. Commun Biol. 5, 55 (2022).
3. Isomura T. Active inference leads to Bayesian neurophysiology. Neurosci Res. 175, 38-45 (2022).
4. Isomura, T., Kotani, K., Jimbo, Y. & Friston, K. J. Experimental validation of the free-energy principle with in vitro neural networks. bioRxiv 10.1101/2022.10.03.510742 (2022).
高木敦士 (NTTコミュニケーション科学基礎研究所)
Atsushi Takagi (NTT Communication Science Laboratories)
[Website]
信号ノイズではなく、運動指令タイミングのバラつきが運動精度を決定する
Command timing variability, not signal-dependent noise, determines motor coordination
Biological movements are imperfect in the sense that no two movements are the same. This limitation in reproducing movements is normally attributed to signal-dependent noise (SDN) in the muscles [1], but an increasing amount of evidence refutes this view [2,3]. In my talk, I will propose that mistimed muscle activity plays a larger role in determining motor performance than SDN does [4]. This command timing variability (CTV) provides a comprehensive picture of motor planning that explains the origins of the speed-accuracy tradeoff and why biological movements are smooth. I will show how our theory explains why the dominant hand is superior at rapid actions, and use computer simulations to argue how neurodegeneration, which leads to poor timing regulation, causes the loss of motor coordination characteristic of patients with Huntington’s and Parkinson’s disease, and those with cerebellar ataxia. In essence, our movement precision is constrained by timing errors in the initiation of muscle activity.
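The contrast between the two noise sources can be illustrated with a deliberately simplified point-mass simulation, shown below: per-sample noise whose standard deviation scales with the command (SDN) versus a small Gaussian jitter on the time of the switch from acceleration to deceleration (CTV). All parameters are arbitrary assumptions and the model is far cruder than that of [4]; it only shows qualitatively that timing jitter can dominate endpoint scatter.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, m = 0.001, 0.6, 1.0                         # time step (s), movement duration (s), mass (kg)
t = np.arange(0, T, dt)
u = np.where(t < 0.3, 5.0, -5.0)                   # bang-bang acceleration command (m/s^2)

def endpoint(cmd):
    v = np.cumsum(cmd / m) * dt                    # integrate acceleration -> velocity
    return np.sum(v) * dt                          # integrate velocity -> final position

def sdn_trial(cv=0.1):
    noise = rng.normal(0.0, cv * np.abs(u))        # noise scales with command magnitude
    return endpoint(u + noise)

def ctv_trial(sd_ms=10.0):
    shift = rng.normal(0.0, sd_ms / 1000.0)        # mistimed switch to deceleration
    return endpoint(np.where(t < 0.3 + shift, 5.0, -5.0))

sdn = [sdn_trial() for _ in range(2000)]
ctv = [ctv_trial() for _ in range(2000)]
print("endpoint SD with signal-dependent noise:", np.std(sdn))
print("endpoint SD with command timing jitter :", np.std(ctv))
```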
References:
1. Harris, C. M. & Wolpert, D. M. Signal-dependent noise determines motor planning. Nature 394, 780–784 (1998).
2. Churchland, M. M., Afshar, A., & Shenoy, K. V. A central source of movement variability. Neuron, 52(6), 1085-1096 (2006).
3. Wu, H. G., Miyamoto, Y. R., Castro, L. N. G., Ölveczky, B. P. & Smith, M. A. Temporal structure of motor variability is dynamically regulated and predicts motor learning ability. Nat. Neurosci. 17, 312–321 (2014).
4. Takagi, A., Ito, S., & Gomi, H. Command timing variability, not signal-dependent noise, determines motor coordination. Advances in Motor Learning and Motor Control (MLMC), http://www.motor-conference.org/openconf.php (2022).