
Brain & Mind Computational Seminar
28.2.2022 @ 10:00 - 11:30
We warmly welcome you to our next Brain & Mind Computational Seminar on 28.02.2022. This time, Jean-Rémi King (CNRS) and Stéphane Deny (Aalto University) will present and discuss their research on linking deep learning to human cognition!
Zoom link: https://aalto.zoom.us/j/67072679004
Password: brainmind
Talks
Jean-Rémi King
Language in brains and algorithms
Deep learning has recently made remarkable progress in natural language processing. Yet, the resulting algorithms fall short of the language abilities of the human brain. To bridge this gap, here we explore the similarities and differences between these two systems using large-scale datasets of magneto/electro-encephalography (M/EEG, n=1,946 subjects), functional Magnetic Resonance Imaging (fMRI, n=589), and intracranial recordings (n=176 patients, 20K electrodes). After investigating where and when deep language algorithms map onto the brain, we show that enhancing these algorithms with long-range forecasts makes them more similar to the brain. Our results further reveal that, unlike current deep language models, the human brain is tuned to generate a hierarchy of long-range predictions, whereby the fronto-parietal cortices forecast more abstract and more distant representations than the temporal cortices. Overall, our studies show how the interface between AI and neuroscience clarifies the computational bases of natural language processing.
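A common way to quantify how well a deep language model "maps onto the brain" (not necessarily the exact pipeline used in this work) is a linear encoding model: fit a regularized regression from the model's activations to the recorded brain signal, then score predictions on held-out data. The sketch below uses synthetic data in place of real activations and M/EEG recordings; all variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: language-model activations for 500 words
# (50 dimensions) and the response of one brain sensor to those words.
n_words, n_dims = 500, 50
X = rng.standard_normal((n_words, n_dims))
true_w = rng.standard_normal(n_dims)
y = X @ true_w + 0.5 * rng.standard_normal(n_words)  # signal + noise

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Linear encoding model: predict the brain response from activations.
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)

# "Brain score": correlation between predicted and actual responses
# on held-out words.
brain_score = np.corrcoef(pred, y_test)[0, 1]
print(f"brain score: {brain_score:.2f}")
```

In practice this is repeated per sensor (or voxel) and per time sample, which is what yields the "where and when" maps the abstract refers to.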
Stéphane Deny
Bio-inspired Approaches to Disentanglement of Factors of Variations in Images
A core challenge in Machine Learning is to learn to disentangle natural factors of variation in data (e.g. object shape vs. pose). A popular approach to disentanglement is to learn to map each of these factors to a distinct subspace of a model's latent representation. However, this approach has shown limited empirical success to date. Here, we show that, for a broad family of transformations acting on images (encompassing simple affine transformations such as rotations and translations), this approach to disentanglement introduces topological defects (i.e. discontinuities in the encoder). Inspired by 'mental rotation' in the brain, we study an alternative, more flexible approach to disentanglement which relies on recurrent latent operators, potentially acting on the entire latent space. We theoretically and empirically demonstrate the effectiveness of this approach to disentangle affine transformations.
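The topological defect the abstract mentions can be seen in a toy case. Encoding a 2D rotation by its raw angle in a latent subspace tears the circle of rotations apart at the 0/2π boundary, whereas representing the rotation by its action on the latent space (a rotation operator) stays continuous. This is a minimal sketch of that contrast, not the authors' method; the two encoding functions are illustrative.

```python
import numpy as np

def angle_code(theta):
    # "Subspace" encoding: store the rotation angle as a raw coordinate.
    return np.array([theta % (2 * np.pi)])

def operator_code(theta):
    # Operator-style encoding: represent the rotation by its action on a
    # fixed latent vector, i.e. a point on the unit circle.
    return np.array([np.cos(theta), np.sin(theta)])

eps = 1e-3
# Two nearly identical rotations straddling the 0 / 2π boundary.
a, b = -eps, eps

# The raw-angle code is discontinuous: a tiny rotation step produces a
# jump of almost 2π in code space.
jump_angle = np.linalg.norm(angle_code(a) - angle_code(b))

# The operator-style code stays continuous: nearby rotations map to
# nearby codes.
jump_operator = np.linalg.norm(operator_code(a) - operator_code(b))

print(jump_angle, jump_operator)  # ~6.28 vs ~0.002
```

No continuous re-parametrization can remove the first jump, because the circle of rotations cannot be embedded continuously and injectively into a line, which is the sense in which the subspace approach forces a discontinuity in the encoder.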