Network Models for Cognitive Neuroscience
Recurrent Neural Networks (RNNs) trained with machine learning techniques on cognitive tasks have become a widely accepted tool for neuroscientists.
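As an illustrative sketch of the kind of model involved, the following implements a minimal Elman-style RNN forward pass on a toy "delayed response" input. The architecture, sizes, and task encoding here are generic assumptions for illustration, not taken from any specific study.

```python
import math
import random

def rnn_forward(inputs, W_in, W_rec, n_hidden):
    """Run a tanh RNN over a scalar input sequence; return the hidden-state trajectory."""
    h = [0.0] * n_hidden
    trajectory = []
    for x in inputs:  # one scalar input per timestep
        h = [
            math.tanh(W_in[i] * x + sum(W_rec[i][j] * h[j] for j in range(n_hidden)))
            for i in range(n_hidden)
        ]
        trajectory.append(h)
    return trajectory

random.seed(0)
n_hidden = 4
W_in = [random.gauss(0, 1) for _ in range(n_hidden)]
W_rec = [[random.gauss(0, 1 / n_hidden) for _ in range(n_hidden)]
         for _ in range(n_hidden)]

# Toy stimulus: a brief pulse followed by a delay period; in studies of
# working memory, the interest is in how the hidden state retains the pulse.
stimulus = [1.0, 0.0, 0.0, 0.0, 0.0]
traj = rnn_forward(stimulus, W_in, W_rec, n_hidden)
print(len(traj), len(traj[0]))  # one hidden state per timestep
```

In practice such networks are trained with gradient-based machine learning methods, and the learned hidden-state dynamics are then compared against recorded neural activity.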
This talk presents a variational framework to understand the properties of functions learned by neural networks fit to data. The framework is based on total variation semi-norms defined in the Radon domain, which is naturally suited to the analysis of neural activation functions (ridge functions).
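To fix ideas, a single neuron computes a ridge function, and the Radon transform is the natural tool for analyzing such functions because it integrates over the same hyperplanes on which a ridge function is constant. The following is a standard sketch of these two objects, not the talk's specific semi-norm:

\[
f(\mathbf{x}) = \sigma(\mathbf{w}^{\top}\mathbf{x} - b),
\]

a ridge function, which varies only along the direction \(\mathbf{w}\) and is constant on every hyperplane \(\{\mathbf{x} : \mathbf{w}^{\top}\mathbf{x} = t\}\). The Radon transform evaluates a function on exactly these hyperplanes:

\[
(\mathcal{R}f)(\mathbf{w}, t) = \int_{\mathbb{R}^d} f(\mathbf{x})\,\delta(t - \mathbf{w}^{\top}\mathbf{x})\,d\mathbf{x}.
\]

Total variation semi-norms defined in this domain therefore measure the complexity of a network as a superposition of ridge functions.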
The human brain is a time machine: we are constantly remembering our past and projecting ourselves into the future. Capturing the brain’s response as these moments unfold could yield valuable insights into both how the brain works and how to better design human-centered AI systems.