11–13 Dec. 2024, Lyon (France)


Neural mechanisms of planning and memory in dynamic foraging agents
Ryan Badman 1, Riley Simmons-Edler 1, William Qian 2, Joshua Lunger 2, Kanaka Rajan 1
1: Harvard Medical School [Boston]
2: Harvard University

Animals are hypothesized to build spatio-temporal maps of their environment during foraging and to maintain episodic memories in order to plan which food locations to visit and which predator locations to avoid. However, the difficulty of long-term neural imaging of animals in their natural environments has prevented neuroscientists from understanding how such circuit computations manifest in real-world foraging contexts. Here, we developed a novel computational framework for dynamic foraging based on artificial agents containing recurrent neural networks with connectome-derived constraints, trained by reinforcement learning and curriculum learning. These artificial agents, or "foragers," learn to survive in large, partially observable virtual environments while generating neural activity patterns at every time step. The agents learn to locate and consume food and drink from tens of sparse patches to meet hunger and thirst needs, and develop strategies to avoid predators both during active exploration and during their sleep phase. We developed an automated, interpretability-focused analysis pipeline that first hierarchically labels simpler behavioral states, such as patch discovery and revisitation, and then uses Bayesian clustering to identify more complex latent movement strategies of shorter- and longer-range scope. We found that these states correspond to different combinations of physiological needs, long-range versus short-range search, and predator-evasion strategies. Through decoding analyses of the foragers' recurrent neural networks, we discovered that even relatively simple reinforcement learning agents can track prior patch locations, directions, and features in our large foraging environments. The agents also engaged in prospective planning, decodable from neural activity, when deciding which patches to revisit after depleting a current patch.
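The abstract does not include code, but the core setup it describes can be illustrated with a minimal, hypothetical sketch: a recurrent policy acting in a partially observable foraging grid while its hidden activity is recorded for later analysis. All names, sizes, and dynamics below are illustrative assumptions, not the authors' implementation; the paper's agents use much larger environments, richer recurrent networks with connectome-derived constraints, and reinforcement-learning training, none of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class ForagingEnv:
    """Toy partially observable foraging grid (hypothetical stand-in for the
    paper's large virtual environments). The agent sees only a 3x3 egocentric
    window, so sparse food patches must be found and remembered."""
    def __init__(self, size=15, n_patches=5):
        self.size = size
        self.patches = set(map(tuple, rng.integers(0, size, (n_patches, 2))))
        self.pos = np.array([size // 2, size // 2])

    def observe(self):
        # 3x3 window around the agent: 1 where a food patch is visible
        obs = np.zeros((3, 3))
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if tuple((self.pos + [dx, dy]) % self.size) in self.patches:
                    obs[dx + 1, dy + 1] = 1.0
        return obs.ravel()

    def step(self, action):
        # Four cardinal moves on a toroidal grid; reward for landing on food
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        self.pos = (self.pos + moves[action]) % self.size
        reward = 1.0 if tuple(self.pos) in self.patches else 0.0
        return self.observe(), reward

class RNNPolicy:
    """Minimal untrained vanilla-RNN policy. Serves only to show where
    recurrent hidden states come from; no learning is performed here."""
    def __init__(self, n_in=9, n_hid=32, n_act=4):
        s = 1.0 / np.sqrt(n_hid)
        self.Wx = rng.normal(0, s, (n_hid, n_in))
        self.Wh = rng.normal(0, s, (n_hid, n_hid))
        self.Wo = rng.normal(0, s, (n_act, n_hid))
        self.h = np.zeros(n_hid)

    def act(self, obs):
        # One recurrent step, then sample an action from a softmax over logits
        self.h = np.tanh(self.Wx @ obs + self.Wh @ self.h)
        logits = self.Wo @ self.h
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(len(p), p=p), self.h.copy()

# Roll out one episode, logging hidden activity at every time step,
# mirroring the paper's point that agents "generate neural activity
# patterns at every time step" available for downstream analysis.
env, policy = ForagingEnv(), RNNPolicy()
hidden_states, rewards = [], []
for t in range(200):
    action, h = policy.act(env.observe())
    _, r = env.step(action)
    hidden_states.append(h)
    rewards.append(r)
hidden_states = np.array(hidden_states)  # shape (T, n_hid)
```

The `hidden_states` array is the kind of time-by-units activity matrix that the abstract's clustering and decoding analyses would operate on.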
Furthermore, different latent behavioral states mapped onto different neural manifolds between which the agents alternated. Our work shows that during dynamic foraging, neural-network-based artificial agents rapidly develop emergent planning and memory abilities. We also provide a flexible computational framework for investigating analogous brain mechanisms of planning and memory from neural and behavioral data across species in naturalistic settings.
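The decoding analyses mentioned above (reading out tracked variables such as prior patch locations from recurrent activity) can be sketched in miniature with a cross-validated linear decoder. The data here are synthetic and purely illustrative: a latent variable is embedded in fake hidden states by construction, so a decoder should recover it; this says nothing about the paper's actual decoders or results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: T time steps of "hidden activity" H in which a
# latent variable z (e.g. distance to the last-visited patch) is linearly
# embedded by construction, plus noise.
T, n_hid = 500, 32
z = rng.uniform(-1, 1, T)
w_true = rng.normal(0, 1, n_hid)
H = np.outer(z, w_true) + 0.1 * rng.normal(0, 1, (T, n_hid))

def ridge_decode(H, z, alpha=1e-2):
    """Closed-form ridge regression from hidden states to the latent:
    w = (H^T H + alpha I)^{-1} H^T z."""
    A = H.T @ H + alpha * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ z)

# Fit on the first half of the data, evaluate on the held-out second half
w = ridge_decode(H[:250], z[:250])
pred = H[250:] @ w
r = np.corrcoef(pred, z[250:])[0, 1]  # held-out decoding correlation
```

A high held-out correlation `r` is the kind of evidence one would use to argue that a variable is represented in the network's activity; applying the same decoder to states the network should not track serves as a control.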

