Deep Reinforcement Learning And Its Neuroscientific Implications

August 30, 2020

Speaker: Debaditya Bhattacharya, Second-Year Undergraduate Student, Department of Physics
Title: Deep Reinforcement Learning And Its Neuroscientific Implications (Review Paper)
Type: Paper
Paper: Deep Reinforcement Learning And Its Neuroscientific Implications, by Matthew Botvinick, Jane X. Wang, Will Dabney, Kevin J. Miller and Zeb Kurth-Nelson
Abstract:
What if a machine could mimic the way a human learns? Sounds pretty ambitious, right? Reinforcement Learning is one of the three paradigms of Machine Learning that can be used for this purpose. Here, there is a single task to be accomplished, and the machine looks for an optimal way to solve it while maximising its cumulative reward. Does that sound familiar? It is much like a person learning from their mistakes. Reinforcement Learning bears a subtle closeness to human learning, and its neuroscientific implications were the focus of the event.
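To make the idea of maximising cumulative reward concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms, on a hypothetical 5-state chain environment (the environment, its parameters, and all names here are illustrative, not from the paper):

```python
import random

random.seed(0)

# Hypothetical chain MDP: the agent starts at state 0 and is rewarded
# only on reaching state 4; every other transition yields reward 0.
N_STATES = 5
ACTIONS = [-1, +1]                    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the chain, reward at the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        target = r + GAMMA * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy steps right from every non-terminal state,
# i.e. the agent has found the path that maximises cumulative reward.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Deep RL, the subject of the paper, replaces the explicit Q-table above with a deep neural network that estimates these values, which is what lets the approach scale to rich, high-dimensional inputs.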

The emergence of powerful artificial intelligence (AI) is defining new research directions in neuroscience. To date, this research has focused largely on deep neural networks trained using supervised learning in tasks such as image classification. However, there is another area of recent AI work that has so far received less attention from neuroscientists but that may have profound neuroscientific implications: deep reinforcement learning (RL). Deep RL offers a comprehensive framework for studying the interplay among learning, representation, and decision making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. In the present review, we provide a high-level introduction to deep RL, discuss some of its initial applications to neuroscience, and survey its wider implications for research on brain and behaviour, concluding with a list of opportunities for next-stage research.
Slides: pptx pdf