Workshop on Reinforcement Learning Theory

Overview

While over many years we have witnessed numerous impressive demonstrations of the power of various reinforcement learning (RL) algorithms, and while much progress has also been made on the theoretical side, our theoretical understanding of the challenges that underlie RL remains rather limited. The best-studied problem settings, such as learning and acting in finite state-action Markov decision processes or simple linear control systems, fail to capture the essential characteristics of seemingly more practically relevant problem classes, where the size of the state-action space is often astronomical, the planning horizon is huge, the dynamics are complex, interaction with the controlled system is not permitted, or learning has to happen from heterogeneous offline data. To tackle these diverse issues, more and more theoreticians with a wide range of backgrounds have come to study RL and have proposed numerous new models along with exciting novel developments in both algorithm design and analysis. The workshop's goal is to highlight advances in theoretical RL and bring together researchers from different backgrounds to discuss RL theory from different perspectives: modeling, algorithms, analysis, etc.

This workshop will feature seven keynote speakers from computer science, operations research, control, and statistics to highlight recent progress, identify key challenges, and discuss future directions. Invited keynotes will be augmented by contributed talks, poster presentations, panel discussions, and virtual social events.

Schedule

UTC 16:00 - 16:50 Invited Talks 1 & 2
UTC 17:00 - 17:50 Contributed Talks
UTC 18:00 - 18:50 Invited Talks 3 & 4
UTC 19:00 - 19:30 Social Session
UTC 19:30 - 21:00 Poster Session
UTC 21:00 - 21:50 Invited Talks 5 & 6
UTC 22:00 - 22:50 Contributed Talks
UTC 23:00 - 23:25 Invited Talk 7
UTC 23:30 - 00:00 Panel Discussion
UTC 00:00 - 00:30 Social Session
UTC 00:30 - 02:00 Poster Session

Keynote Speakers

Anima Anandkumar

Professor
California Institute of Technology

Bo Dai

Senior Research Scientist
Google Brain

Emilie Kaufmann

CNRS Junior Researcher
CNRS & Université de Lille

Christian Kroer

Assistant Professor
Columbia University

Shie Mannor

Professor
Technion

Art Owen

Professor
Stanford University

Qiaomin Xie

Visiting Assistant Professor
Cornell University

Call for Papers

We invite submissions on topics including, but not limited to:

  • Sample complexity of RL
  • RL with function approximation
  • Model-based RL
  • Model-free RL
  • Computational efficiency of RL
  • Exploration
  • Causality and reinforcement learning
  • Game theory in RL
  • Multi-agent reinforcement learning
  • Partially observed RL settings
  • RL under constraints

We encourage participants to submit a 4-page extended abstract using the ICML submission template. Please submit a single PDF in ICML format that includes the main paper and any supplementary material. Submissions must be anonymized. All submissions will be reviewed and evaluated on the basis of their technical content and relevance to the workshop. Accepted papers will be selected for either a short virtual poster session or a spotlight presentation.

This workshop will not have archival proceedings, so we welcome submissions of work currently under review at other archival ML venues.

Important Dates


Paper Submission Deadline: June 7th, 2021, 11:59 PM UTC (CMT)

Author Notification: TBD

Final Version: TBD

Workshop: July 23rd or July 24th

Program Committee

  • David Abel (DeepMind)
  • Sanae Amani (UCLA)
  • Zaiwei Chen (Georgia Tech)
  • Yifang Chen (University of Washington)
  • Xinyi Chen (Princeton)
  • Qiwen Cui (Peking University)
  • Yaqi Duan (Princeton)
  • Vikranth Dwaracherla (Stanford)
  • Fei Feng (UCLA)
  • Dylan Foster (MIT)
  • Botao Hao (DeepMind)
  • Ying Jin (Stanford)
  • Sajad Khodadadian (Georgia Tech)
  • Tor Lattimore (DeepMind)
  • Qinghua Liu (Princeton)
  • Thodoris Lykouris (MSR)
  • Gaurav Mahajan (UCSD)
  • Sobhan Miryoosefi (Princeton)
  • Aditya Modi (UMich)
  • Vidya Muthukumar (Georgia Tech)
  • Gergely Neu (Pompeu Fabra University)
  • Nived Rajaraman (UC Berkeley)
  • Max Simchowitz (UC Berkeley)
  • Yi Su (Cornell)
  • Jean Tarbouriech (Inria Lille)
  • Masatoshi Uehara (Cornell)
  • Ruosong Wang (CMU)
  • Jingfeng Wu (JHU)
  • Tengyang Xie (UIUC)
  • Jiaqi Yang (Tsinghua University)
  • Ming Yin (UCSB)
  • Andrea Zanette (Stanford University)
  • Zihan Zhang (Tsinghua University)
  • Kaiqing Zhang (UIUC)
  • Angela Zhou (Cornell)

Workshop Organizers

Shipra Agrawal

Columbia University

Simon S. Du

University of Washington

Niao He

ETH Zürich

Csaba Szepesvári

University of Alberta / DeepMind

Lin F. Yang

University of California, Los Angeles


We thank Hoang M. Le for providing the website template.