Call for Papers
AIM AND SCOPE
Reinforcement Learning (RL) is a promising approach for continuous learning and for discovering optimal policies for complex tasks. However, ensuring the safety and robustness of decision-making in RL systems remains a major open challenge, one that has drawn efforts from a variety of communities, including RL, human-robot interaction (HRI), control, and formal methods (FM).
The aim of this multidisciplinary workshop is to bring together researchers from industry and academia across these communities to identify and clearly define key challenges, and to propose and debate existing approaches to safe and robust exploration, formal safety and stability guarantees for control systems, and safety in physical human-robot collaborative systems, as well as to discuss methods and benchmarks that accelerate safe and robust RL research. To foster the exchange of ideas between communities, the workshop is designed to encourage fruitful and lively discussion and is open to anyone.
TOPICS OF INTEREST
Based on the target areas and the discussions during our RL-CONFORM workshop at last year’s IROS, topics of interest include but are not limited to:
* Data-efficiency, sim-to-real gap, and guided exploration in RL;
* Safety guarantees, shielding, invariant sets, and online verification;
* Stability, Lyapunov functions, controllability, and model identification;
* Query sample-efficiency, human-robot interaction, learning from demonstration, and human feedback;
* Existing and new benchmarks to accelerate safe and robust RL research.
CALL FOR CONTRIBUTIONS
We invite extended abstract submissions of preliminary or ongoing work related to the topics of interest on safety and robustness in RL. All accepted abstracts will be presented at the workshop during a short-paper session, in which authors present their work in a 5-minute talk followed by a 3-minute live Q&A. This is a non-archival venue: there will be no formal proceedings, but we strongly encourage authors to publish their extended abstracts on arXiv; links to the papers will be placed on the workshop's website and will remain available after the workshop. Abstracts may be submitted to other venues in the future.
Invited Speakers and Panelists:
* Jeanette Bohg, Stanford University, USA.
* Georgia Chalvatzaki, TU Darmstadt, Germany.
* Bradley Hayes, University of Colorado, USA.
* Nils Jansen, Radboud University Nijmegen, the Netherlands.
* Hadas Kress-Gazit, Cornell University, USA.
* Scott Niekum, University of Texas, USA.
* Fabio Ramos, University of Sydney, Australia, and NVIDIA, USA.
* Tesca Fitzgerald, Carnegie Mellon University, USA.
* Christian Pek, KTH Royal Institute of Technology, Sweden.
* Alexis Linard, KTH Royal Institute of Technology, Sweden.
* Sanne van Waveren, KTH Royal Institute of Technology, Sweden.
* Hang Yin, KTH Royal Institute of Technology, Sweden.
RL-CONFORM 2022, the 2nd RL-CONFORM Workshop: Reinforcement Learning meets HRI, Control, and Formal Methods, will take place in Kyoto, Japan. It is a five-day event, running from Sunday, October 23 through Thursday, October 27, 2022.
RL-CONFORM 2022 covers the areas of formal methods, control, and human-robot interaction, among others. Submissions are due by September 1, 2022, and authors will be notified of acceptance by September 12, 2022. Upon acceptance, the final version of the manuscript must be submitted via the workshop's official website on or before September 19, 2022.
Please check the official workshop website for possible changes to these deadlines before making any travel arrangements.