CFP: Learning for Safe and Robust Control

Motivation

Safety-critical tasks are common in real-life robotic applications. A task is considered safety-critical when, for example, a robot acts in close proximity to humans, especially if the robot has limited knowledge of its environment. Possible scenarios include robots sharing workspaces with humans in manufacturing, autonomous cars navigating urban contexts, or quadrupeds operating on worksites. Developing controllers that adapt to dynamic environments whilst simultaneously providing theoretical safety guarantees is therefore essential for real-life robotic applications.

Although several approaches have been published in recent years, considerable effort is still needed to design safe, robust robots that work in real-world scenarios. The topic of this SI is thus a highly active area in the landscape of robotics research.

The proposed SI hinges on contributions that advance the field of learning for robust and safe control. Our interest encompasses both theoretical and empirical results showing that combining machine learning strategies with model-based controllers is key to developing stable and robust autonomous systems: systems that adapt to dynamic environmental changes, are resilient to external disturbances and unmodelled dynamics, and behave in accordance with the theoretical guarantees of their controllers.

Topical Areas

Topics of interest for this special issue include, but are not limited to:

  • Model learning for robust control
  • Control approaches for safe policy learning
  • Optimal control strategies for robust lifelong learning systems
  • Model predictive control combined with learning techniques
  • Safe planning algorithms
  • Safe transfer learning for model-based autonomous systems
  • Safe exploration for model learning and control
  • Cautious controllers that reduce the epistemic error

Timeline

Call for papers announced: October 15, 2022
Papercept open for submission: January 3, 2023
Submission deadline: March 31, 2023
Authors receive RA-L reviews and recommendation: June 25, 2023
Authors of accepted MS submit final RA-L version: July 9, 2023
Authors of R&R MS resubmit revised MS: July 25, 2023
Authors receive final RA-L decision: August 29, 2023
Authors submit final RA-L files: September 12, 2023
Camera-ready version appears in RA-L on Xplore: September 17, 2023
Final Publication: September 27, 2023

Guest Editors

Marco Cognetti
Maynooth University
Ireland
Valerio Modugno
University College London
United Kingdom
Akshara Rai
Facebook AI Research
United States of America
Elmar Rueckert
University of Leoben
Austria
Angela Schoellig
University of Toronto
Toronto, Canada
Freek Stulp
German Aerospace Center (DLR)
Munich, Germany

Supervising RA-L Senior Editor

Jens Kober
Delft University of Technology
Delft, Netherlands