CFP: Learning for Safe and Robust Control

Motivation

Safety-critical tasks are common in real-life robotic applications. A task is considered safety-critical when, for example, a robot acts in close proximity to humans, especially with limited knowledge of its environment. Typical scenarios include robots that share workspaces with humans in manufacturing, autonomous cars navigating urban environments, and quadrupeds operating on worksites. Developing controllers that adapt to dynamic environments while simultaneously providing theoretical safety guarantees is therefore essential for real-life robotic applications.

Although several approaches have been published in recent years, significant effort is still needed to design safe, robust robots that work in real-world scenarios. The topic of this SI is thus a highly active one in the landscape of robotics research.

The proposed SI hinges on contributions that advance the field of learning for robust and safe control. Our interest encompasses both theoretical and empirical results showing that combining machine learning strategies with model-based controllers is key to developing stable and robust autonomous systems that adapt to dynamic environmental changes, are resilient to external disturbances and unmodelled dynamic effects, and behave in accordance with the theoretical guarantees of their controllers.

Topical Areas

Topics of interest for this special issue include, but are not limited to:

  • Model learning for robust control
  • Control approaches for safe policy learning
  • Optimal control strategies for robust lifelong learning systems
  • Model predictive control combined with learning techniques
  • Safe planning algorithms
  • Safe transfer learning for model-based autonomous systems
  • Safe exploration for model learning and control
  • Cautious controllers that reduce the epistemic error

Timeline

Call for papers announced: October 15, 2022
PaperCept open for submission: March 1, 2023
Submission deadline: April 30, 2023
Authors receive RA-L reviews and recommendation: July 25, 2023
Authors of accepted MS submit final RA-L version: August 9, 2023
Authors of R&R MS resubmit revised MS: August 25, 2023
Authors receive final RA-L decision: August 29, 2023
Authors submit final RA-L files: October 12, 2023
Camera-ready version appears in RA-L on Xplore: October 17, 2023
Final publication: October 27, 2023

Guest Editors

Valerio Modugno

University College London

United Kingdom

Email: v.modugno@ucl.ac.uk

Freek Stulp

German Aerospace Center (DLR)

Munich, Germany

Email: Freek.Stulp@dlr.de

Akshara Rai

Facebook AI Research

United States of America

Email: akshararai@fb.com

Elmar Rueckert

University of Leoben

Germany

Email: rueckert@unileoben.ac.at

Angela Schoellig

University of Toronto

Toronto, Canada

Email: schoellig@utias.utoronto.ca

Marco Cognetti

Maynooth University

Ireland

Email: marco.cognetti@my.ie

Supervising RA-L Senior Editor

Jens Kober
Senior Editor - Learning for Safe and Robust Control
Delft University of Technology
Delft, Netherlands
