News

Recent Technical Innovations in Robot Learning

Robot learning has remained a popular and fast-developing area in recent years (2019-2022), with diverse technical innovations across a wide range of topics. Here we list the three most notable. All of them center on data, a core topic in robot learning, including using differentiable simulation to generate training data efficiently and leveraging previously collected data from diverse domains.

Differentiable physics simulation: With recent advances in automatic differentiation tools and libraries, a number of techniques have been proposed to make physics simulation differentiable. These simulators provide not only a forward-step function but also gradients through the simulation, which can be back-propagated to solve robotic control and learning problems as well as system identification for both rigid and non-rigid bodies.
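
As a rough illustration of the idea, the sketch below back-propagates through a toy one-dimensional damped point-mass rollout to optimize a control input by gradient descent. The use of JAX for automatic differentiation and the dynamics themselves are illustrative assumptions, not tied to any particular simulator discussed above.

    # Minimal sketch: gradient-based control through a differentiable rollout.
    # The damped point-mass dynamics, step count, and learning rate are toy
    # assumptions chosen only to keep the example short.
    import jax

    DT, STEPS = 0.05, 40

    def simulate(force):
        """Differentiable forward rollout of a 1-D damped point mass."""
        pos, vel = 0.0, 0.0
        for _ in range(STEPS):
            acc = force - 0.5 * vel      # constant push with simple damping
            vel = vel + DT * acc
            pos = pos + DT * vel
        return pos

    def loss(force, target=1.0):
        return (simulate(force) - target) ** 2

    # Because the whole rollout is differentiable, the control input can be
    # optimized directly by gradient descent instead of black-box search.
    grad_fn = jax.grad(loss)
    force = 0.0
    for _ in range(100):
        force = force - 0.1 * grad_fn(force)
    print(force, simulate(force))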

Large language models applied to robotics: Large language models (LLMs) are recognized for their potential to encode large-scale common-sense knowledge. They have been shown to be valuable for high-level robotic planning from natural language instructions, e.g., Ichter et al. "Do As I Can, Not As I Say: Grounding Language in Robotic Affordances". Conference on Robot Learning (CoRL), 2022.
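
The sketch below illustrates the grounding idea in that line of work: each candidate skill is scored by the product of a language-model "usefulness" score and a learned affordance (feasibility) score, and the best-scoring skill is executed next. The skill names and the two toy scoring functions are hypothetical placeholders, not the actual models or interfaces from the cited paper.

    # Toy SayCan-style planner: rank skills by LLM usefulness x affordance.
    # SKILLS, llm_usefulness, and affordance are illustrative stand-ins.
    SKILLS = ["pick up the sponge", "go to the table", "wipe the table", "done"]

    def llm_usefulness(instruction, history, skill):
        # Placeholder for an LLM score: how useful is `skill` as the next step?
        toy_plan = ["pick up the sponge", "go to the table", "wipe the table", "done"]
        step = len(history)
        return 1.0 if step < len(toy_plan) and skill == toy_plan[step] else 0.1

    def affordance(skill, observation):
        # Placeholder for a learned value: can `skill` succeed from this state?
        return 0.9 if skill in observation["feasible"] else 0.05

    def plan(instruction, observation, max_steps=5):
        history = []
        for _ in range(max_steps):
            # Combine "say" (usefulness) and "can" (feasibility) per skill.
            best = max(SKILLS, key=lambda s: llm_usefulness(instruction, history, s)
                                             * affordance(s, observation))
            history.append(best)
            if best == "done":
                break
        return history

    print(plan("clean the table", {"feasible": set(SKILLS)}))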

Offline/model-based reinforcement learning algorithms: Offline reinforcement learning (RL) aims to extract policies from large, previously collected static datasets without further interaction with the environment. Model-based RL learns or relies on a model of the environment to derive policies. Many algorithms under these two paradigms have been proposed in the past three years to improve data efficiency and expand RL's applicability in the real world.
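
As a small illustration of the offline side of this paradigm, the tabular sketch below learns Q-values from a fixed set of transitions and adds a conservative penalty that keeps actions absent from the data pessimistic, loosely in the spirit of conservative Q-learning (Kumar et al., 2020, listed below). The tiny MDP, dataset, and penalty weight are illustrative assumptions.

    # Tabular offline RL sketch with a conservative penalty (toy example).
    import numpy as np

    n_states, n_actions = 5, 2
    gamma, lr, alpha = 0.95, 0.1, 0.5   # discount, learning rate, penalty weight

    # Fixed dataset of (state, action, reward, next_state) transitions;
    # no further environment interaction is allowed.
    dataset = [(0, 1, 0.0, 1), (1, 1, 0.0, 2), (2, 0, 1.0, 3), (3, 1, 0.0, 4)]

    Q = np.zeros((n_states, n_actions))
    for _ in range(500):
        for s, a, r, s_next in dataset:
            # Standard TD backup toward the Bellman target.
            target = r + gamma * Q[s_next].max()
            Q[s, a] += lr * (target - Q[s, a])
            # Conservative term: push all Q-values in state s down, then push
            # the observed action back up, so unseen actions stay pessimistic.
            penalty_grad = -np.ones(n_actions)
            penalty_grad[a] += 1.0
            Q[s] += lr * alpha * penalty_grad

    print(Q)
    print("greedy policy:", Q.argmax(axis=1))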

List of Notable Recent Publications

The sections below list papers from 2019-2022 related to robot learning that have received a large number of citations per year.

Benchmarks / libraries:

  • Raffin, Antonin, et al. "Stable-Baselines3: Reliable reinforcement learning implementations". Journal of Machine Learning Research (JMLR) Open Source Software, 2021.
  • Savva, Manolis, et al. "Habitat: A platform for embodied AI research". Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
  • Yu, Tianhe, et al. "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning." Conference on Robot Learning. PMLR, 2020.
  • Fu, Justin, et al. "D4RL: Datasets for deep data-driven reinforcement learning". arXiv preprint arXiv:2004.07219, 2020.

Review / survey papers:

  • Zhang, Kaiqing, Zhuoran Yang, and Tamer Başar. "Multi-agent reinforcement learning: A selective overview of theories and algorithms". Handbook of Reinforcement Learning and Control (2021): 321-384.
  • Kiran, B. Ravi, et al. "Deep reinforcement learning for autonomous driving: A survey". IEEE Transactions on Intelligent Transportation Systems (2021).
  • Dulac-Arnold, Gabriel, et al. "Challenges of real-world reinforcement learning: definitions, benchmarks and analysis". Machine Learning 110.9, 2021.
  • Grigorescu, Sorin, et al. "A survey of deep learning techniques for autonomous driving". Journal of Field Robotics 37.3, 2020.
  • Levine, Sergey, et al. "Offline reinforcement learning: Tutorial, review, and perspectives on open problems". arXiv preprint arXiv:2005.01643, 2020.
  • Arora, Saurabh, and Prashant Doshi. "A survey of inverse reinforcement learning: Challenges, methods and progress". Artificial Intelligence 297, 2021.
  • Nguyen, Thanh Thi, Ngoc Duy Nguyen, and Saeid Nahavandi. "Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications". IEEE Transactions on Cybernetics 50.9, 2020.
  • Murphy, Jamie, Ulrike Gretzel, and Juho Pesonen. "Marketing robot services in hospitality and tourism: the role of anthropomorphism". Future of Tourism Marketing. Routledge, 2021.
  • Kroemer, Oliver, Scott Niekum, and George Konidaris. "A review of robot learning for manipulation: Challenges, representations, and algorithms". The Journal of Machine Learning Research 22.1, 2021.
  • Hewing, Lukas, et al. "Learning-based model predictive control: Toward safe learning in control". Annual Review of Control, Robotics, and Autonomous Systems 3, 2020.
  • Garrett, Caelan Reed, et al. "Integrated task and motion planning". Annual Review of Control, Robotics, and Autonomous Systems 4, 2021.
  • Ravichandar, Harish, et al. "Recent advances in robot learning from demonstration". Annual Review of Control, Robotics, and Autonomous Systems 3, 2020.

New methods in robot learning:

  • Andrychowicz, Marcin, et al. (OpenAI). "Learning dexterous in-hand manipulation". The International Journal of Robotics Research 39.1, 2020.
  • Zeng, Andy, et al. "Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching". The International Journal of Robotics Research 41.7, 2022.
  • Hwangbo, Jemin, et al. "Learning agile and dynamic motor skills for legged robots". Science Robotics 4.26, 2019.
  • Burger, Benjamin, et al. "A mobile robotic chemist". Nature 583.7815, 2020.
  • Lee, Joonho, et al. "Learning quadrupedal locomotion over challenging terrain". Science Robotics 5.47, 2020.
  • Akkaya, Ilge, et al. "Solving Rubik's cube with a robot hand". arXiv preprint arXiv:1910.07113, 2019.
  • Berkenkamp, Felix, Andreas Krause, and Angela P. Schoellig. "Bayesian optimization with safety constraints: safe and automatic parameter tuning in robotics". Machine Learning, 2021.
  • Kendall, Alex, et al. "Learning to drive in a day". 2019 International Conference on Robotics and Automation. IEEE, 2019.
  • Sarlin, Paul-Edouard, et al. "From coarse to fine: Robust hierarchical localization at large scale". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
  • Kahn, Gregory, Pieter Abbeel, and Sergey Levine. "BADGR: An autonomous self-supervised learning-based navigation system". IEEE Robotics and Automation Letters 6.2, 2021.
  • Chebotar, Yevgen, et al. "Closing the sim-to-real loop: Adapting simulation randomization with real world experience". International Conference on Robotics and Automation. IEEE, 2019.
  • Zeng, Andy, et al. "Transporter networks: Rearranging the visual world for robotic manipulation". Conference on Robot Learning. PMLR, 2021.

Relevant new methods in reinforcement learning:

Note: many of these papers evaluate only in simulated environments, but a significant portion of the robot learning community looks to them for inspiration.

  • Hafner, Danijar, et al. "Learning latent dynamics for planning from pixels". International Conference on Machine Learning, 2019.
  • Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning". Advances in Neural Information Processing Systems 33, 2020.
  • Fujimoto, Scott, David Meger, and Doina Precup. "Off-policy deep reinforcement learning without exploration". International Conference on Machine Learning, 2019.
  • Jaderberg, Max, et al. "Human-level performance in 3D multiplayer games with population-based reinforcement learning". Science 364.6443, 2019.
  • Kaiser, Lukasz, et al. "Model-based reinforcement learning for Atari". International Conference on Learning Representations, 2019.
  • Janner, Michael, et al. "When to trust your model: Model-based policy optimization". Advances in Neural Information Processing Systems 32, 2019.
  • Laskin, Misha, et al. "Reinforcement learning with augmented data". Advances in Neural Information Processing Systems 33, 2020.
  • Yu, Tianhe, et al. "Gradient surgery for multi-task learning". Advances in Neural Information Processing Systems 33, 2020.
  • Yu, Tianhe, et al. "Mopo: Model-based offline policy optimization". Advances in Neural Information Processing Systems 33, 2020.
  • Stooke, Adam, et al. "Decoupling representation learning from reinforcement learning". International Conference on Machine Learning, 2021.
  • Laskin, Michael, Aravind Srinivas, and Pieter Abbeel. "CURL: Contrastive Unsupervised Representations for Reinforcement Learning". International Conference on Machine Learning, 2020.
  • Chaplot, Devendra Singh, et al. "Learning to explore using active neural slam". International Conference on Learning Representations, 2020.