Distinguished Lecturers

To request a Distinguished Lecturer (DL) for your next event, complete the DL Application Form. For more information or to see the full list, go to the Distinguished Lecturers page.

Arash Ajoudani
Collaborative Automation for Flexible Manufacturing
Istituto Italiano di Tecnologia
Genova, Italy
RAS Geographic Region 2
Arash Ajoudani is a tenured senior scientist at IIT, where he leads the HRI² laboratory. He is a recipient of a European Research Council (ERC) Starting Grant (2019), coordinator of the Horizon 2020 project SOPHIA, and co-coordinator of the Horizon 2020 project CONCERT. He also coordinates, or is a principal investigator of, several technology-transfer initiatives such as the JOiiNT lab. He received the IEEE-RAS Early Career Award 2021 and won the Amazon Research Awards 2019, the Solution Award 2019 (MECSPE2019), the KUKA Innovation Award 2018, the WeRob best poster award 2018, and the best student paper award at ROBIO 2013. His PhD thesis was a finalist for the Georges Giralt PhD Award 2015 for the best European PhD thesis in robotics. He was also a finalist for the Solution Award 2020 (MECSPE2020), the best conference paper award at Humanoids 2018, the best interactive paper award at Humanoids 2016, the best oral presentation award at Automatica (SIDRA) 2014, and the best manipulation paper award at ICRA 2012. He is the author of the book "Transferring Human Impedance Regulation Skills to Robots" in the Springer Tracts in Advanced Robotics (STAR) series. He currently serves as executive manager of the IEEE-RAS Young Reviewers' Program (YRP) and as chair and representative of the IEEE-RAS Young Professionals Committee.

Talk #1

Ergonomic human-robot interaction and collaboration
This talk presents the new concept of ergonomic human-robot interaction and collaboration. The first part gives an overview of reduced-complexity human dynamic modeling and the associated ergonomic factors, such as overloading torques, body compressive forces, and fatigue. The second part focuses on the control of human-robot interaction and collaboration through an optimization problem that takes into account the ergonomic requirements of the human co-worker. Using the reduced-complexity whole-body dynamic model of the human, the position of the interactive/collaborative task in the workspace is optimized so that ergonomic indexes, such as the overloading torques (the effect of an external load on the human body joints), are minimized. The optimization also includes several constraints, such as the manipulability properties of the human arm, to ensure that the human retains good manipulation capacity in the chosen configuration. The main advantage of this approach is that the robot can help reduce work-related strain and increase the productivity of the human co-worker.
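As a rough illustration of the kind of constrained task-placement optimization described above (not the speaker's actual formulation), the sketch below minimizes a placeholder overloading-torque cost while keeping a placeholder arm-manipulability index above a threshold; the model functions, bounds, and numerical values are all assumptions.

import numpy as np
from scipy.optimize import minimize

SHOULDER = np.array([0.0, 0.0, 1.0])   # assumed shoulder position [m]
LOAD = np.array([0.0, 0.0, -20.0])     # assumed external load [N]

def overloading_torques(task_pos):
    # Placeholder model: torque induced at the body by supporting the load at task_pos.
    return np.cross(task_pos - SHOULDER, LOAD)

def arm_manipulability(task_pos):
    # Placeholder index: capacity degrades toward the edge of an assumed 0.8 m reach.
    reach = np.linalg.norm(task_pos - SHOULDER)
    return 1.0 - (reach / 0.8) ** 2

def ergonomic_cost(task_pos):
    # Minimize the magnitude of the overloading torques.
    return np.linalg.norm(overloading_torques(task_pos))

result = minimize(
    ergonomic_cost,
    x0=np.array([0.5, 0.2, 1.1]),                      # initial task placement
    constraints=[{"type": "ineq",                      # keep manipulability above 0.3
                  "fun": lambda q: arm_manipulability(q) - 0.3}],
    method="SLSQP",
)
print("Ergonomically optimized task position:", result.x)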

Talk #2

A novel framework for context-aware and adaptive interaction between robots and uncertain environments
Nowadays, robots are expected to enter various application scenarios and interact with unknown and dynamically changing environments. This highlights the need for autonomous robot behaviors that explore such environments, identify their characteristics, adapt, and build knowledge for future interactions. To respond to this need, this talk presents a novel framework that integrates multiple components to achieve context-aware and adaptive interaction between the robot and uncertain environments. The core of this framework is a novel self-tuning impedance controller that regulates the robot's quasi-static parameters, i.e., stiffness and damping, based on the robot's sensory data and vision. The parameters are tuned only in the direction(s) of interaction or movement, by distinguishing expected interactions from external disturbances. A vision module recognizes environmental characteristics and associates them with previously or newly identified interaction parameters, while the robot remains able to adapt to new changes or unexpected situations. This enables faster robot adaptability, starting from better initial interaction parameters. The application of this framework in various interaction scenarios, such as soft-material handling, item sorting, and load carrying, will be presented.
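As a minimal sketch of the self-tuning idea (the gains, bounds, and adaptation rule are assumptions, not the actual controller), the snippet below adapts the Cartesian stiffness only along the current direction of motion, based on the mismatch between the expected and measured interaction force, and leaves the orthogonal directions untouched.

import numpy as np

K_MIN, K_MAX = 100.0, 1500.0   # stiffness bounds [N/m] (assumed values)
ALPHA = 5.0                    # adaptation gain (assumed value)

def adapt_stiffness(K, motion_dir, f_measured, f_expected):
    """Adapt the Cartesian stiffness matrix K only along motion_dir.

    Stiffness is raised when the measured force along the motion falls short of
    what the task expects (e.g., pressing on a stiff surface), and lowered when
    the measured force exceeds it (treated as an external disturbance)."""
    d = motion_dir / np.linalg.norm(motion_dir)
    force_error = f_expected - float(np.dot(f_measured, d))
    k_along = float(np.clip(d @ K @ d + ALPHA * force_error, K_MIN, K_MAX))
    P = np.outer(d, d)                      # projector onto the motion direction
    return K - P @ K @ P + k_along * P      # replace only the component along d

# Example: stiffen along x while pushing; y and z are left unchanged.
K = np.diag([800.0, 800.0, 800.0])
K = adapt_stiffness(K, np.array([1.0, 0.0, 0.0]),
                    f_measured=np.array([2.0, 0.0, 0.0]), f_expected=10.0)
# Companion damping, critically damped under an assumed unit apparent mass.
D = 2.0 * np.sqrt(K.clip(min=0.0))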
Yi Guo
Collaborative Automation for Flexible Manufacturing
Stevens Institute of Technology
Hoboken (NJ), United States
RAS Geographic Region 1

Yi Guo joined the Department of Electrical and Computer Engineering at Stevens Institute of Technology in 2005 and is currently the Thomas E. Hattrick Chair Professor. Before 2005, she was a Visiting Assistant Professor in the ECE Department of the University of Central Florida. Dr. Guo received her Ph.D. degree in Electrical and Information Engineering from the University of Sydney, Australia, in 1999. After her Ph.D., she worked as a postdoctoral research fellow at Oak Ridge National Laboratory for two years. Dr. Guo's research interests are mainly in distributed and collaborative robotic systems, human-robot interaction, and dynamic systems and controls. She has authored one book, edited another, and published more than 150 journal and conference papers. She is currently the Editor-in-Chief of the IEEE Robotics and Automation Magazine, and has served as an Associate Editor of IEEE Robotics and Automation Letters, IEEE/ASME Transactions on Mechatronics, and IEEE Access. Her awards include the Best Application Paper Award at WCICA 2018 and the Provost's Award for Research Excellence from Stevens in 2020. She frequently gives invited talks, presented a keynote speech at IROS 2019, and has served on the organizing committees of the premier robotics conferences ICRA and IROS.

Talk #1

Robot-assisted guidance and regulation

The use of autonomous mobile robots in human environments is on the rise. Assistive robots have appeared in real-world settings, such as robot guides in airports, police robots in public parks, and patrolling robots in supermarkets. In this talk, I will first present current research activities in the Robotics and Automation Laboratory at Stevens. I will then focus on robot-assisted pedestrian regulation, where pedestrian flows are regulated and optimized through passive human-robot interaction. We design feedback motion control for a robot to interact efficiently with pedestrians and achieve desirable collective motion. Both adaptive dynamic programming and deep reinforcement learning methods are applied to the formulated problem of robot-assisted pedestrian flow optimization. Simulation results in a robot simulator show that our approach regulates pedestrian flows and achieves an optimized outflow by learning from real-time observation of the pedestrian flow. Potential crowd disasters can be avoided, since the proposed approach reduces the critical crowd pressure.
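A minimal sketch of how such a regulation problem can be cast as reinforcement learning is given below; the environment interface, policy, and reward shaping are hypothetical stand-ins for illustration, not the formulation used in the work.

def reward(outflow, crowd_pressure, pressure_weight=0.5):
    # Reward higher pedestrian outflow while penalizing crowd pressure
    # (the weight is an assumed value).
    return outflow - pressure_weight * crowd_pressure

def run_episode(env, policy, steps=500):
    """Roll out one episode: the state is an observation of the pedestrian flow
    and the action is the robot's motion command (hypothetical env and policy)."""
    state = env.reset()
    total_return = 0.0
    for _ in range(steps):
        action = policy(state)                       # robot velocity command
        state, outflow, pressure = env.step(action)  # observe the resulting flow
        total_return += reward(outflow, pressure)
    return total_return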

Talk #2

Decentralized cooperative control method for multi-robot formation

Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such methods rely on sensing and perception modules arranged in a sequential design pipeline, and the separation of perception and control can introduce processing latency and compounding errors that degrade control performance. End-to-end learning overcomes this limitation by learning directly from onboard sensing data and outputting control commands to the robots. Challenges remain in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. We propose a novel decentralized cooperative control method for multi-robot formation using deep neural networks, in which inter-robot communication is modeled by a graph neural network. Our method takes LIDAR sensor data as input, and the control policy is learned in a decentralized way from demonstrations provided by an expert controller. Although trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates triangulation formation behavior of multi-robot teams of varying sizes using the learned control policy.
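Purely as an illustration of the message-passing idea (the weights, feature sizes, and single aggregation round are assumptions, not the trained network from this work), the sketch below encodes each robot's LIDAR scan into a feature, sums the features of communicating neighbors, and decodes a per-robot velocity command.

import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(32, 180))   # LIDAR encoder weights (assumed 180-beam scan)
W_msg = rng.normal(size=(32, 32))    # aggregation weights for neighbor messages
W_out = rng.normal(size=(2, 32))     # maps features to a 2-D velocity command

def policy_step(lidar_scans, adjacency):
    """lidar_scans: (N, 180) range readings; adjacency: (N, N) 0/1 communication graph."""
    h = np.tanh(lidar_scans @ W_enc.T)     # per-robot local features
    messages = adjacency @ h               # sum of neighbors' features
    h = np.tanh(h + messages @ W_msg.T)    # one round of message passing
    return h @ W_out.T                     # per-robot velocity commands

# Example: three robots with a fully connected communication graph.
scans = rng.uniform(0.1, 5.0, size=(3, 180))
A = np.ones((3, 3)) - np.eye(3)
print(policy_step(scans, A))               # (3, 2) array of velocity commands

In the work described above, weights like these would be trained by imitation of an expert controller rather than sampled at random.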

Eiichi Yoshida
Collaborative Automation for Flexible Manufacturing
Tokyo University of Science
Tokyo, Japan
RAS Geographic Region 3

Eiichi Yoshida is a Professor at the Tokyo University of Science, in the Department of Applied Electronics, Faculty of Advanced Engineering. He received his M.Eng. and Ph.D. degrees from the Graduate School of Engineering, The University of Tokyo, in 1996. He then joined the former Mechanical Engineering Laboratory, which was reorganized in 2001 into the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST-CNRS JRL (Joint Robotics Laboratory) at LAAS-CNRS, Toulouse, France, from 2004 to 2008, and at AIST, Tsukuba, Japan, from 2009 to 2021. He was also Deputy Director of the Intelligent Systems Research Institute from 2015 to 2018, and of the Industrial Cyber-Physical Systems Research Center and the TICO-AIST Cooperative Research Laboratory for Advanced Logistics at AIST from 2020 to 2021. He has been an invited visiting professor at the Karlsruhe Institute of Technology and the University of Tsukuba. He is an IEEE Fellow and a member of RSJ, SICE, and JSME. He has published more than 200 scientific papers in journals and peer-reviewed international conferences and has co-edited several books. He has received several awards, including a Best Paper Award from the journal Advanced Robotics and the honor of Chevalier de l'Ordre National du Mérite from the French government. His research interests include robot task and motion planning, human modeling, humanoid robotics, and advanced logistics technology.

Talk #1

Cyber-Physical Human: Humanoid and Digital Actor together for Understanding and Synthesizing Anthropomorphic Motions

Humanoid robots can be used as a "physical twin" of humans to analyze and synthesize human motions, and furthermore behaviors, while those robots are themselves already useful for industrial applications such as large-scale assembly. We intend to integrate humanoids and digital actors into "cyber-physical humans" in a complementary manner to understand, predict, and synthesize the behavior of anthropomorphic systems in various aspects. Since it is difficult to measure the control output of humans, we can use humanoids to validate physical interactions with the real world, and digital actors to simulate control and interaction strategies using parameterized models such as musculo-skeletal systems. Optimization is one of the key techniques for tackling this challenge. A comprehensive framework is introduced for efficient computation of the derivatives of various physical quantities essential for optimization, allowing real-time motion retargeting and musculo-skeletal analysis. I will introduce practical applications such as the quantitative evaluation of wearable devices and the monitoring of human workload. I believe human models in cyber-physical space will become important for symbiotic robotic systems that support humans naturally and efficiently in response to societal demands. Some future directions, such as remote perception and remote workspaces, are also discussed.

Talk #2

Planning Whole-Body Humanoid Motions

Motion planning is, in general, a technique that allows a robot to move from an initial to a goal configuration without being obstructed by obstacles in a given environment. Following classical methods based on the configuration space (C-space) and artificial potential fields, randomized sampling-based motion planning was proposed in the 1990s. Together with the rapid growth of computational power, it drastically broadened the range of applications to high-dimensional robots, and it began to be applied to humanoid robots in the 2000s. In its early stages, humanoid motion planning was performed in two steps: first, a kinematic and geometric path is planned with a simplified model, which is then transformed into a dynamic motion including walking. The planning techniques later evolved into whole-body motion planning that unifies the kinematics and dynamics of the full humanoid structure. These methods have been validated on various humanoid platforms. In addition, contacts with the environment, which are avoided in usual motion planning, can be actively exploited to improve the robot's ability to explore cluttered areas or manipulate bulky objects as humans do. Recent research on motion planning has been integrating machine learning, demonstrating promising results. Future possibilities for humanoids in wider applications, benefiting from these advances in motion planning and other robotic technologies, are also discussed.
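For readers unfamiliar with randomized sampling-based planning, the following is a minimal 2-D RRT sketch; the configuration space, obstacle, step size, and goal bias are placeholders, and whole-body humanoid planning works in far higher-dimensional configuration spaces under dynamic constraints.

import math
import random

STEP = 0.1   # assumed extension step in the unit-square configuration space

def collision_free(q):
    # Placeholder obstacle: a disc of radius 0.2 centered at (0.5, 0.5).
    return math.hypot(q[0] - 0.5, q[1] - 0.5) > 0.2

def rrt(start, goal, iters=5000):
    nodes = {start: None}                          # configuration -> parent
    for _ in range(iters):
        # Sample a random configuration, with a 10% bias toward the goal.
        q_rand = goal if random.random() < 0.1 else (random.random(), random.random())
        q_near = min(nodes, key=lambda q: math.dist(q, q_rand))
        theta = math.atan2(q_rand[1] - q_near[1], q_rand[0] - q_near[0])
        q_new = (q_near[0] + STEP * math.cos(theta), q_near[1] + STEP * math.sin(theta))
        if collision_free(q_new):
            nodes[q_new] = q_near
            if math.dist(q_new, goal) < STEP:      # goal reached: backtrack the path
                path = [goal, q_new]
                while nodes[path[-1]] is not None:
                    path.append(nodes[path[-1]])
                return path[::-1]
    return None                                    # no path found within the budget

print(rrt((0.1, 0.1), (0.9, 0.9)))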
