Tamim Asfour is a full Professor of Humanoid Robotics at the Institute for Anthropomatics and Robotics, High Performance Humanoid Technologies, at the Karlsruhe Institute of Technology (KIT). His research focuses on engineering high-performance 24/7 humanoid robots and on the mechano-informatics of humanoids, that is, the synergetic integration of mechatronics, informatics, and artificial intelligence methods into humanoid robot systems that are able to predict, act, and interact in the real world. Tamim is the developer of the ARMAR humanoid robot family. In his research, he reaches out to neighboring areas through large-scale national and European interdisciplinary projects combining robotics with machine learning and computer vision. Tamim is the Founding Editor-in-Chief of the IEEE-RAS Humanoids Conference Editorial Board, was co-chair of the IEEE-RAS Technical Committee on Humanoid Robots (2010-2014), Editor of the IEEE Robotics and Automation Letters, and Associate Editor of the IEEE Transactions on Robotics (2010-2014). He is scientific spokesperson of the KIT Center "Information · Systems · Technologies" (KCIST), president of the Executive Board of the German Robotics Society (DGR), and was a member of the founding Board of Directors of euRobotics (2013-2015).
Talk # 1
Engineering Humanoid Robots to Assist Humans and Augment Human Performance
Humanoid robotics has made significant progress and will continue to play a central role in robotics research and in many applications of the 21st century. Engineering complete humanoid robots that are able to learn from human observation and sensorimotor experience, to predict the consequences of actions, and to exploit interaction with the world to extend their cognitive horizon remains a grand research challenge. The talk will address recent progress towards building integrated 24/7 humanoid robots able to perform complex grasping and manipulation tasks and to learn from human observation and sensorimotor experience. In particular, I will show how learning from human observation and natural language methods can be combined to build a motion alphabet and a robot internet of skills as the basis for intuitive and flexible robot programming. In the second part of the talk, I will discuss the vision of robot suits for every human body and the transformative impact of humanoid robotics on other research areas and application fields. As a result, humanoid robots will become 24/7 wearable robot companions for augmenting or replacing human performance in daily and working environments, and humanoid technologies will contribute to personalized rehabilitation in medicine and to human support and protection in human-made and natural disasters.
Talk # 2
Generation of multi-contact whole-body motions based on natural language models learned from human motion data
Generating multi-contact whole-body motions for humanoid robots and whole-body exoskeletons constitutes an open and, due to its high dimensionality, very challenging problem of vital interest for humanoid robotics research. In this talk, we present a data-driven approach, inspired by techniques from natural language processing, for generating sequences of whole-body poses with multiple contacts. The approach uses human motion data for the autonomous generation of sequences of whole-body pose transitions for tasks that use the environment to enhance balance. To this end, we present a large-scale whole-body human motion database together with techniques for the systematic organization, annotation, and classification of human motion data, as well as for the contact-based segmentation of whole-body motion into support poses. These poses are subsequently used to train an n-gram language model whose words are whole-body poses and whose sentences are sequences of these poses that characterize a motion. Using this language model, a sequence of whole-body pose transitions that satisfies the constraints imposed by the task is generated as the sequence of transitions with the highest probability according to the language model.
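The core idea can be illustrated with a minimal sketch. The pose labels below are invented for illustration, and a bigram model stands in for a general n-gram model: support poses are treated as words, motions as sentences, and a pose sequence is generated by greedily following the most probable transition.

```python
from collections import Counter, defaultdict

# Hypothetical training data: each motion is a sequence of support-pose
# labels, as would result from contact-based segmentation of human
# motion recordings (the labels here are invented for illustration).
motions = [
    ["2feet", "1foot", "2feet", "2feet+hand", "2feet"],
    ["2feet", "2feet+hand", "1foot+hand", "2feet+hand", "2feet"],
    ["2feet", "1foot", "1foot+hand", "2feet+hand", "2feet"],
]

# Count transitions between consecutive support poses (the bigram model).
counts = defaultdict(Counter)
for seq in motions:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def most_probable_sequence(start, length):
    """Greedily follow the highest-probability pose transition."""
    seq = [start]
    while len(seq) < length and counts[seq[-1]]:
        seq.append(counts[seq[-1]].most_common(1)[0][0])
    return seq
```

In the approach described in the talk, the generated sequence would additionally be filtered against the constraints imposed by the task; the greedy decoding here is only the simplest possible stand-in for a highest-probability search.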
Stanford (CA), United States
Oussama Khatib received his PhD from Sup'Aero, Toulouse, France, in 1980. He is Professor of Computer Science and Director of the Robotics Laboratory at Stanford University. His research focuses on methodologies and technologies in human-centered robotics. He is a Fellow of IEEE and Co-Editor of the Springer Tracts in Advanced Robotics (STAR) series and of the Springer Handbook of Robotics. Professor Khatib is the President of the International Foundation of Robotics Research (IFRR). He is a recipient of the IEEE RAS Pioneer Award, the George Saridis Leadership Award, the Distinguished Service Award, the Japan Robot Association (JARA) Award, the Rudolf Kalman Award, and the IEEE Technical Field Award. In 2018, Professor Khatib was elected to the National Academy of Engineering.
Talk # 1
The Age of Human-Robot Collaboration
Robotics is undergoing a major transformation in scope and dimension with accelerating impact on the economy, production, and culture of our global society. The generations of robots now being developed will increasingly touch people and their lives. They will explore, work, and interact with humans in their homes and workplaces, in new production systems, and in challenging field domains. The emerging robots will provide increased support in mining, underwater, and hostile environments, as well as in domestic, health, industry, and service applications. Combining the experience and cognitive abilities of the human with the strength, dependability, reach, and endurance of robots will fuel a wide range of new robotic applications. The discussion focuses on design concepts, control architectures, task primitives, and strategies that bring human modeling and skill understanding to the development of this new generation of collaborative robots.
Tomomichi Sugihara is a researcher at Preferred Networks, Inc. He received his BS and MS degrees in mechanical engineering from the University of Tokyo, Japan, in 1999 and 2001, respectively, and his PhD from the University of Tokyo in 2004. He was an academic research assistant at the University of Tokyo from 2004 to 2005 before becoming a research associate there. He worked at Kyushu University as a guest associate professor from 2007 to 2010, and at Osaka University as an associate professor from 2010 to 2019, before taking up his current position. His research interests include kinematics and dynamics computation, motion planning, control, hardware design, and software development for anthropomorphic robots. He also studies human motor control based on robotic technologies. He is a member of IEEE.
Talk # 1
Dynamics Morphing --- a paradigm to integrate various robot motions
"Dynamics morphing" is a paradigm that enables a robot not only to perform individual motions but also to make seamless transitions between them. Bipedalism, for instance, implies the capability of locomoting through environments by discontinuously alternating the support leg, and of keeping standing at a desired point. From the viewpoint of dynamical systems analysis, to stand means to asymptotically stabilize the system, while to step means to locally destabilize the system and move to another stable equilibrium. A self-excited cycle of such stabilization and destabilization forms walking. This interpretation of motions as dynamical systems suggests a controller design that integrates various motions into a `morphing' dynamical system, which provides the robot with flexibility against perturbations. In this talk, the above concept and some implementations of a humanoid robot controller that has been extended to a variety of behaviors are presented.
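The stabilize/destabilize cycle can be sketched in a toy one-dimensional example (illustrative only, not the controller presented in the talk): the state is asymptotically stabilized toward one equilibrium and, once converged, "destabilized" by switching the attractor, yielding a self-excited cycle reminiscent of stepping.

```python
def morphing_controller(x, target, gain=5.0):
    """Asymptotically stabilize x toward the currently active equilibrium."""
    return -gain * (x - target)

equilibria = [0.0, 1.0]   # e.g., alternating support positions
x, active, dt = 0.0, 0, 0.01
trace = []
for _ in range(2000):
    if abs(x - equilibria[active]) < 1e-3:  # converged: "destabilize"
        active = 1 - active                 # switch to the other attractor
    x += dt * morphing_controller(x, equilibria[active])
    trace.append(x)
```

The state oscillates between the two equilibria without any externally imposed trajectory; the cycle emerges from the convergence-triggered switching alone, which is the sense in which it is self-excited.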
Talk # 2
Successive information processing for biped locomotion in the real world
In this talk, how to design the locomotion system of a biped robot that can move flexibly and robustly in the real world is discussed. The robot should perceive the state of the world and of the robot itself, and determine how to behave in real time, even from uncertain, incomplete, noisy, and unpredictable sensory information. The overall system should not follow a sequential architecture, in which each process waits for complete information from the preceding process, as in conventional systems, but rather "the subsumption architecture", in which a homeostatic controller works in the lower layer and the upper layers subsume it. This is not easy, since every process, including navigation, mapping, and perception, has to be implemented in a successive way, i.e., in the form of a differential equation rather than as a batch process. Some key technologies for building up such a system are presented.
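A minimal sketch (illustrative only, not the speaker's system; all gains and names are invented) of the two ideas above: each process updates its estimate a little every tick instead of waiting for complete data, and a lower homeostatic layer always runs while an upper layer subsumes it by overriding the reference it stabilizes around.

```python
import random

def balance_layer(x_hat, ref, gain=4.0):
    """Lower layer: always-on homeostatic stabilization around ref."""
    return -gain * (x_hat - ref)

def navigation_layer(x_hat, goal):
    """Upper layer: subsumes the lower layer by shifting its reference."""
    return min(goal, x_hat + 0.2)

dt = 0.01
x = 0.0        # true robot state (1D position)
x_hat = 0.0    # successively updated estimate
goal = 1.0

for _ in range(3000):
    # Successive perception: a discretized differential equation that
    # folds in one noisy measurement per tick, with no batch processing.
    z = x + random.gauss(0.0, 0.05)
    x_hat += dt * 10.0 * (z - x_hat)

    # The lower layer always runs; the upper layer subsumes it by
    # moving the reference it stabilizes around toward the goal.
    ref = navigation_layer(x_hat, goal)
    x += dt * balance_layer(x_hat, ref)
```

If the upper layer is removed, the lower layer still stabilizes the robot around its current reference, which is the defining property of this layering: higher layers refine behavior without the lower layers ever depending on them.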