Yasemin is an Assistant Professor in the Automatic Control group at Chalmers University of Technology and a Senior Research Fellow in the Statistical Machine Learning Group at University College London. She completed her Ph.D. at the Royal Institute of Technology (KTH), Sweden, in 2012. As a researcher at KTH, she was involved in the EU projects CogX (Cognitive Systems that Self-Understand and Self-Extend) and RoboHow (Web-enabled and Experience-based Cognitive Robots that Learn Complex Everyday Manipulation Tasks). She later worked as a postdoctoral researcher at the University of Birmingham, contributing to the EU project RoMaNs (Robotic Manipulation for Nuclear Sort and Segregation); as a research scientist at ABB Corporate Research, Sweden, coordinating the EU project SARAFun (Smart Assembly Robot with Advanced Functionalities); and as a roboticist at Vicarious AI, CA, USA, leading R&D on robotic grasp planning for industrial tasks. Her research focuses on data-efficient learning from multisensory data for robotic grasping and manipulation applications. She received the Best Paper Award at the IEEE International Conference on Robotics and Automation for Humanitarian Applications (RAHA) in 2016 and the Best Manipulation Paper Award at IEEE ICRA in 2013, and was a finalist for the IEEE/RSJ IROS CoTeSys Cognitive Robotics Best Paper Award in 2013.
Talk #1: Learning from multi-modal data for grasping
The central question of my research is how we can create robots capable of adapting so that they can co-inhabit our world. This means designing systems that can function in unstructured, continuously changing environments with unlimited combinations of object shapes, sizes, appearances, and positions; that adapt to change and learn from experience and from humans; and, importantly, that do so from small amounts of data. Specifically, my work focuses on grasping and manipulation, which are fundamental for enabling a robot to interact with the environment, humans, and other agents, along with dexterity (e.g., using objects and tools successfully) and high-level reasoning (e.g., deciding which object or tool to use). Despite decades of research, robust autonomous grasping and manipulation at a level approaching human skill remains an elusive goal. One main difficulty lies in dealing with the inevitable uncertainty in how a robot perceives the world, based on sensory measurements that can be noisy and incomplete, which makes failures hard to avoid. In my research I study how to enable a robot to interact with natural objects and learn about object properties and the relations between tasks and sensory streams, and I develop tools that allow a robot to use multiple streams of sensory data in a complementary fashion. In this talk I will address how a robot can use sensory data, e.g., visual and tactile, at the various steps involved in achieving robot grasping and manipulation tasks.
New Haven (CT), USA
Aaron Dollar is a Professor of Mechanical Engineering & Materials Science and Computer Science at Yale University, where he has been on the faculty since 2009. He earned a B.S. in Mechanical Engineering at the University of Massachusetts at Amherst, S.M. and Ph.D. degrees in Engineering Sciences at Harvard, and conducted two years of postdoctoral research at the MIT Media Lab. Professor Dollar directs the Yale GRAB Lab, whose research focuses primarily on human and robotic grasping and dexterous manipulation, mechanisms and machine design, and upper-limb prosthetics. He has received a number of best paper and other prestigious awards, including junior faculty awards from NASA, DARPA, AFOSR, and NSF. His service to the robotics research community includes the YCB benchmarking initiatives, the Yale OpenHand Project, OpenRobotHardware.org, RoboticsCourseware.org, and founding the IEEE RAS TC on Robotic Mechanisms and Design. His work on robotic grasping and manipulation focuses primarily on the mechanics of the problem, including contacts, passive and active degrees of freedom, and other constraints, and on how proper attention to these, combined with clever mechanical design, can facilitate excellent performance with even minimal sensing and control.
Talk #1: “Mechanical Intelligence” in Robotic Manipulation: Towards Human-level Dexterity in Robotic and Prosthetic Hands
The human hand is the pinnacle of dexterity – it has the ability to powerfully grasp a wide range of object sizes and shapes as well as delicately manipulate objects held within the fingertips. Current robotic and prosthetic systems, however, have only a fraction of that manual dexterity. In this talk, I will discuss how this gap can be addressed in three main ways: examining the mechanics and design of effective hands, studying biological hand function as inspiration and performance benchmarking, and developing novel control approaches that accommodate task uncertainty. I will place a special focus on examining the mechanics of the open- and closed-loop hand/object/environment system during robot manipulation and how contact and kinematic constraints dominate performance limits. Using this understanding, I will show how prioritizing passive mechanics in robot hand design, including incorporating adaptive underactuated transmissions and carefully tuned compliance, can maximize open-loop performance while minimizing complexity.
Cosimo Della Santina has been an Assistant Professor at TU Delft and a Research Scientist at the German Aerospace Center (DLR) since 2020. He received his Ph.D. in robotics (cum laude, 2019) from the University of Pisa. He was a visiting Ph.D. student and a postdoc (2017 to 2019) at the Massachusetts Institute of Technology (MIT), and a postdoc (2020) and guest lecturer (2021) at the Technical University of Munich (TUM). His work has been recognized with awards including the euRobotics Georges Giralt Ph.D. Award (2020), the “Fabrizio Flacco” Young Author Award of the RAS Italian chapter (2019), and the European Embedded Control Institute Ph.D. award (finalist, 2020). Cosimo currently serves on several editorial boards (ICRA, IROS, RAL, Frontiers). His main research interests include (i) modelling for control and model-based control of soft robots, (ii) combining machine learning and model-based strategies, and (iii) soft robotic hands and prostheses.
Talk #1: Combining physical and artificial intelligence in hand-centric grasping and manipulation
The classic approach to grasping and manipulation with rigid robotic hands has generally favored object-centric analytical solutions, which - although elegant and theoretically sound - have not yet produced the desired outcomes in practice. This talk discusses a different path, which entails moving the focus from the object to the robotic hand. This shift of perspective opens up several challenges that require a full integration of models, materials, machine learning, and bio-inspiration to be tackled effectively. By combining these ingredients, control intelligence can be embedded directly in the hand mechanics. As a result, soft end-effectors can achieve high-level grasping and manipulation performance when operated by humans. However, such a level of dexterity is still unmatched in autonomous grasp execution. Indeed, classic approaches cannot be applied to these hands, which - by their very nature - do not allow fingertip placement with the required precision and relative independence. Instead, data-driven approaches could be the key to learning from humans how to manage soft hands, towards higher levels of autonomous grasping capability.
Kusatsu, Shiga, Japan
Zhongkui Wang is currently an Associate Professor at the Research Organization of Science and Technology, Ritsumeikan University, Japan. He received his Ph.D. in robotics from Ritsumeikan University in 2011 and was a research associate in the university's Department of Robotics from 2011 to 2012. From 2012 to 2014, he was a postdoctoral fellow at the same university, and from 2012 to 2013 he visited the Swiss Federal Institute of Technology Zurich (ETHZ) as a guest researcher. From 2014 to 2019, he was an assistant professor in the Department of Robotics at Ritsumeikan University. His research interests include soft robotics, robotic hands, grasping and manipulation, biomedical engineering, and tactile sensing. He is a member of the editorial board of the Chinese Journal of Mechanical Engineering and of the topics board of Actuators, and a committee member of several IEEE conferences (ROBIO, RCAR, UR, WRC SARA, ARM, WCICA, ISR, HAVE). His work received best paper awards at UR2020, RCAR2018, and M2VIP2017.
Talk #1: Robotic Handling of Food and Agricultural Products
Automation in the food and agricultural industries is not as developed as in the automotive and manufacturing industries. Several challenges hinder the rapid introduction of robots into these industries: the required processing speed, the cost-benefit of deploying robot systems, the lack of handling strategies and robotic end-effectors designed to cope with the large variety and variability of food and agricultural products, and the lack of sufficient understanding of product properties as an “engineering” material for handling tasks. In this talk, I will introduce my work on robotic systems, particularly robotic grippers, for handling various food and agricultural products, and on the modeling and measurement of the physical properties of food materials.