CFP: Benchmarking Protocols for Robotic Manipulation

Motivation

Benchmarks are crucial for evaluating the effectiveness of an approach against a common basis, providing a quantitative means of interpreting performance. Carefully designed and widely recognized benchmarks encourage the research community to focus on key research challenges, promote competition, foster a climate for novel solutions, and thereby contribute dramatically to the advancement of a field. While some robotics-related fields (such as object recognition and segmentation) actively utilize benchmarks, essentially no robotic manipulation benchmarks are widely adopted by the research community, despite broad agreement that they are needed.

Discussions within the robot manipulation research community, held at a number of workshops and similar meetings, have identified several primary obstacles to the development and adoption of benchmarking procedures in our field, including:

  • Lack of communication and agreement among researchers on the standards and characteristics of a benchmark
  • Lack of widely utilized data sets that target manipulation research
  • Lack of a reputable and central venue to distribute the benchmarks
  • Lack of professional rewards to encourage researchers to develop and utilize benchmarks

This special issue seeks to help break down some of these barriers by encouraging collaborations among different research groups, encouraging the use of existing data sets, and boosting the visibility and dissemination of benchmarking procedures through a reputable publishing venue.

Topical Areas

This special issue is dedicated to papers that propose and demonstrate novel and widely useful benchmarking protocols for robotic manipulation research. Submitted papers should focus on describing well-defined experimental procedures, ready to be applied by other researchers in similar topic areas, for quantifying the performance of research approaches in robotic manipulation and its sub-fields, including but not limited to:

  • Manipulation planning (e.g. performance of grasp planners)
  • Mechanism design (e.g. performance of robotic hands)
  • Machine learning (e.g. learning manipulation abilities)
  • Cognitive robotics (e.g. task representations)

One of the primary challenges in developing effective benchmarking procedures is striking a balance between specificity and generality. High-level system performance metrics (such as those used in the Amazon Picking Challenges) can be applied by the widest range of research groups, but reveal little about the individual components of the approach being evaluated: was the good or bad performance due to the hardware design, the perception system, or the planning approach? At the opposite end of the spectrum, a very narrowly designed evaluation procedure that specifies the hardware platform and many of the software subsystems may speak very directly to the effectiveness of, say, a particular grasp planner, but might not generalize and would only be used by researchers who have that particular combination of subsystems available to them. It is therefore left to the authors of proposed benchmarking procedures to find a suitable middle ground that provides sufficient quantitative evaluation of specific research approaches while enabling as many researchers as possible to implement them.
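To make this trade-off concrete, here is a minimal sketch (in Python) of how a protocol might separate the elements it fixes, to keep results comparable across groups, from the elements it leaves to each experimenter, to keep the barrier to entry low. The sketch is purely illustrative: the class, field names, and example values are hypothetical and are not taken from the submission templates referenced later in this call.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BenchmarkProtocol:
        """Hypothetical protocol definition (illustrative only).

        The fixed fields are prescribed by the protocol so that results are
        comparable across labs; free_choices lists what each lab supplies
        itself, which determines how broadly the protocol can be run.
        """
        name: str
        object_set: List[str]        # fixed: e.g. objects from a published data set
        trials_per_object: int       # fixed: repetitions required per object
        success_criterion: str       # fixed: what counts as a successful trial
        reported_metrics: List[str]  # fixed: what the benchmark report must include
        free_choices: List[str] = field(
            default_factory=lambda: ["robot arm", "gripper", "perception pipeline"]
        )

    # A narrower protocol would move items out of free_choices and pin them
    # down (easier attribution of performance, but fewer labs can take part);
    # a broader one does the opposite.
    grasp_benchmark = BenchmarkProtocol(
        name="tabletop-grasping-example",  # hypothetical name
        object_set=["cracker_box", "mug", "banana"],  # e.g. YCB-style objects
        trials_per_object=10,
        success_criterion="object lifted 20 cm and held for 5 s",
        reported_metrics=["success_rate", "mean_planning_time_s"],
    )

Every field a protocol pins down sharpens what the results say about a specific component; every field it leaves open widens the pool of groups able to run it.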

In addition, authors of submitted papers are highly encouraged to:

  • work in collaboration with multiple research groups in similar areas to develop the benchmarking procedures (to avoid overfitting to a particular approach and to boost overall impact)
  • make use of existing published data sets (e.g. standard objects and models) whenever possible, unless they are clearly unsuitable for the proposed protocol
  • utilize the provided templates to detail the protocol (procedure and constraints) and benchmark (reporting of results)
  • provide multimedia files that illustrate and demonstrate the protocol
  • report baseline experimental results obtained by applying the protocol on their own system(s)

Authors of prospective papers are also strongly encouraged to contact the special issue editors and submit a letter of intent regarding the scope of their benchmark in order to receive feedback prior to full submission. Letters of intent should be approximately 1-2 pages long and should describe the purpose of the benchmark, the overall task protocol, and how performance will be quantified and reported. Please e-mail letters to bcalli@wpi.edu and aaron.dollar@yale.edu.

Please refer to the templates and their explanations on the following website for the aspects expected to be addressed:
http://www.ycbbenchmarks.com/protocols-and-benchmarks/

Timeline

Call for papers announced: February 1, 2019
Letter of intent due: April 15, 2019
Submission opens: July 15, 2019
Submission closes: August 15, 2019
First Decision: November 9, 2019 (at the latest)
Final Decision: January 13, 2020 (at the latest)
Special Issue publication: Papers on IEEE Xplore by February 11, 2020 (at the latest), with the Special Issue introduction to follow within one month.

Guest Editors

Berk Calli, Worcester Polytechnic Institute, Worcester, MA, USA
Aaron Dollar, Yale University, New Haven, CT, USA
Maximo Roa, DLR - German Aerospace Center, Weßling, Germany
Sidd Srinivasa, University of Washington, Seattle, WA, USA
Yu Sun, University of South Florida, Tampa, FL, USA
