Haptics for Human-Interaction-Oriented Teleoperation: Sensorimotor Augmentation, Interfaces, and Control

Aims and scope

Teleoperation is rapidly evolving from remote control into a human-interaction-oriented paradigm in which operators and robots cooperate through rich sensorimotor loops. In this setting, haptics is not an accessory: it is the primary channel that enables physically grounded interaction, trustworthy contact, dexterous manipulation, and shared autonomy with meaningful human oversight. However, achieving effective teleoperation haptics remains challenging due to uncertain contacts and unstructured environments; device/interface limitations and human factors; networking constraints (latency, jitter, packet loss); and the need to integrate perception, learning, and control without compromising safety, stability, and transparency.

This Special Issue solicits top-quality, haptics-centered research that advances the science and engineering of teleoperation through haptic sensing, feedback, interaction design, evaluation, modeling, and haptics theory. We encourage contributions spanning tactile/force sensing, haptic rendering and display, human-subject evaluation and psychophysics, and haptics-informed control and shared autonomy, especially work that demonstrates rigorous validation and clear relevance to touch-mediated interaction. Submissions must feature an explicit and substantial haptics contribution. Papers whose main contribution is robotic control, planning, learning, or teleoperation architecture without a clear haptics contribution (and haptics-centric results) are outside the scope of this Special Issue.

This Special Issue invites contributions on topics including, but not restricted to:

  • Novel haptic devices/interfaces and tactile sensing tailored for teleoperation: exoskeletons, fingertip devices, soft haptics, variable impedance
  • Haptic rendering of contact, compliance, friction, texture, and dynamic interaction
  • Perception and psychophysics in teleoperation, including transparency, embodiment, agency, presence, workload, and trust
  • Human-centered evaluation protocols and metrics for teleoperation haptics: repeatable benchmarks, datasets, and user studies
  • Sensorimotor augmentation via haptics: intent inference, skill transfer, guidance, and training
  • Haptics-aware teleoperation control/rendering (e.g., passivity/stability, transparency trade-offs) under delay, jitter, and packet loss
  • Shared/supervisory control for haptic teleoperation: scalable feedback, coordination, role assignment
  • Digital twins/XR telepresence with haptic synchronization and contact-consistent simulation
  • Deployments and rigorous evaluations in safety- and mission-critical domains (e.g., surgery, nuclear, subsea, space, manufacturing, confined spaces)

If you are unsure whether your work fits, please contact the Guest Editors before submission.

Submission instructions

We welcome top-quality original (unpublished) articles. All submissions will be screened during the initial editorial assessment to ensure alignment with IEEE Transactions on Haptics and the haptics-centered scope of this Special Issue before entering external peer review. Submissions that do not demonstrate (i) a substantial haptics contribution (device/sensing/rendering/perception/modeling) and (ii) haptics-centric validation (user studies/psychophysics/standard haptics metrics) will be returned without external review. Manuscripts must follow the IEEE Transactions on Haptics author guidelines and be submitted via the journal’s submission system. When submitting, please select: “Special Issue: Haptics for Human-Interaction-Oriented Teleoperation”.

Important dates

  • Submission deadline: July 30, 2026
  • First decision: approximately three months after submission
  • Final decision: approximately six months after submission
  • Final publication material due from authors: approximately seven months after submission

Guest Editors

  • Ziwei Wang, School of Engineering, Lancaster University, UK
  • Angela Faragasso, Finger Vision, Japan
  • Parag Khanna, Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, Sweden
  • Carlo Tiseo, School of Engineering and Informatics, University of Sussex, UK
  • Elmira Yadollahi, Department of Computing and Communications, Lancaster University, UK
  • Bin Liang, Department of Automation, Tsinghua University, China
  • Eiichi Yoshida, Department of Medical and Robotic Engineering Design, Tokyo University of Science, Japan