Speech-Vision Based Multi-Modal AI Control of a Magnetic Anchored and Actuated Endoscope
Refereed conference paper presented and published in conference proceedings

Other information
Abstract: In minimally invasive surgery (MIS), controlling the endoscope view is crucial to the operation. Many robotic endoscope holders have been developed to address this problem. These systems rely on a joystick, foot pedal, simple voice commands, etc., to control the robot. Such methods demand extra effort from surgeons and are not sufficiently intuitive. In this paper, we propose a speech-vision based multi-modal AI approach that integrates deep-learning-based instrument detection, automatic speech recognition, and robot visual servo control. Surgeons can communicate with the endoscope by speech to indicate their view preference, such as the instrument to be tracked. The instrument is detected by a deep learning neural network; the endoscope then takes the detected instrument as the target and follows it with the visual servo controller. The method is applied to a magnetic anchored and guided endoscope and evaluated experimentally. Preliminary results demonstrate that this approach is effective and requires little effort from the surgeon to control the endoscope view intuitively.
All Author(s) List: Jixiu Li, Yisen Huang, Wing Yin Ng, Truman Cheng, Xixin Wu, Qi Dou, Helen Meng, Pheng Ann Heng, Yunhui Liu, Shannon Melissa Chan, David Navarro-Alarcon, Calvin Sze Hang Ng, Philip Wai Yan Chiu, Zheng Li
Name of Conference: 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)
Start Date of Conference: 05/12/2022
End Date of Conference: 09/12/2022
Place of Conference: Xishuangbanna
Country/Region of Conference: China
Year: 2022
Publisher: IEEE
Languages: English-United Kingdom

Last updated on 2023-10-25 at 02:55