Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy

R B den Boer, T J M Jaspers, C de Jongh, J P W Pluim, F van der Sommen, T Boers, R van Hillegersberg, M A J M Van Eijnatten, J P Ruurda

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

OBJECTIVE: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures.

BACKGROUND: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures and might contribute to reducing morbidity or learning curves. Studies regarding anatomy recognition in complex surgical procedures are currently lacking.

METHODS: Eighty-three videos of consecutive RAMIE procedures performed between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, the aorta, and the right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures; the remaining 200 frames were used to test the CNN. The Dice coefficient and 95% Hausdorff distance (95HD) were calculated to assess algorithm accuracy.
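The Dice coefficient used for evaluation measures the overlap between a predicted and a reference segmentation mask. As a rough sketch of how such a metric is computed (the helper `dice_coefficient` and the toy masks below are illustrative assumptions, not the authors' code), assuming binary NumPy masks:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two overlapping 4x4 masks standing in for a
# predicted and an annotated anatomical structure.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1    # 4 foreground pixels
target[1:4, 1:4] = 1  # 9 foreground pixels
print(dice_coefficient(pred, target))  # 2*4 / (4+9) ≈ 0.615
```

A Dice of 1.0 indicates perfect overlap and 0.0 indicates none, which is why the per-structure medians reported below can be read directly as segmentation quality.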

RESULTS: The median Dice coefficient of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s per frame (39 Hz). The predictions of the deep learning algorithm were compared with the expert surgeon's annotations, yielding median Dice coefficients of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively.

CONCLUSION: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm facilitated real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.

Original language: English
Pages (from-to): 5164-5175
Number of pages: 12
Journal: Surgical Endoscopy
Volume: 37
Issue number: 7
Early online date: 22 Mar 2023
DOIs
Publication status: Published - Jul 2023

Keywords

  • Anatomy recognition
  • Computer vision
  • Deep learning
  • Robotics
  • Surgery

