Biomedical Signal and Image Computing Laboratory

Augmented Reality for Minimally Invasive Surgery

People

Alborz Amir-Khalili, Ivan Figueroa (Grad Students)
BiSICL, Department of Electrical and Computer Engineering, UBC

Masoud Nosrati, Jeremy Kawahara (Grad Students)
Medical Image Analysis Lab (MIAL), School of Computing Science, SFU

Rafeef Abugharbieh (Principal Investigator)
BiSICL, Department of Electrical and Computer Engineering, UBC

Ghassan Hamarneh (Lead Investigator)
Medical Image Analysis Lab (MIAL), School of Computing Science, SFU

Dr. Julien Abi-Nahed (Co-Lead Investigator), Dr. Jean-Marc Peyrat
Qatar Robotic Surgery Centre (QRSC), Qatar Science and Technology Park, Doha

Keywords

Augmented Reality, Laparoscopic Partial Nephrectomy, Minimally Invasive Surgery, Computer Assisted Intervention, Da Vinci Robot, Robotic Assisted Surgery, Stereovision, Multi-modal Registration, Multi-modal Segmentation

Description

Although pre-operative and intra-operative medical images have clearly been shown to benefit minimally invasive surgery (MIS) in general, and robot-assisted partial nephrectomy (RAPN) in particular, full utilization of such image data in the operating room has not yet been realized due to several outstanding obstacles. For a start, the real-time stereo endoscopic views show only exposed surfaces in the direct line of sight of the cameras. Ultrasound (US) is therefore typically used alongside endoscopy, but common B-mode US provides only cross-sectional 2D views and suffers from low signal-to-noise ratio, high levels of speckle noise, and significant shadowing, reverberation, and refraction artifacts. Additionally, the US images are typically not overlaid properly with respect to the surgeon's stereoscopic field of view, requiring extra mental effort to relate them to the scene. Finally, high-resolution 3D images updated in real time, without exorbitant cost or high levels of radiation, are clearly absent during surgery.

Diagram showing all the modalities being registered together.

In this project, we propose a software-based solution to remedy the aforementioned problems. Our objective is to overlay data derived from pre-operative CT images onto the surgeon's stereo endoscopic view. The overlaid data will include the pre-operative CT images themselves, as 2D slices or 3D volumes, as well as pre-segmented anatomy and pathology (e.g. vasculature and tumor). Moreover, the pre-operative data will first be positioned and then deformed (e.g. stretched, bent, or cut) continuously to match the current state of the patient's anatomy. Lastly, left and right (stereo) projective views of the overlaid data will be streamed onto the left and right camera views at the surgeon console, thus augmenting the 3D endoscopic view of the operative field. This MIS enhancement aims to improve the surgeon's experience and efficiency, increase the precision of the surgery, decrease operating time, and reduce collateral damage, which in turn should lead to better surgical outcomes and faster patient recovery.
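To make the final overlay step concrete, the following Python/NumPy sketch projects registered pre-operative 3D points (e.g. points on a segmented tumor surface, already deformed to match the patient) into the left and right endoscope images using a pinhole stereo model. This is an illustration only: the intrinsics, the 5 mm baseline, and the surface points are placeholder values, not calibrated da Vinci parameters or our actual rendering pipeline.

import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 points with a pinhole model. K: 3x3 intrinsics; R, t: rigid
    transform from the registered pre-operative frame to the camera frame."""
    cam = R @ points_3d.T + t.reshape(3, 1)      # 3xN points in camera coordinates
    pix = K @ cam                                # homogeneous pixel coordinates
    return (pix[:2] / pix[2]).T                  # Nx2 pixel positions

# Hypothetical calibration: identical intrinsics, 5 mm horizontal stereo baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t_left = np.zeros(3)
t_right = np.array([-5.0, 0.0, 0.0])             # right camera shifted along x (mm)

tumor_surface = np.array([[0.0, 0.0, 60.0],      # toy registered surface points (mm)
                          [2.0, 1.0, 62.0],
                          [-1.5, 0.5, 61.0]])

left_px = project_points(tumor_surface, K, R, t_left)
right_px = project_points(tumor_surface, K, R, t_right)
print(left_px, right_px)                         # pixel coordinates to draw in each console view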

Automatic Detection of Hidden Vasculature

Potentially serious complications arise when occluded blood vessels, concealed by fat, are missed in the endoscopic view and consequently not appropriately clamped. To aid vessel discovery, we propose a novel automatic method that segments occluded vasculature by labelling minute pulsatile motion that is otherwise imperceptible to the naked eye.
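As a rough, deliberately simplified stand-in for the published method, the Python sketch below flags pixels whose intensity fluctuates in a narrow temporal band around the cardiac frequency, which is suggestive of pulsatile vessels hidden under fat. The frame rate, frequency band, and threshold ratio are placeholder values, and real endoscopic video requires considerably more care (e.g. motion compensation and regularized segmentation).

import numpy as np
from scipy.signal import butter, filtfilt

def pulsatility_map(frames, fps, low_hz=0.8, high_hz=2.0, ratio=0.5):
    """frames: T x H x W grayscale video. Returns a boolean H x W mask of pixels
    whose temporal variation is dominated by the [low_hz, high_hz] band
    (defaults to 0.8-2.0 Hz, i.e. roughly 48-120 bpm)."""
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    filtered = filtfilt(b, a, frames.astype(np.float64), axis=0)  # temporal band-pass per pixel
    band_energy = filtered.std(axis=0)                            # pulsatile amplitude
    total_energy = frames.std(axis=0) + 1e-6                      # total temporal variation
    return (band_energy / total_energy) > ratio

# Toy usage: 5 s of synthetic 30 fps video with one patch pulsating at ~72 bpm.
T, H, W, fps = 150, 64, 64, 30
t = np.arange(T) / fps
frames = 100.0 + 0.5 * np.random.randn(T, H, W)
frames[:, 20:30, 20:30] += 3.0 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
mask = pulsatility_map(frames, fps)
print(mask[20:30, 20:30].mean(), mask.mean())     # high inside the patch, low elsewhere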

Oral presentation at MICCAI 2014 in Cambridge, MA

Uncertainty-Encoded Augmented Reality

In most augmented reality surgical guidance systems, surgeons are unable to reliably assess how much trust they can place in what is overlaid on the screen. We present a proof-of-concept uncertainty-encoded augmented reality framework with novel visualizations that project the uncertainties derived from the pre-operative CT segmentation onto the surgeon's stereo endoscopic view. To verify its clinical potential, the proposed method is applied to an ex vivo lamb kidney. The results are contrasted with visualizations based on crisp segmentation, demonstrating that our method provides valuable additional information that can help the surgeon during resection planning.
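One simple way to convey such uncertainty, illustrated by the hypothetical Python sketch below, is to modulate the opacity of the overlay by the per-pixel tumor probability from the pre-operative segmentation instead of drawing a single crisp contour, so that low-confidence margins appear visibly diffuse. The overlay colour and blending weights are illustrative, not the exact scheme used in our study.

import numpy as np

def blend_uncertainty_overlay(endoscope_rgb, tumor_prob, max_alpha=0.6):
    """endoscope_rgb: H x W x 3 image in [0, 1]; tumor_prob: H x W in [0, 1].
    Returns the image with a red overlay whose opacity follows the probability."""
    alpha = np.clip(tumor_prob, 0.0, 1.0)[..., None] * max_alpha   # per-pixel opacity
    overlay = np.zeros_like(endoscope_rgb)
    overlay[..., 0] = 1.0                                          # red overlay colour
    return (1.0 - alpha) * endoscope_rgb + alpha * overlay

def blend_crisp_overlay(endoscope_rgb, tumor_prob, max_alpha=0.6):
    """Crisp baseline for comparison: threshold the same probability map at 0.5."""
    return blend_uncertainty_overlay(endoscope_rgb, (tumor_prob > 0.5).astype(float), max_alpha)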


Traditional crisp visualization of tumor boundaries (left) compared to our uncertainty-encoded visualization (right) on an ex vivo phantom.

Robust Surface Reconstruction with Shape-Priors

Reconstructing surface geometry from camera information alone remains a very challenging problem in robot-assisted MIS, mainly due to the small baseline between the optical centres of the cameras, the presence of blood and smoke, specular highlights, occlusion, and smooth or textureless regions. In this work, we propose a method that increases overall surface reconstruction accuracy by incorporating patient-specific shape priors extracted from pre-operative images.
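The simplified Python sketch below illustrates how such a prior can enter dense stereo matching: each pixel's disparity is chosen to balance a photometric matching cost against deviation from the disparity predicted by the registered pre-operative surface. The disparity range, penalty weight, and test data are placeholders and do not reflect the actual formulation in our paper.

import numpy as np

def disparity_with_prior(left, right, prior_disp, d_max=32, lam=0.05):
    """left, right: H x W rectified grayscale images; prior_disp: H x W disparities
    predicted from the pre-operative model. Returns an H x W disparity map."""
    H, W = left.shape
    costs = np.full((d_max, H, W), np.inf)
    for d in range(d_max):
        photometric = np.full((H, W), np.inf)
        if d == 0:
            photometric[:] = np.abs(left - right)
        else:
            photometric[:, d:] = np.abs(left[:, d:] - right[:, :W - d])
        prior_penalty = lam * (d - prior_disp) ** 2       # shape-prior regularization
        costs[d] = photometric + prior_penalty
    return costs.argmin(axis=0).astype(float)

# Toy usage: a random texture shifted by 8 pixels, with the prior also near 8.
rng = np.random.default_rng(0)
left = rng.random((48, 64))
right = np.roll(left, -8, axis=1)
prior = np.full(left.shape, 8.0)                          # from the registered pre-operative surface
print(np.median(disparity_with_prior(left, right, prior)))  # approximately 8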


Reconstructed surface in 3-space (in millimeters) with texture using (left to right): a simple dense matching method, a robust dense matching method, and our proposed method with shape prior.

Biomechanical Kidney Model

To plan the resection, surgeons rely on pre-operative scans of the patient. At surgery time, however, the shape of the abdominal organs differs from these images due to factors such as patient position, insufflation, and manipulation with surgical instruments. In this work, we focus on simulating kidney deformation under an external pressure load, e.g. during insufflation, to better estimate the position of the tumor mass, which is particularly important for planning the resection with proper margins. Results show that the biomechanical simulation improves tumor localization by 29%.
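As a deliberately simplified, one-dimensional stand-in for the full tetrahedral finite element model, the Python sketch below treats the tissue between the kidney capsule and a fixed deep boundary as a chain of linear elastic elements loaded by an external pressure, and reads off the displacement at the loaded end. The Young's modulus, depth, and pressure are placeholder values chosen only to make the units consistent; they are not the patient-specific parameters used in the actual simulation.

import numpy as np

def tumor_displacement_1d(depth_mm=30.0, n_elems=30, E_kpa=10.0, pressure_kpa=1.6):
    """Static 1-D linear elasticity: a bar of length depth_mm (mm) with Young's
    modulus E_kpa (kPa), fixed at the deep end and loaded by pressure_kpa (kPa)
    at the capsule. Returns the displacement (mm) at the loaded end."""
    h = depth_mm / n_elems                       # element length (mm)
    k = E_kpa / h                                # element stiffness per unit area
    K = np.zeros((n_elems + 1, n_elems + 1))     # assemble the global stiffness matrix
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    f = np.zeros(n_elems + 1)
    f[0] = pressure_kpa                          # insufflation-like load on the capsule
    K, f = K[:-1, :-1], f[:-1]                   # fix the deepest node (u = 0)
    u = np.linalg.solve(K, f)                    # nodal displacements (mm)
    return u[0]

# With these placeholder values this reduces to p*L/E = 1.6 * 30 / 10 = 4.8 mm.
print(tumor_displacement_1d())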


Model construction: (left) a CT slice with a visible tumor; (right) the generated model with the surface capsule and inner tetrahedral meshes for the parenchyma and tumor.

Selected Publications

  • A. Amir-Khalili, J.-M. Peyrat, J. Abinahed, O. Al-Alao, A. Al-Ansari, G. Hamarneh, R. Abugharbieh. “Auto Localization and Segmentation of Occluded Vessels in Robot-Assisted Partial Nephrectomy”. Medical Image Computing and Computer Assisted Intervention (MICCAI), Boston-USA, Sept 2014, pp. 407-414 [PDF] Best Paper Award, Student Travel Award, Young Scientist Award
  • M.S. Nosrati, J.-M. Peyrat, J. Abinahed, O. Al-Alao, A. Al-Ansari, R. Abugharbieh, G. Hamarneh. “Efficient Multi-Organ Segmentation in Multi-View Endoscopic Videos Using Pre-operative Priors”. Medical Image Computing and Computer Assisted Intervention (MICCAI), Boston-USA, Sept 2014, pp. 324-331 [PDF]
  • J. Kawahara, J.-M. Peyrat, J. Abinahed, O. Al-Alao, A. Al-Ansari, R. Abugharbieh, G. Hamarneh. “Automatic Labelling of Tumourous Frames in Free-Hand Laparoscopic Ultrasound Video”. Medical Image Computing and Computer Assisted Intervention (MICCAI), Boston-USA, Sept 2014, pp. 676-683 [PDF] Student Travel Award
  • I. Figueroa-Garcia, J.-M. Peyrat, G. Hamarneh, R. Abugharbieh. “Biomechanical Kidney Model for Predicting Tumor Displacement in the Presence of External Pressure”. IEEE International Symposium on Biomedical Imaging (ISBI), Beijing-China, April 2014, pp. 810-813 [PDF]
  • G. Hamarneh, A. Amir-Khalili, M.S. Nosrati, I. Figueroa, J. Kawahara, O. Al-Alao, A. Al-Ansari, J. Abi-Nahed, J.-M. Peyrat, R. Abugharbieh. “Towards Image-Guided Tumour Identification in Robot-Assisted Partial Nephrectomy”. Middle East Conference on Biomedical Engineering (MECBME), Doha-Qatar, Feb 2014, pp. 159-162 [PDF]
  • A. Amir-Khalili, J.-M. Peyrat, G. Hamarneh, R. Abugharbieh. “3D Surface Reconstruction of Organs using Patient-Specific Shape Priors in Robot-Assisted Laparoscopic Surgery”. MICCAI workshop on Computational and Clinical Applications in Abdominal Imaging (ABDI), Nagoya-Japan, Sept 2013, pp. 184-193 [PDF]
  • A. Amir-Khalili, M.S. Nosrati, J.-M. Peyrat, G. Hamarneh, R. Abugharbieh. “Uncertainty-Encoded Augmented Reality for Robot-Assisted Partial Nephrectomy: A Phantom Study”. MICCAI workshop on Medical Imaging and Augmented Reality (MIAR), Nagoya-Japan, Sept 2013, pp. 182-191 [PDF]
  • S. Bernhardt, J. Abi-Nahed, R. Abugharbieh. “Robust Dense Endoscopic Stereo Reconstruction for Minimally Invasive Surgery”. MICCAI workshop on Medical Computer Vision (MCV), Nice-France, Oct 2012, pp. 198-207 [PDF]
