Natalia Mathá finished her PhD!

On April 6th, 2022, Natalia Mathá (formerly Sokolova) successfully defended her thesis on “Relevance Detection and Relevance-Based Video Compression in Cataract Surgery Videos” under the supervision of Assoc.-Prof. Klaus Schöffmann and Assoc.-Prof. Christian Timmerer. The defense was chaired by Univ.-Prof. Hermann Hellwagner, and the examiners were Assoc.-Prof. Konstantin Schekotihin and Assoc.-Prof. Mathias Lux. Congratulations to Dr. Mathá on this great achievement!

Negin Ghamsarian finished her PhD!

On October 27th, 2021, Negin Ghamsarian successfully defended her thesis on “Deep-Learning-Assisted Analysis of Cataract Surgery Videos” under the supervision of Assoc.-Prof. Klaus Schöffmann. The defense was chaired by Univ.-Prof. Hermann Hellwagner, and the examiners were Prof. Henning Müller (University of Applied Sciences Western Switzerland and the University of Geneva) and Prof. Raphael Sznitman (University of Bern). Congratulations to Dr. Ghamsarian on this great achievement!

One day to Negin’s Talk at MICCAI 2021

Tomorrow, Negin Ghamsarian will present her paper on LensID at MICCAI 2021, which started today!

Title: LensID: A CNN-RNN-Based Framework Towards Lens Irregularity Detection in Cataract Surgery Videos.

Authors: Negin Ghamsarian, Mario Taschwer, Doris Putzgruber-Adamitsch, Stephanie Sarny, and Klaus Schoeffmann.

Abstract: A critical complication after cataract surgery is the dislocation of the lens implant, leading to vision deterioration and eye trauma. In order to reduce the risk of this complication, it is vital to discover the risk factors during the surgery. However, studying the relationship between lens dislocation and its suspicious risk factors using numerous videos is a time-consuming procedure. Hence, surgeons demand an automatic approach to enable a larger-scale and, accordingly, more reliable study. In this paper, we propose a novel framework as the major step towards lens irregularity detection. In particular, we propose (I) an end-to-end recurrent neural network to recognize the lens-implantation phase and (II) a novel semantic segmentation network to segment the lens and pupil after the implantation phase. The phase recognition results reveal the effectiveness of the proposed surgical phase recognition approach. Moreover, the segmentation results confirm the proposed segmentation network’s effectiveness compared to state-of-the-art rival approaches.
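
For readers curious what such a phase-recognition pipeline looks like in code, below is a minimal PyTorch sketch of the general CNN-RNN idea: a CNN encodes each frame, a recurrent layer aggregates the sequence, and a linear head predicts the surgical phase. This is an illustrative simplification, not the actual LensID architecture; the backbone choice (ResNet-18), the GRU, and all hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PhaseRecognizer(nn.Module):
    """Frame-wise CNN features aggregated by a GRU for phase prediction."""
    def __init__(self, num_phases=2, hidden_size=256):
        super().__init__()
        # In practice one would load ImageNet-pretrained weights here.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()               # keep 512-d pooled features
        self.encoder = backbone
        self.rnn = nn.GRU(512, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_phases)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)) # (B*T, 512)
        _, h = self.rnn(feats.view(b, t, -1))     # h: (1, B, hidden_size)
        return self.head(h.squeeze(0))            # (B, num_phases) logits

# Usage: classify two 16-frame clips at 224x224 resolution.
model = PhaseRecognizer(num_phases=2)
logits = model(torch.randn(2, 16, 3, 224, 224))
```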

ReCal-Net: Joint Region-Channel-Wise Calibrated Network for Semantic Segmentation in Cataract Surgery Videos

Our paper on ReCal-Net has been accepted at ICONIP 2021.

Title: ReCal-Net: Joint Region-Channel-Wise Calibrated Network for Semantic Segmentation in Cataract Surgery Videos.

Authors: Negin Ghamsarian, Mario Taschwer, Doris Putzgruber-Adamitsch, Stephanie Sarny, Yosuf El-Shabrawi and Klaus Schoeffmann.

Abstract: Semantic segmentation in surgical videos is a prerequisite for a broad range of applications towards improving surgical outcomes and surgical video analysis. However, semantic segmentation in surgical videos involves many challenges. In particular, in cataract surgery, various features of the relevant objects such as blunt edges, color and context variation, reflection, transparency, and motion blur pose a challenge for semantic segmentation. In this paper, we propose a novel convolutional module termed the ReCal module, which can calibrate the feature maps by employing region intra- and inter-dependencies and channel-region cross-dependencies. This calibration strategy can effectively enhance semantic representation by correlating different representations of the same semantic label, considering a multi-angle local view centering around each pixel. Thus, the proposed module can deal with distant visual characteristics of unique objects as well as cross-similarities in the visual characteristics of different objects. Moreover, we propose a novel network architecture based on the proposed module, termed ReCal-Net. Experimental results confirm the superiority of ReCal-Net compared to rival state-of-the-art approaches for all relevant objects in cataract surgery. Moreover, ablation studies reveal the effectiveness of the ReCal module in boosting semantic segmentation accuracy.
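
The exact ReCal module is defined in the paper; as a rough illustration of what feature-map calibration means, here is a simplified PyTorch sketch combining a squeeze-and-excitation-style channel gate with a convolutional spatial (region) gate. The module structure, layer choices, and reduction factor are assumptions for illustration only, not the paper's design.

```python
import torch
import torch.nn as nn

class SimpleCalibration(nn.Module):
    """Channel gate (squeeze-and-excitation style) followed by a region gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel weights
        )
        self.region_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),  # local context
            nn.Sigmoid(),                                      # per-pixel weights
        )

    def forward(self, x):                  # x: (B, C, H, W)
        x = x * self.channel_gate(x)       # reweight channels
        return x * self.region_gate(x)     # reweight spatial regions

# Usage: calibrate a 64-channel feature map; output shape equals input shape.
out = SimpleCalibration(64)(torch.randn(1, 64, 32, 32))
```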


Pupil Segmentation in Cataract Videos

Our workshop paper on iris and pupil segmentation in cataract surgery videos has been accepted for presentation at the ISBI 2020 conference.

Title: Pixel-Based Iris and Pupil Segmentation in Cataract Surgery Videos Using Mask R-CNN

Authors: Natalia Sokolova, Mario Taschwer, Stephanie Sarny, Doris Putzgruber-Adamitsch, and Klaus Schoeffmann

Abstract: Automatically detecting clinically relevant events in surgery video recordings is becoming increasingly important for documentary, educational, and scientific purposes in the medical domain. From a medical image analysis perspective, such events need to be treated individually and associated with specific visible objects or regions. In the field of cataract surgery (lens replacement in the human eye), pupil reaction (dilation or restriction) during surgery may lead to complications and hence represents a clinically relevant event. Its detection requires automatic segmentation and measurement of pupil and iris in recorded video frames. In this work, we contribute to research on pupil and iris segmentation methods by (1) providing a dataset of 82 annotated images for training and evaluating suitable machine learning algorithms, and (2) applying the Mask R-CNN algorithm to this problem, which – in contrast to existing techniques for pupil segmentation – predicts free-form pixel-accurate segmentation masks for iris and pupil. The proposed approach achieves consistently high segmentation accuracies on several metrics while delivering an acceptable prediction efficiency, establishing a promising basis for further segmentation and event detection approaches on eye surgery videos.
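
As a rough idea of how Mask R-CNN can be adapted to such a task, the following sketch builds a torchvision Mask R-CNN with the box and mask heads replaced for three classes. The paper's actual training setup differs; the class layout (background, iris, pupil), the pretrained-weights choice, and the frame size are assumptions.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_model(num_classes=3):            # background + iris + pupil (assumed)
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Swap the box classification head for the new class count.
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    # Swap the mask prediction head accordingly.
    in_ch = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_ch, 256, num_classes)
    return model

# Usage: predict free-form masks for one (dummy) video frame.
model = build_model().eval()
with torch.no_grad():
    pred = model([torch.rand(3, 540, 960)])[0]
# pred["masks"] has shape (N, 1, H, W); threshold at 0.5 for binary masks.
```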

Tool Segmentation in Cataract Videos

Our conference paper on instrument segmentation in cataract surgery videos has been accepted for presentation at the CBMS 2020 conference.

Title: Pixel-Based Tool Segmentation in Cataract Surgery Videos with Mask R-CNN

Authors: Markus Fox, Mario Taschwer, and Klaus Schoeffmann

Abstract: Automatically detecting surgical tools in recorded surgery videos is an important building block of further content-based video analysis. In ophthalmology, the results of such methods can support training and teaching of operation techniques and enable investigation of medical research questions on a dataset of recorded surgery videos. Our work applies a recent deep-learning segmentation method (Mask R-CNN) to localize and segment surgical tools used in ophthalmic cataract surgery. We add ground-truth annotations for multi-class instance segmentation to two existing datasets of cataract surgery videos and make the resulting datasets publicly available for research purposes. In the absence of comparable results from literature, we tune and evaluate Mask R-CNN on these datasets for instrument segmentation/localization and achieve promising results (61% mean average precision at 50% intersection over union for instance segmentation, working even better for bounding box detection or binary segmentation), establishing a reasonable baseline for further research. Moreover, we experiment with common data augmentation techniques and analyze the achieved segmentation performance with respect to each class (instrument), providing evidence for future improvements of this approach.
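
To make the reported metric concrete, the snippet below computes the intersection over union (IoU) between a predicted and a ground-truth mask; at the 50% IoU threshold quoted above, a predicted instance counts as a true positive when this value reaches 0.5. This is a generic illustration of the criterion, not the paper's evaluation code.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks with identical shape."""
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, gt).sum() / union

# Usage: this pair overlaps enough to count as a match at the 0.5 threshold.
pred = np.zeros((100, 100), dtype=bool); pred[25:65, 25:65] = True
gt = np.zeros((100, 100), dtype=bool);   gt[30:70, 30:70] = True
print(mask_iou(pred, gt))                # ~0.62, i.e. a match at IoU >= 0.5
```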

Relevance-based Exploration of Cataract Videos

Negin Ghamsarian’s doctoral symposium paper on ‘Relevance-based Exploration of Cataract Videos’ has been accepted for publication at the ACM International Conference on Multimedia Retrieval (ICMR 2020).

Title: Enabling Relevance-Based Exploration of Cataract Videos

Author: Negin Ghamsarian

Abstract: Training new surgeons, one of the major duties of experienced expert surgeons, demands a considerable supervisory investment from them. To expedite the training process and subsequently reduce the extra workload on their tight schedule, surgeons are seeking a surgical video retrieval system. Automatic workflow analysis approaches can optimize the training procedure by indexing the surgical video segments to be used for online video exploration. The aim of the doctoral project described in this paper is to provide the basis for a cataract video exploration system that is able to (i) automatically analyze and extract the relevant segments of videos from cataract surgery, and (ii) provide interactive exploration means for browsing archives of cataract surgery videos. In particular, we apply deep-learning-based classification and segmentation approaches to cataract surgery videos to enable automatic phase and action recognition and similarity detection.
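
As a toy illustration of the similarity-detection component mentioned above, the sketch below ranks archived video segments against a query segment by cosine similarity of embedding vectors. The representation (e.g., averaged CNN frame embeddings) and all names here are assumptions for illustration, not the project's actual implementation.

```python
import torch
import torch.nn.functional as F

def rank_segments(query: torch.Tensor, archive: torch.Tensor) -> torch.Tensor:
    """query: (D,) embedding; archive: (N, D). Returns indices, best match first."""
    sims = F.cosine_similarity(query.unsqueeze(0), archive)   # (N,)
    return torch.argsort(sims, descending=True)

# Usage: a query close to archive entry 7 should rank that entry first.
archive = F.normalize(torch.randn(100, 512), dim=1)
query = archive[7] + 0.05 * torch.randn(512)
print(rank_segments(query, archive)[:5])
```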