Cristin result ID: 1823303
Last modified: 4 December 2020, 13:53
NVI reporting year: 2020
Result
Academic article
2020

PS-DeVCEM: Pathology-sensitive deep learning model for video capsule endoscopy based on weakly labeled data

Contributors:
  • Ahmed Kedir Mohammed
  • Ivar Farup
  • Marius Pedersen
  • Sule Yildirim Yayilgan
  • Øistein Hovde

Journal

Computer Vision and Image Understanding
ISSN 1077-3142
e-ISSN 1090-235X
NVI level 2

About the result

Academic article
Publication year: 2020
Published online: 2020
Volume: 201:103062
Pages: 1 - 11
Open Access

Import sources

Scopus-ID: 2-s2.0-85089809020

Description

Title

PS-DeVCEM: Pathology-sensitive deep learning model for video capsule endoscopy based on weakly labeled data

Abstract

We propose a novel pathology-sensitive deep learning model (PS-DeVCEM) for frame-level anomaly detection and multi-label classification of different colon diseases in video capsule endoscopy (VCE) data. The proposed model copes with the key challenge of the colon's apparent heterogeneity caused by several types of diseases. It is driven by attention-based deep multiple instance learning and is trained end-to-end on weakly labeled data, using video labels instead of detailed frame-by-frame annotation. This makes it a cost-effective approach for analyzing large video capsule endoscopy repositories. Other advantages of the model include its ability to localize gastrointestinal anomalies in the temporal domain within the video frames, and its generality, in the sense that abnormal-frame detection is based on automatically derived image features. The spatial and temporal features are obtained through ResNet50 and residual long short-term memory (residual LSTM) blocks, respectively. Additionally, the learned temporal attention module provides the importance of each frame to the final label prediction. Moreover, we developed a self-supervision method to maximize the distance between classes of pathologies. We demonstrate through qualitative and quantitative experiments that our weakly supervised learning model achieves superior precision and F1-score, reaching 61.6% and 55.1% respectively, compared to three state-of-the-art video analysis methods. We also show the model's ability to temporally localize frames with pathologies without frame-annotation information during training. Furthermore, we collected and annotated the first and largest VCE dataset with only video labels. The dataset contains 455 short video segments with 28,304 frames and 14 classes of colorectal diseases and artifacts. The dataset and code supporting this publication will be made available on our home page.
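The attention-based multiple instance learning the abstract describes can be illustrated with a small sketch: per-frame features (in the paper, produced by ResNet50 and residual LSTM blocks) are pooled into a single video-level embedding via a learned softmax attention over frames, so that only the video label is needed for training. This is a minimal NumPy illustration of that pooling step in the style of standard attention-based MIL, not the authors' implementation; all names, shapes, and parameter values here are hypothetical.

```python
import numpy as np

def attention_mil_pool(frame_feats, V, w):
    """Attention-based MIL pooling over the frames of one video.

    frame_feats: (T, D) per-frame feature vectors (e.g. CNN+LSTM outputs)
    V:           (H, D) learned projection matrix
    w:           (H,)   learned attention vector
    Returns the video-level embedding (D,) and per-frame weights (T,).
    """
    scores = w @ np.tanh(V @ frame_feats.T)        # (T,) unnormalized attention
    scores = scores - scores.max()                 # shift for numerical stability
    attn = np.exp(scores) / np.exp(scores).sum()   # softmax over frames
    bag = attn @ frame_feats                       # attention-weighted average, (D,)
    return bag, attn

# Toy usage: 6 frames with 8-dim features, attention hidden size 4.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))
V = rng.normal(size=(4, 8))
w = rng.normal(size=4)
bag, attn = attention_mil_pool(feats, V, w)
```

The attention weights `attn` sum to one and directly give each frame's importance to the final prediction, which is how such a model can localize pathological frames temporally despite being trained only on video labels.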

Contributors

Ahmed Kedir Mohammed

  • Affiliated with:
    Author
    at Institutt for datateknologi og informatikk, Norges teknisk-naturvitenskapelige universitet

Ivar Farup

  • Affiliated with:
    Author
    at Institutt for datateknologi og informatikk, Norges teknisk-naturvitenskapelige universitet

Marius Pedersen

  • Affiliated with:
    Author
    at Institutt for datateknologi og informatikk, Norges teknisk-naturvitenskapelige universitet

Sule Yildirim-Yayilgan

The contributor's name appears on this result as Sule Yildirim Yayilgan
  • Affiliated with:
    Author
    at Institutt for informasjonssikkerhet og kommunikasjonsteknologi, Norges teknisk-naturvitenskapelige universitet

Øistein Hovde

  • Affiliated with:
    Author
    at Div Gjøvik/Lillehammer, Sykehuset Innlandet HF
  • Affiliated with:
    Author
    at Gastromedisinsk avdeling, Universitetet i Oslo