Cristin-resultat-ID: 1606989
Last modified: 11 March 2019, 09:54
NVI reporting year: 2018
Result
Academic article
2018

Urban land cover classification with missing data modalities using deep convolutional neural networks

Contributors:
  • Michael C. Kampffmeyer
  • Arnt Børre Salberg
  • Robert Jenssen

Journal

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
ISSN 1939-1404
e-ISSN 2151-1535
NVI level 1

About the result

Academic article
Publication year: 2018
Volume: 11
Issue: 6
Pages: 1758–1768
Open Access

Import sources

Scopus-ID: 2-s2.0-85048608109

Description

Title

Urban land cover classification with missing data modalities using deep convolutional neural networks

Abstract

Automatic urban land cover classification is a fundamental problem in remote sensing, e.g., for environmental monitoring. The problem is highly challenging, as classes generally have high intraclass and low interclass variances. Techniques to improve urban land cover classification performance in remote sensing include fusion of data from different sensors with different data modalities. However, such techniques require all modalities to be available to the classifier in the decision-making process, i.e., at test time as well as during training. If a data modality is missing at test time, current state-of-the-art approaches generally have no procedure for exploiting the information that modality carried, which represents a waste of potentially useful information. As a remedy, we propose a convolutional neural network (CNN) architecture for urban land cover classification that embeds all available training modalities in a so-called hallucination network. This network in effect replaces missing data modalities in the test phase, enabling fusion even when modalities are absent during testing. We demonstrate the method on two datasets consisting of optical and digital surface model (DSM) images, simulating a missing modality by assuming that DSM images are unavailable during testing. Our method outperforms both a standard CNN trained only on optical images and an ensemble of two standard CNNs. We further evaluate its ability to handle situations where only some DSM images are missing during testing. Overall, we show that training-time information about the missing modality can clearly be exploited at test time.
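To illustrate the core idea behind the hallucination network, the following is a minimal toy sketch: a "student" branch is trained to reproduce the DSM branch's features from optical input alone, so that at test time missing DSM data can be replaced by hallucinated features before fusion. All names, dimensions, and the use of single linear maps in place of the paper's CNN branches are invented for illustration and are not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the CNN branches: each "network" is one linear map.
n, d_opt, d_dsm, d_feat = 200, 8, 4, 6
W_opt = rng.normal(size=(d_opt, d_feat))   # optical branch (kept fixed here)
W_dsm = rng.normal(size=(d_dsm, d_feat))   # DSM branch ("teacher")
W_hal = np.zeros((d_opt, d_feat))          # hallucination branch ("student")

# Synthetic data: the DSM modality is correlated with the optical input,
# so the hallucination branch has something to learn.
X_opt = rng.normal(size=(n, d_opt))
M = rng.normal(size=(d_opt, d_dsm))
X_dsm = X_opt @ M + 0.1 * rng.normal(size=(n, d_dsm))

# Train the hallucination branch by gradient descent on a mean squared
# "hallucination loss" between its output and the real DSM features.
target = X_dsm @ W_dsm
lr = 0.01
for _ in range(500):
    grad = X_opt.T @ (X_opt @ W_hal - target) / n
    W_hal -= lr * grad
loss = float(np.mean((X_opt @ W_hal - target) ** 2))

# At test time the DSM modality is missing: fuse optical features with
# hallucinated "DSM" features instead of the real ones.
fused = np.concatenate([X_opt @ W_opt, X_opt @ W_hal], axis=1)
print(fused.shape, loss)
```

In the actual method, all branches are deep CNNs trained jointly with both a classification loss and the hallucination loss; this sketch isolates only the feature-mimicking step that makes fusion possible when a modality is absent.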

Contributors

Michael Christian Kampffmeyer

The contributor's name appears on this result as Michael C. Kampffmeyer
  • Affiliation:
    Author
    at the Department of Physics and Technology, UiT The Arctic University of Norway

Arnt-Børre Salberg

The contributor's name appears on this result as Arnt Børre Salberg
  • Affiliation:
    Author
    at the Department of Image Analysis, Machine Learning and Earth Observation (BAMJO), Norsk Regnesentral

Robert Jenssen

  • Affiliation:
    Author
    at the Department of Physics and Technology, UiT The Arctic University of Norway