Cristin result ID: 2081445
Last modified: 8 December 2022, 10:40
NVI reporting year: 2022
Result
Academic chapter/article/conference paper
2022

Explainable Tsetlin Machine Framework for Fake News Detection with Credibility Score Assessment

Contributors:
  • Bimal Bhattarai
  • Ole-Christoffer Granmo
  • Jiao Lei

Book

Proceedings of the Thirteenth Language Resources and Evaluation Conference
ISBN:
  • 979-10-95546-72-6

Publisher

European Language Resources Association
NVI level 1

About the result

Academic chapter/article/conference paper
Publication year: 2022
Pages: 4894–4903
ISBN:
  • 979-10-95546-72-6

Classification

Academic discipline (NPI)

Discipline: ICT
- Subject area: Natural sciences and technology

Description

Title

Explainable Tsetlin Machine Framework for Fake News Detection with Credibility Score Assessment

Abstract

The proliferation of fake news, i.e., news intentionally spread for misinformation, poses a threat to individuals and society. Despite various fact-checking websites such as PolitiFact, robust detection techniques are required to deal with the increase in fake news. Several deep learning models show promising results for fake news classification; however, their black-box nature makes it difficult to explain their classification decisions and to quality-assure the models. We address this problem by proposing a novel interpretable fake news detection framework based on the recently introduced Tsetlin Machine (TM). In brief, we utilize the conjunctive clauses of the TM to capture lexical and semantic properties of both true and fake news text. Further, we use clause ensembles to calculate the credibility of fake news. For evaluation, we conduct experiments on two publicly available datasets, PolitiFact and GossipCop, and demonstrate that the TM framework significantly outperforms previously published baselines by at least 5% in terms of accuracy, with the added benefit of an interpretable logic-based representation. In addition, our approach provides a higher F1-score than BERT and XLNet; however, we obtain slightly lower accuracy. We finally present a case study on our model’s explainability, demonstrating how it decomposes into meaningful words and their negations.
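The abstract describes classifying text with conjunctive clauses (conjunctions of words and negated words) and deriving a credibility score from the clause ensembles. A minimal illustrative sketch of that idea follows; the function names, the set-of-literals clause encoding, and the normalized-margin scoring are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of clause-ensemble classification with a
# credibility score; names and scoring scheme are illustrative only.

def clause_matches(clause, words):
    """A conjunctive clause fires only if every literal holds:
    plain literals must be present in the text, negated literals
    (written '~word') must be absent."""
    for lit in clause:
        if lit.startswith("~"):
            if lit[1:] in words:
                return False
        elif lit not in words:
            return False
    return True

def classify_with_credibility(text, fake_clauses, real_clauses):
    """Vote each ensemble's clauses against the text and return the
    winning label plus a credibility score in [0, 1]."""
    words = set(text.lower().split())
    fake_votes = sum(clause_matches(c, words) for c in fake_clauses)
    real_votes = sum(clause_matches(c, words) for c in real_clauses)
    total = fake_votes + real_votes
    label = "fake" if fake_votes > real_votes else "real"
    # Credibility as the normalized margin between the two ensembles:
    # near 1 when one side dominates, near 0 when the vote is split.
    credibility = abs(fake_votes - real_votes) / total if total else 0.0
    return label, credibility
```

Because each clause is a readable conjunction of words and negated words, the firing clauses themselves serve as the explanation for a decision, which is the interpretability property the abstract highlights.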

Contributors

Bimal Bhattarai

  • Affiliation:
    Author
    at the Department of Information and Communication Technology, University of Agder

Ole-Christoffer Granmo

  • Affiliation:
    Author
    at the Department of Information and Communication Technology, University of Agder

Lei Jiao

The contributor's name appears on this result as Jiao Lei
  • Affiliation:
    Author
    at the Department of Information and Communication Technology, University of Agder

The result is part of

Proceedings of the Thirteenth Language Resources and Evaluation Conference.

Calzolari, Nicoletta; Béchet, Frédéric; Blache, Philippe; Choukri, Khalid; Cieri, Christopher; Declerck, Thierry; Goggi, Sara; Isahara, Hitoshi; Maegaard, Bente; Mariani, Joseph et al. 2022, European Language Resources Association. Academic anthology/Conference series