Cristin result ID: 2200399
Last modified: 22 November 2023, 15:05
Result
Master's thesis
2023

Accountability Module: Increasing Trust in Reinforcement Learning Agents

Contributors:
  • Eyosiyas Bisrat Taye

Publisher/series

Publisher

Universitetet i Oslo
NVI level 0

About the result

Master's thesis
Publication year: 2023
Number of pages: 130

Description

Title

Accountability Module: Increasing Trust in Reinforcement Learning Agents

Abstract

For Artificial Intelligence to be fully utilised, users must trust it and feel safe while using it. Trust, and with it a sense of safety, has been overlooked in the pursuit of more accurate, better-performing black-box models. The field of Explainable Artificial Intelligence, together with current recommendations and regulations on Artificial Intelligence, demands more transparency and accountability from governmental and private institutions. Developing a self-explainable AI that solves a problem while explaining its reasoning is challenging. Even so, such a model could not explain other AIs that lack self-explanatory abilities, and it would likely not transfer to different problem domains and tasks without extensive knowledge of the model. The solution proposed in this thesis is the Accountability Module: an external explanatory module intended to work with different AI models across different problem domains. The prototype was inspired by accident investigations of autonomous vehicles and was implemented for a simplified simulation of vehicles driving on a highway. Its goal was to assist an investigator in understanding why a vehicle crashed. The Accountability Module identified the main factors behind the decision that led to an accident. By examining different cases against each other, it also helped answer whether the outcome was avoidable and whether the agent's logic contained inconsistencies. The prototype provided useful explanations and assisted investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a robust direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis. Therefore, more extensive testing of the prototype on different problems is needed to assess the system's robustness and versatility, as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to provide more insight into an AI model and its significant incidents.

Contributors

Eyosiyas Bisrat Taye

  • Affiliated:
    Author

Kai Olav Ellefsen

  • Affiliated:
    Supervisor
    at the Research Group for Robotics and Intelligent Systems, University of Oslo