Cristin result ID: 527647
Last modified: 14 February 2011, 13:24
NVI reporting year: 2010
Result
Scholarly article
2010

Temporal dynamics of prediction error processing during reward-based decision making

Contributors:
  • Marios G. Philiastides
  • Guido Biele
  • Niki Vavatzanidis
  • Philipp Kazzer and
  • Hauke R. Heekeren

Journal

NeuroImage
ISSN 1053-8119
e-ISSN 1095-9572
NVI level 2

About the result

Scholarly article
Publication year: 2010
Volume: 53
Pages: 221–232
Open Access

Description

Title

Temporal dynamics of prediction error processing during reward-based decision making

Abstract

Adaptive decision making depends on the accurate representation of rewards associated with potential choices. These representations can be acquired with reinforcement learning (RL) mechanisms, which use the prediction error (PE, the difference between expected and received rewards) as a learning signal to update reward expectations. While EEG experiments have highlighted the role of feedback-related potentials during performance monitoring, important questions remain about the temporal sequence of feedback processing and the specific function of feedback-related potentials during reward-based decision making. Here, we hypothesized that feedback processing starts with a qualitative evaluation of outcome valence, which is subsequently complemented by a quantitative representation of PE magnitude. Results of a model-based single-trial analysis of EEG data collected during a reversal learning task showed that around 220 ms after feedback, outcomes are initially evaluated categorically with respect to their valence (positive vs. negative). Around 300 ms, in parallel with the maintained valence evaluation, the brain also represents quantitative information about PE magnitude, thus providing the complete information needed to update reward expectations and to guide adaptive decision making. Importantly, our single-trial EEG analysis based on PEs from an RL model showed that the feedback-related potentials do not merely reflect error awareness, but rather quantitative information crucial for learning reward contingencies.
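The PE-based updating of reward expectations described in the abstract follows the standard delta rule used in RL models. A minimal sketch is given below; the learning rate `alpha` and the function name are illustrative assumptions, not values or code from the paper:

```python
def update_expectation(expected, received, alpha=0.1):
    """Delta-rule update: shift the reward expectation toward the
    received reward by a fraction (alpha) of the prediction error.

    expected : current reward expectation for the chosen option
    received : reward actually obtained on this trial
    alpha    : learning rate in (0, 1] (assumed value for illustration)
    """
    pe = received - expected       # prediction error (PE)
    return expected + alpha * pe   # updated reward expectation

# Example: expectation 0.5, reward 1.0, alpha 0.1
# PE = 0.5, so the new expectation is 0.5 + 0.1 * 0.5 = 0.55
new_value = update_expectation(0.5, 1.0, alpha=0.1)
```

Applied trial by trial, this rule yields the PE time series that a model-based single-trial EEG analysis can regress against feedback-locked potentials.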

Contributors

Marios G. Philiastides

  • Affiliation:
    Author
    at Max-Planck-Institut für Kognitions- und Neurowissenschaften
  • Affiliation:
    Author
    in Germany
Active Cristin person

Guido Biele

  • Affiliation:
    Author
  • Affiliation:
    Author
    at Max-Planck-Institut für Kognitions- und Neurowissenschaften
  • Affiliation:
    Author
    in Germany

Niki Vavatzanidis

  • Affiliation:
    Author
    in Germany

Philipp Kazzer

  • Affiliation:
    Author
    in Germany

Hauke R. Heekeren

  • Affiliation:
    Author
    at Max-Planck-Institut für Kognitions- und Neurowissenschaften
  • Affiliation:
    Author
    in Germany
  • Affiliation:
    Author
    at Freie Universität Berlin