Optimal decision-making depends on accurate representations of value and outcome. To date, scientists have used EEG and fMRI independently, identifying either the activation latencies or the brain regions associated with decision signals. As a result, a full spatiotemporal characterization of the process underlying simple value-based decisions and reward learning is still lacking. Here, I present a series of human multimodal neuroimaging studies in which I examined the separate influences of outcome valence and surprise on learning. Importantly, I show that linking fMRI brain activations with temporally specific EEG information can help identify distributed neural representations of interest and uncover latent brain states that would likely have remained unobserved with more conventional (e.g., univariate) analysis tools.
People and other animals learn the values of choices by observing the contingencies between those choices and their outcomes. However, decisions are not guided by choice-specific reward associations alone: macaques, for example, also maintain a memory of the general, average reward rate in an environment, the global reward state. Remarkably, the global reward state affects how each choice outcome is valued and influences future decisions.
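To make the idea concrete, the paragraph above can be sketched as a toy learning rule in which each outcome is evaluated relative to a running estimate of the environment's average reward rate, in the spirit of average-reward reinforcement learning. This is an illustrative sketch only, not the model used in the studies; the function name, learning rates `alpha` and `beta`, and the specific update form are all assumptions for exposition.

```python
def global_reward_update(values, choice, reward, reward_rate,
                         alpha=0.1, beta=0.05):
    """One learning step in which an outcome is judged against the
    global reward state rather than in absolute terms (illustrative).

    values      -- dict mapping choices to learned values
    reward_rate -- running estimate of the environment's average reward
    """
    # Relative prediction error: the outcome is compared both to the
    # choice's learned value and to the global reward state.
    delta = (reward - reward_rate) - values[choice]
    values[choice] += alpha * delta
    # The global reward state itself is tracked as a running average.
    reward_rate += beta * (reward - reward_rate)
    return values, reward_rate
```

Under this sketch, the same objective reward is valued less when the environment is generally rich (high `reward_rate`) than when it is poor, which is one way a global reward state could shape how individual outcomes are weighed.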
Here, by coupling single-trial electroencephalography with simultaneously acquired functional magnetic resonance imaging, we uncover the spatiotemporal dynamics of two separate but interacting value systems encoding decision outcomes.
Here, we offer evidence of temporally overlapping but largely distinct spatial representations of outcome valence and surprise in the human brain.
We carried out several meta-analyses of a large set of fMRI studies investigating the neural basis of the prediction error, the building block of learning in the brain. Our findings point to a sequential and distributed encoding of the different components of the error signal, with potentially distinct functional roles.
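The components of the error signal referred to above can be illustrated with a minimal Rescorla-Wagner-style update, in which the signed prediction error decomposes into a valence component (its sign: better or worse than expected) and a surprise component (its unsigned magnitude). This is a hedged sketch for exposition, not the model fit in the meta-analyses; the function name and learning rate `alpha` are assumptions.

```python
def rescorla_wagner_step(value, reward, alpha=0.2):
    """One Rescorla-Wagner update, returning the signed prediction
    error together with its valence and surprise components
    (illustrative decomposition)."""
    delta = reward - value               # signed prediction error
    valence = 1 if delta >= 0 else -1    # outcome better/worse than expected
    surprise = abs(delta)                # unsigned magnitude of the error
    value += alpha * delta               # value update driven by the error
    return value, delta, valence, surprise
```

Dissociating `valence` from `surprise` in this way mirrors the question the meta-analyses address: whether the brain encodes these components in the same regions at the same time, or sequentially across a distributed network.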