Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task
Skatova, Anya; Chan, Patricia A.; Daw, Nathaniel
Authors
Anya Skatova
Patricia A. Chan
Nathaniel Daw
Abstract
Prominent computational models describe a neural mechanism for learning from reward prediction errors, and it has been suggested that variations in this mechanism are reflected in personality factors such as trait extraversion. However, although trait extraversion has been linked to improved reward learning, it is not yet known whether this relationship is selective for model-free reinforcement learning, the particular computational strategy associated with error-driven learning, or extends to model-based learning, a second strategy the brain is also known to employ. In the present study, we tested this relationship by examining whether scores on an extraversion scale predict individual differences in the balance between model-based and model-free learning strategies in a sequentially structured decision task designed to distinguish between them. In previous studies with this task, participants have shown a combination of both types of learning, but with substantial individual variation in the balance between them. In the current study, extraversion predicted worse behavior across both sorts of learning. However, the hypothesis that extraverts would be selectively better at model-free reinforcement learning held up among a subset of the more engaged participants, and overall, higher task engagement was associated with a more selective pattern by which extraversion predicted better model-free learning. These findings point to a relationship between a broad personality orientation and detailed computational learning mechanisms, suggesting a rich link between core neuro-computational processes and broader life orientations and outcomes.
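For readers unfamiliar with the distinction the abstract draws, the sketch below illustrates in Python how model-free and model-based values are typically combined in a hybrid agent for a two-step task of this kind (in the style of Daw et al., 2011). It is a minimal illustration under assumed settings, not the authors' analysis code: the learning rate `alpha`, inverse temperature `beta`, mixing weight `w`, the 0.7 common-transition probability, and the reward probabilities are all illustrative assumptions.

```python
# Minimal sketch (not the authors' analysis code) of a hybrid
# model-free / model-based agent for a two-step decision task.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

alpha, beta, w = 0.3, 5.0, 0.5   # learning rate, inverse temperature, MB weight
P_COMMON = 0.7                   # probability of the "common" transition

Q_mf = np.zeros(2)               # model-free values of the two first-stage actions
Q2 = np.zeros(2)                 # learned values of the two second-stage states
p_reward = np.array([0.8, 0.2])  # assumed reward probability in each state

# Known transition structure: action 0 commonly leads to state 0, action 1 to state 1.
T = np.array([[P_COMMON, 1 - P_COMMON],
              [1 - P_COMMON, P_COMMON]])

def softmax(q, beta):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

for trial in range(200):
    # Model-based values: expected second-stage value under the transition model.
    Q_mb = T @ Q2

    # Hybrid choice: weighted mixture of model-based and model-free values.
    Q_net = w * Q_mb + (1 - w) * Q_mf
    a = rng.choice(2, p=softmax(Q_net, beta))

    # Sample the transition and the reward outcome.
    state = a if rng.random() < P_COMMON else 1 - a
    r = float(rng.random() < p_reward[state])

    # TD updates: second-stage state value, then first-stage model-free value
    # (simplified here by propagating the reward directly to stage one).
    Q2[state] += alpha * (r - Q2[state])
    Q_mf[a] += alpha * (r - Q_mf[a])

print("Q_mf:", Q_mf, "Q_mb:", T @ Q2)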
Citation
Skatova, A., Chan, P. A., & Daw, N. (2013). Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task. Frontiers in Human Neuroscience, 7, Article 525. https://doi.org/10.3389/fnhum.2013.00525
| Journal Article Type | Article |
| --- | --- |
| Publication Date | Sep 3, 2013 |
| Deposit Date | Apr 23, 2014 |
| Publicly Available Date | Apr 23, 2014 |
| Journal | Frontiers in Human Neuroscience |
| Electronic ISSN | 1662-5161 |
| Publisher | Frontiers Media |
| Peer Reviewed | Peer Reviewed |
| Volume | 7 |
| Article Number | 525 |
| DOI | https://doi.org/10.3389/fnhum.2013.00525 |
| Public URL | https://nottingham-repository.worktribe.com/output/718118 |
| Publisher URL | http://journal.frontiersin.org/Journal/10.3389/fnhum.2013.00525/full |
| Additional Information | This document is protected by copyright and was first published by Frontiers. All rights reserved. It is reproduced with permission. |
Files
fnhum-07-00525_(1)_(1).pdf
(2.2 MB)
PDF
Copyright Statement
Copyright information regarding this work can be found at the following address: http://creativecommons.org/licenses/by/4.0