
The Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Models

dc.contributor.authorAnnis, Jeffrey
dc.identifier.citationDutilh, G., Annis, J., Brown, S. D., Cassey, P., Evans, N. J., Grasman, R., Hawkins, G. E., Heathcote, A., Holmes, W. R., Krypotos, A. M., Kupitz, C. N., Leite, F. P., Lerche, V., Lin, Y. S., Logan, G. D., Palmeri, T. J., Starns, J. J., Trueblood, J. S., van Maanen, L., van Ravenzwaaij, D., … Donkin, C. (2019). The Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Models. Psychonomic Bulletin & Review, 26(4), 1051–1069.
dc.descriptionOnly Vanderbilt University affiliated authors are listed on VUIR. For a full list of authors, access the version of record at
dc.description.abstractMost data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations, and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as those from the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.en_US
dc.description.sponsorshipGEH was supported by an Australian Research Council Discovery Early Career Researcher Award DE170100177. JA was supported by National Eye Institute Training Grant T32-EY07135. JST and WRH were supported by National Science Foundation Grant SES-1556325. JV and CNK were supported by grants number 1230118, 1534472, and 1658303 from the National Science Foundation. TJP was supported by National Science Foundation Grant SBE-1257098 and National Eye Institute Grants RO1-EY021833 and P30-EY008126.en_US
dc.publisherPsychonomic Bulletin & Reviewen_US
dc.rightsCopyright © The Author(s) 2018. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
dc.subjectCognitive modelingen_US
dc.subjectResponse Timesen_US
dc.subjectDiffusion Modelen_US
dc.titleThe Quality of Response Time Data Inference: A Blinded, Collaborative Assessment of the Validity of Cognitive Modelsen_US
