Psychology Ph.D. Dissertations


Inferred Statistics and Ecological Validity in Bayesian Reasoning


Degree Name

Doctor of Philosophy (Ph.D.)



First Advisor

Richard Anderson (Advisor)

Second Advisor

Mary Hare (Committee Member)

Third Advisor

Michael Zickar (Committee Member)

Fourth Advisor

Tong Sun (Committee Member)


Research on Bayesian reasoning has indicated that people perform poorly, usually providing non-normative responses. However, the problems used to assess Bayesian reasoning have been criticized for lacking ecological validity. To allow computation of a normative response, numerical information has been provided explicitly, and responses under these conditions have been shown to demonstrate base rate neglect. However, previous research has shown people to be highly attentive to base rates learned from experience (Nelson, Biernat, & Manis, 1990; Mastropasqua, Crupi, & Tentori, 2010). I hypothesize that performance improves when the relationship between variables can be visualized. Although normative Bayesian judgment is determined by a mathematical formula, in most situations people make decisions without such explicit information and may develop effective heuristics for making likelihood judgments. The present study asked participants to make judgments using inferred base rates as well as inferred diagnostic information, a design that has not been used in any previous research. Because all of the statistics were inferred from experience, it was expected that likelihood judgments would be made relatively intuitively and efficiently, avoiding base rate neglect and Bayesian conservatism. Performance on the inferred-statistics problem was compared within participants to performance on problems using natural frequencies and probabilities. Previous research has not compared performance on inferred and explicit statistics within the same sample. Contrary to my hypothesis, participants did not perform better when using inferred statistics than when using probabilities or natural frequencies. The present research implies that poor performance on Bayesian reasoning tasks is not the result of the way information is presented or of conflict between participants’ beliefs and provided statistics.
Instead, it provides evidence that people evaluate provided statistics in a way that is equivalent to judgments made from inferred statistics, and that participants may use heuristics when provided with probabilities or natural frequencies. Although the results provided evidence for individual differences in Bayesian reasoning, numeracy did not predict performance.
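For reference, the normative response discussed in the abstract is given by Bayes’ theorem. The worked numbers below are a standard textbook illustration of base rate neglect, not data or parameters from the present study:

```latex
% Bayes' theorem: the normative posterior probability of hypothesis H given datum D
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D \mid H)\,P(H) + P(D \mid \neg H)\,P(\neg H)}

% Illustrative values: base rate P(H) = .01, hit rate P(D|H) = .80,
% false-alarm rate P(D|~H) = .096
P(H \mid D) = \frac{(.80)(.01)}{(.80)(.01) + (.096)(.99)}
            = \frac{.008}{.103} \approx .078
```

Base rate neglect appears when respondents answer near the hit rate (.80) rather than the much lower posterior (.078). The same problem in natural frequencies reads: of 1,000 cases, 10 have the condition and 8 of those test positive, while about 95 of the remaining 990 also test positive, so the answer is 8/(8 + 95) ≈ .078.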