This dissertation employs experimental designs and individual-differences methods to answer several questions: whether, and to what degree, people can be taught to outperform a mechanical equation; whether assessors can learn to improve the accuracy of their judgments; whether judges can be made less overconfident in their judgment strategies; how any changes in accuracy relate to any changes in confidence; which individual differences characterize those who predict, and learn to predict, most accurately; and when, and to what extent, assessors gain insight into the most accurate predictive approach.
The most extensive background information, provided before discussion of the studies conducted by this author, concerns the Lens Model as a toolkit for measuring accuracy and its determinants. Although this portion of the dissertation is detailed and intricate, it is necessary. First, understanding the Lens Model illuminates the determinants of judgment accuracy. Second, it shows how the judge can and cannot outperform the mechanical approach. Third, it exposes the limitations of prior research. Fourth, it is essential if the reader is to fully understand the results, discussion, and conclusions of the author’s experiments.
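The Lens Model’s decomposition of accuracy can be sketched numerically. The illustration below uses hypothetical data and variable names, not the dissertation’s materials, and assumes ordinary least-squares cue models on both sides of the lens. It estimates achievement (r_a), environmental predictability (R_e), judgmental consistency (R_s), linear knowledge (G), and unmodeled knowledge (C), then confirms the standard Lens Model Equation identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two cues, a criterion, and a judge's predictions (all simulated).
X = rng.normal(size=(n, 2))
criterion = X @ np.array([0.6, 0.3]) + rng.normal(scale=0.7, size=n)
judgment = X @ np.array([0.5, 0.4]) + rng.normal(scale=0.9, size=n)

def linear_fit(X, t):
    """OLS predictions of target t from cues X (with intercept)."""
    A = np.column_stack([np.ones(len(t)), X])
    coef, *_ = np.linalg.lstsq(A, t, rcond=None)
    return A @ coef

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

yhat = linear_fit(X, criterion)   # modeled environment
jhat = linear_fit(X, judgment)    # modeled judge ("policy capturing")

r_a = r(judgment, criterion)                 # achievement
R_e = r(criterion, yhat)                     # environmental predictability
R_s = r(judgment, jhat)                      # judgmental consistency
G   = r(yhat, jhat)                          # linear knowledge
C   = r(judgment - jhat, criterion - yhat)   # unmodeled knowledge

# Lens Model Equation: r_a = G*R_e*R_s + C*sqrt(1-R_e^2)*sqrt(1-R_s^2)
lme = G * R_e * R_s + C * np.sqrt(1 - R_e**2) * np.sqrt(1 - R_s**2)
print(f"achievement r_a = {r_a:.3f}, LME reconstruction = {lme:.3f}")
```

With least-squares models fit to the same cues, the identity holds exactly in sample, which is what makes the Lens Model useful for asking why achievement is limited: by task predictability, by judgmental inconsistency, or by poor knowledge.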
Also reviewed are the “skill score” as an alternative to the Lens Model for measuring accuracy and the major considerations involved in teaching people to improve their accuracy and temper their confidence. The “skill score” provides information about elevation and scatter that is not available from the Lens Model. Final preliminaries focus on experimental design: how and why a disordinal interaction is central to the experiments conducted by the author, the number of experimental cues (predictors) employed, cue redundancy (intercorrelation), the importance of representative design, which types of experimental feedback are conducive to learning, and the impact of incentives on judgment accuracy.
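One common formulation of the skill score (following Murphy’s decomposition; whether the dissertation uses this exact variant is an assumption here) defines skill relative to a criterion-mean baseline and decomposes it into a correlation term minus a scatter (conditional-bias) term and an elevation (mean-bias) term. A minimal sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Simulated criterion values and a judge's predictions of them.
criterion = rng.normal(loc=50, scale=10, size=n)
prediction = 0.8 * criterion + rng.normal(loc=5, scale=5, size=n)

def skill_score(pred, crit):
    """1 - MSE(pred) / MSE of the criterion-mean baseline."""
    mse = np.mean((pred - crit) ** 2)
    mse_base = np.mean((crit.mean() - crit) ** 2)
    return 1 - mse / mse_base

rho = np.corrcoef(prediction, criterion)[0, 1]
s_p, s_c = prediction.std(), criterion.std()  # population SDs (ddof=0)

correlation_term = rho ** 2
scatter_term = (rho - s_p / s_c) ** 2                 # conditional bias
elevation_term = ((prediction.mean() - criterion.mean()) / s_c) ** 2  # mean bias

ss = skill_score(prediction, criterion)
print(f"SS = {ss:.3f} = {correlation_term:.3f} "
      f"- {scatter_term:.3f} - {elevation_term:.3f}")
```

The elevation and scatter terms are exactly the components the abstract notes are invisible to the Lens Model, which is correlation-based and therefore insensitive to mean bias and miscalibrated spread.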
The author conducted two studies, one in Fall 2009 and another in Spring 2010. In the experimental portions of the studies, subjects predicted job performance for hypothetical job candidates based on each candidate’s cognitive ability test score and on how interesting or boring the candidate was expected to find the job. The most accurate clinical prediction strategy required applying the knowledge that the correlation between cognitive ability and job performance was positive when the applicant was expected to find the job interesting but negative when the applicant was expected to find it boring (i.e., a disordinal interaction). The competing mechanical model was a linear version of a model that incorporated the disordinal interaction. Subjects were asked how confident they were in the accuracy of their predictions, and, to assess insight, they were asked to describe their judgment strategies in narrative self-reports.
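The disordinal interaction can be made concrete with a toy simulation; the numbers below are illustrative and are not the dissertation’s actual stimuli. Within each interest condition the ability-performance correlation is strong, but its sign flips, so pooling across conditions washes the signal out:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Hypothetical candidates: a cognitive ability score and whether the
# candidate is expected to find the job interesting (+1) or boring (-1).
ability = rng.normal(size=n)
interest = rng.choice([-1.0, 1.0], size=n)

# Disordinal interaction: ability predicts higher performance on
# interesting jobs and lower performance on boring ones.
performance = ability * interest + rng.normal(scale=0.5, size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_interesting = corr(ability[interest > 0], performance[interest > 0])
r_boring = corr(ability[interest < 0], performance[interest < 0])
r_overall = corr(ability, performance)

print(f"interesting: r = {r_interesting:+.2f}")  # strongly positive
print(f"boring:      r = {r_boring:+.2f}")       # strongly negative
print(f"pooled:      r = {r_overall:+.2f}")      # near zero
```

This is why the interaction is diagnostic: a judge who attends only to the overall ability-performance relationship sees almost no signal, while one who conditions on expected interest can predict well.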
Results were fascinating, although many were inconclusive, often for lack of statistical significance. Subjects could outperform the mechanical model under certain experimental conditions, but this superiority was not statistically significant. Some individuals, experimental groups, and subject-pool means changed over time as expected in accuracy, in the determinants of accuracy, and in confidence, but these changes often failed to reach statistical significance. Nevertheless, there was some evidence that criterion-related feedback about the disordinal interaction led to improved accuracy and decreased confidence, while its absence had the opposite effects. Several individual differences were significantly associated with accuracy; cognitive ability was the most pervasively and significantly related. Findings for insight were complicated by the inconsistent nature of subjects’ narratives. Nevertheless, agreement between raters of subjects’ insight was relatively high, and ratings of insight often correlated significantly with objective measures of accuracy. Moreover, insight, however measured, was often achieved, and when achieved, it usually came early. (Abstract shortened by UMI.)