'The Effect of Incentives on Biases and Heuristics (Updated)' (AsPredicted #23463)
Author(s): This pre-registration is currently anonymous to enable blind peer review. It has one author.
Pre-registered on 2019/05/15 - 05:32 AM (PT)
1) Have any data been collected for this study already? It's complicated. We have already collected some data but explain in Question 8 why readers may consider this a valid pre-registration nevertheless.
2) What's the main question being asked or hypothesis being tested in this study? We study the effect of incentives on biases in judgement and decision-making. The biases we consider are anchoring, base-rate neglect (BRN), the cognitive reflection task (CRT), and the Wason selection task.
Our hypothesis is that increasing incentives will result in longer decision times, reflecting higher effort. We expect that this will reduce mistakes that result from inattention, and hence increase success in tasks such as the CRT, anchoring, BRN (when presented intuitively in the format of frequencies), and Wason (when presented intuitively in a social context). We expect that increased effort will not reduce biases that are caused by cognitive limitations, such as the Wason selection task (abstract formulation) and BRN (in the classic probability format).
3) Describe the key dependent variable(s) specifying how they will be measured. Anchoring (two questions per participant): the mean answer by anchor.
BRN (two questions per participant): the average distance (in absolute terms) between the reported and the correct (Bayesian) posterior; a worked sketch follows this list. A secondary measure will be the proportion of reports that are within 5 percentage points of the correct Bayesian posterior.
CRT (two questions per participant): the mean number of correct answers.
Wason (two questions per participant): the fraction of participants who turn over the correct cards.
All biases: decision times.
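To make the BRN measures concrete, here is a minimal Python sketch; the base rate, hit rate, and false-alarm rate are purely illustrative and are not the parameters of the actual study questions.

    # Illustration of the BRN dependent variables (hypothetical parameters,
    # not the values used in the actual questions).
    def bayes_posterior(base_rate, p_signal_given_true, p_signal_given_false):
        # Correct posterior P(true | signal) via Bayes' rule, in percent.
        p_signal = (base_rate * p_signal_given_true
                    + (1 - base_rate) * p_signal_given_false)
        return 100 * base_rate * p_signal_given_true / p_signal

    correct = bayes_posterior(0.01, 0.80, 0.10)   # roughly 7.5 for these example values
    reported = 75.0                               # a typical base-rate-neglect answer
    abs_distance = abs(reported - correct)        # primary measure: absolute distance
    approx_correct = abs_distance <= 5            # secondary measure: within 5 p.p.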
4) How many and which conditions will participants be assigned to? Each participant will be in one of two conditions: "No Incentive + Low Incentive" or "No Incentive + High Incentive." The randomization is at the session level.
We present two biases to each participant: first one in the no-incentive condition and then one in the low/high-incentive condition. In the no-incentive condition, there is no reward for a correct answer. In the low- and high-incentive conditions, a bonus can be earned for a correct answer. We randomize which two biases a participant works on and in which order.
Participants receive a fixed fee ($3.50 for transport and $1.50 for participation) and have the opportunity to earn a bonus of $1.30 (low incentive) or $130 (high incentive). All amounts are converted at the current market exchange rate.
The bonus is determined as follows. We randomly select one of the two questions. The bonus is awarded if the answer to that question is correct (or, in the case of anchoring and BRN, if the answer is at most 2 percentage points away from the correct answer).
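To illustrate the payment rule, a minimal sketch follows; the function and variable names are ours and hypothetical.

    # Sketch of the bonus rule for the two incentivized questions (names are
    # hypothetical; the 2 p.p. tolerance applies only to anchoring and BRN).
    import random

    def bonus_awarded(tasks, answers, correct_answers, tolerance_pp=2):
        i = random.randrange(2)                   # randomly select one of the two questions
        if tasks[i] in ("anchoring", "BRN"):
            return abs(answers[i] - correct_answers[i]) <= tolerance_pp
        return answers[i] == correct_answers[i]   # CRT / Wason: exact correctness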
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. We make pairwise comparisons of the outcome variables between incentive conditions (no/low/high). We use two-sided tests. In addition to the tests specified, where applicable we perform non-parametric (Mann-Whitney) tests to check for robustness against outliers.
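A minimal sketch of such a pairwise comparison, assuming the data sit in a pandas DataFrame with hypothetical columns 'condition' and a numeric outcome:

    # Pairwise comparison of an outcome between two incentive conditions
    # (column names are hypothetical).
    from scipy import stats

    def compare_conditions(df, cond_a, cond_b, outcome):
        a = df.loc[df["condition"] == cond_a, outcome]
        b = df.loc[df["condition"] == cond_b, outcome]
        t = stats.ttest_ind(a, b)                               # two-sided unpaired t-test
        u = stats.mannwhitneyu(a, b, alternative="two-sided")   # non-parametric robustness check
        return t.pvalue, u.pvalue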
Anchoring. We use a low and a high anchor. We test how the difference in mean answers across the anchors varies across conditions. We perform unpaired t-tests and a regression analysis with subject and question FEs. We predict a decrease in the effect of an anchor between no and low incentives, and a decrease between low and high incentives.
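One possible way to implement this regression in statsmodels is sketched below; the column names are hypothetical, and since subject fixed effects absorb the incentive-condition main effect, the anchor-by-condition interaction is the coefficient of interest.

    # Anchoring regression with subject and question fixed effects
    # (column names are hypothetical).
    import statsmodels.formula.api as smf

    def anchoring_regression(df):
        # df: one row per anchoring answer, with columns 'answer', 'high_anchor'
        # (0/1), 'condition', 'subject', and 'question'.
        model = smf.ols("answer ~ high_anchor + high_anchor:condition"
                        " + C(subject) + C(question)", data=df)
        return model.fit()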
BRN: We compare the mean difference (in absolute terms) between the reported answer and the Bayesian posterior across conditions. We perform unpaired t-tests. We also compare the proportion of subjects who give an approximately correct answer (within 5 percentage points of the correct answer) across conditions, using proportion tests. For the version with frequencies, we predict an increase in performance (lower average distance to the posterior, higher fraction of approximately correct answers) between no and low incentives, and an increase in performance between low and high incentives. For the classic probability format, we predict no difference between no and low incentives and no difference between low and high incentives.
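A sketch of the proportion test referenced here (and used again for Wason below), using statsmodels with hypothetical counts:

    # Two-sample test of proportions (counts are hypothetical).
    from statsmodels.stats.proportion import proportions_ztest

    successes = [38, 55]    # e.g., approximately correct answers in two conditions
    n_obs = [150, 150]      # participants per condition
    zstat, pvalue = proportions_ztest(successes, n_obs)   # two-sided by default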
CRT: We compare the mean number of correct answers across conditions, performing unpaired t-tests. We predict an increase in performance between no and low incentives, and an increase in performance between low and high incentives.
Wason: We compare the proportion of subjects who give the correct answer across conditions, performing proportion tests. For the intuitive version, we predict an increase in performance between no and low incentives, and an increase in performance between low and high incentives. For the abstract version, we predict no difference between no and low incentives and no difference between low and high incentives.
We also report for each bias the mean decision time across conditions, performing unpaired t-tests. We expect an increase in decision times between no and low incentives and an increase between low and high incentives.
The robustness of the results to the addition of controls and potential moderators will be assessed using linear regressions of the above main dependent variables, with the control variables added (see Question 8).
6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations. No participant who completed the experiment will be excluded from the analysis.
7) How many observations will be collected or what will determine sample size? No need to justify decision, but be precise about exactly how the number will be determined. We aim for 1140 participants in the main experiment (excluding pilots to test software and comprehension), determined by the available budget. We stop early if we run out of budget or are unable to recruit more participants on the site. Participants are recruited in Nairobi (Kenya) by Busara.
8) Anything else you would like to pre-register? (e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?) Participants complete a follow-up survey, including test questions to verify that they know what the bonus level was, their confidence in getting the answer right, Raven's matrices, risk-aversion questions, sociodemographic information (such as gender, age, and estimated monthly consumption), and academic information (such as university, area of study, and standardized test scores). These questions will be used both to summarize the set of participants and to identify potential moderators of the results.
We have already collected pilot data on site. No data for the main study have been collected yet. The primary aim of the pilots was to test comprehension and to make sure that baseline performance (in the no-incentive condition) provided an appropriate benchmark level (not too close to the boundaries).
NOTE: this is an update of the pre-registered plan #22112. That plan was written before piloting the questions with the relevant participants. Another Wason question has been added, as well as confidence questions in the survey at the end. The link to that plan will be available upon request.