Author(s) This pre-registration is currently anonymous to enable blind peer-review. It has 2 authors.
Pre-registered on 11/30/2020 10:04 AM (PT)
1) Have any data been collected for this study already? No, no data have been collected for this study yet.
2) What's the main question being asked or hypothesis being tested in this study? H1: Do accuracy incentives improve people’s ability to discern true versus false news?
H2: Do accuracy incentives reduce the political gap in accuracy judgements?
H3: Do accuracy incentives also lead people to share more accurate (and less inaccurate) news on social media?
3) Describe the key dependent variable(s) specifying how they will be measured. Following Pennycook & Rand (2019), we will calculate a truth discernment score, a perceived accuracy of fake news score, and a perceived accuracy of true news score. The scores will be calculated as follows (an illustrative code sketch appears after these definitions):
Fake news score: The average perceived accuracy of 8 fake news items (4 Democrat and 4 Republican).
True news score: The average perceived accuracy of 8 true news items (4 Democrat and 4 Republican).
Truth discernment score: The true news score minus the fake news score.
We will also calculate similar scores for participants’ self-reported likelihood of sharing real and fake news.
We will create a “Political Gap” score, defined as the average perceived accuracy of politically congruent headlines minus the average perceived accuracy of politically incongruent headlines. We will create separate political gap scores for true versus false news.
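For concreteness, the sketch below shows one way these scores could be computed. It is illustrative only; the column names (participant, item_type, congruent, rating) are hypothetical stand-ins, not the actual variable names in our data.

```python
import pandas as pd

# Hypothetical long-format ratings: one row per participant x headline.
# item_type: "true" or "fake"; congruent: headline matches participant's party;
# rating: perceived accuracy of the headline.
ratings = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "item_type":   ["true", "true", "fake", "fake"] * 2,
    "congruent":   [True, False, True, False] * 2,
    "rating":      [3.5, 3.0, 2.0, 1.5, 3.0, 2.5, 2.5, 2.0],
})

# Per-participant mean perceived accuracy of true and fake items.
scores = ratings.pivot_table(index="participant", columns="item_type",
                             values="rating", aggfunc="mean")
scores["discernment"] = scores["true"] - scores["fake"]  # truth discernment score

# Political gap: congruent minus incongruent, computed separately for true and fake items.
gap = ratings.pivot_table(index=["participant", "item_type"],
                          columns="congruent", values="rating", aggfunc="mean")
gap["political_gap"] = gap[True] - gap[False]
print(scores, gap, sep="\n")
```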
4) How many and which conditions will participants be assigned to? Participants will be assigned to one of two conditions: 1) an accuracy incentive condition or 2) a control condition. In the accuracy incentive condition, participants will receive a bonus of up to $1.00 for correctly identifying true and false news.
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. Analyses will be performed using t-tests, ANOVAs, correlations, and regressions (an illustrative analysis sketch in code follows the list below).
Primary Analyses:
1. We will test if there is a main effect of condition on the truth discernment and accuracy scores.
2. We will test if accuracy incentives reduce the partisan gap in accuracy judgements.
3. We will test whether people report sharing more accurate (and less inaccurate) news in the experimental condition as compared to the control condition.
Secondary Analyses:
4. We will also test these hypotheses separately for news type (fake news items and real news items), and probe for interactions between the condition and news type. We expect the effects of accuracy incentives to be larger for fake news items, as these are more obviously false, but are interested in both fake and true news.
5. We will test these hypotheses for both politically consistent and politically inconsistent headlines, and probe for an interaction between the accuracy incentives and political consistency.
6. We will test these hypotheses separately for Republican participants and Democrat participants, and probe for an interaction between party identification and condition on our main DVs.
7. We will test whether the main effects are moderated by cognitive reflection, political knowledge, education, or feelings toward Republicans and Democrats.
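As an illustration of the primary tests (not the committed analysis script), a minimal sketch follows. The data frame and its columns (condition, party, discernment, gap_fake) are hypothetical stand-ins for the per-participant scores described above.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical per-participant scores; all column names and values are
# assumptions for illustration only.
df = pd.DataFrame({
    "condition":   ["incentive", "control"] * 4,
    "party":       ["Republican"] * 4 + ["Democrat"] * 4,
    "discernment": [1.2, 0.8, 1.1, 0.7, 1.4, 0.9, 1.3, 1.0],
    "gap_fake":    [0.2, 0.6, 0.3, 0.5, 0.1, 0.4, 0.2, 0.5],
})

# 1. Main effect of condition on truth discernment (independent-samples t-test).
t, p = stats.ttest_ind(df.loc[df.condition == "incentive", "discernment"],
                       df.loc[df.condition == "control", "discernment"])

# 2. Do incentives reduce the political gap in perceived accuracy of fake news?
gap_model = smf.ols("gap_fake ~ condition", data=df).fit()

# 6. Condition x party interaction on discernment.
moderation = smf.ols("discernment ~ condition * party", data=df).fit()
print(t, p, gap_model.params, moderation.params, sep="\n")
```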
6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations. We will exclude participants who fail our bot check, our attention check, or say they were responding randomly.
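A minimal sketch of how this exclusion rule could be applied; the flag columns below are hypothetical names, not the actual survey variables.

```python
import pandas as pd

# Hypothetical boolean flags recorded for each participant.
raw = pd.DataFrame({
    "participant":            [1, 2, 3],
    "failed_bot_check":       [False, True, False],
    "failed_attention_check": [False, False, False],
    "random_responding":      [False, False, True],
})

# Keep only participants who pass all three checks.
clean = raw[~(raw["failed_bot_check"]
              | raw["failed_attention_check"]
              | raw["random_responding"])]
print(clean)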
7) How many observations will be collected or what will determine sample size? No need to justify decision, but be precise about exactly how the number will be determined. A power analysis conducted using G*Power indicates that we would need a sample size of 210 to detect a medium effect size of d = 0.50. We doubled that sample size because we want to test for an effect in both Republicans and Democrats, and oversampled to account for potential exclusions. Thus, we will sample 500 U.S. participants. 250 participants will be Republicans and the other 250 will be Democrats, as defined by the online participant recruitment platform Prolific Academic.
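The G*Power figure can be approximated in code. The α and power settings below are our assumptions (they are not stated above); they are the settings that reproduce a total N of roughly 210 for d = 0.50.

```python
# Approximate reproduction of the G*Power calculation for an independent-samples
# t-test. Alpha and power are assumptions (not stated in this pre-registration)
# chosen because they reproduce a total N of roughly 210 for d = 0.50.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.95, alternative="two-sided")
print(round(n_per_group), "per group;", 2 * round(n_per_group), "in total")  # ~105; ~210
```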
8) Anything else you would like to pre-register? (e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)