#67932 | AsPredicted

'Accuracy Incentives and Misinformation -- Study 3'
(AsPredicted #67932)

Created:       06/07/2021 09:18 PM (PT)

This is an anonymized version of the pre-registration.  It was created by the author(s) to use during peer-review.
A non-anonymized version (containing author names) should be made available by the authors when the work it supports is made public.

1) Have any data been collected for this study already?
No, no data have been collected for this study yet.

2) What's the main question being asked or hypothesis being tested in this study?
This study is a replication and extension of Studies 1 and 2. We plan to test the following questions that we also tested in Studies 1 and 2:

H1: Do accuracy incentives improve people's ability to discern true versus false news?
H2: Do accuracy incentives reduce partisan bias in accuracy judgements?
H3: Do accuracy incentives specifically increase belief in politically-incongruent true news?

As an extension of Studies 1 and 2, we plan to add another condition where we eliminate the sources from the news stimuli. We predict that:

H4: There will be a smaller difference between the incentives and the control condition when no sources are present on the news stimuli as opposed to when sources are present on the news stimuli.

3) Describe the key dependent variable(s) specifying how they will be measured.
As with Studies 1 and 2, we will calculate a truth discernment score, a perceived accuracy of fake news score, a perceived accuracy of true news score, and a partisan bias score. The scores will be calculated as follows:

Fake news score: The average perceived accuracy of 8 fake news items (4 Democrat and 4 Republican)
True news score: The average perceived accuracy of 8 true news items (4 Democrat and 4 Republican)
Truth discernment score: The true news score minus the fake news score.
Partisan bias score: The average perceived accuracy score of politically-congruent minus the average perceived accuracy score of politically-incongruent headlines. We will create separate partisan bias scores for true and false news, as well as a total partisan bias score.

We will also calculate similar scores for participants' self-reported likelihood of sharing real and fake news.

Note: while perceived accuracy is measured on a continuous scale within the survey, we will code it on a dichotomous scale for analysis. We will also report results for the continuous scale as a robustness check.

For this specific study, we have added 8 additional true and false news stimuli. We will run analyses with the 16 original stimuli as well as with the complete set of 24 stimuli.
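The scoring rules above can be sketched in code. This is a minimal illustration with made-up ratings for one hypothetical participant; the variable names and example values are ours, not part of the survey materials:

```python
# Illustrative scoring for one hypothetical participant, following the
# definitions above. Ratings are dichotomized (1 = rated accurate, 0 = not);
# in each list, the first 4 items are Democrat-consistent and the last 4
# Republican-consistent, per the 4 + 4 design described above.
fake_ratings = [0, 1, 0, 0, 1, 0, 0, 0]  # 8 false headlines
true_ratings = [1, 1, 1, 1, 1, 0, 0, 1]  # 8 true headlines

def mean(xs):
    return sum(xs) / len(xs)

fake_score = mean(fake_ratings)        # perceived accuracy of fake news
true_score = mean(true_ratings)        # perceived accuracy of true news
discernment = true_score - fake_score  # truth discernment score

# Partisan bias: congruent minus incongruent perceived accuracy.
# Assuming a Democrat participant, the Democrat-consistent items
# (the first 4 of each list) are the politically congruent ones.
congruent = mean(fake_ratings[:4] + true_ratings[:4])
incongruent = mean(fake_ratings[4:] + true_ratings[4:])
partisan_bias = congruent - incongruent
```

The same arithmetic applies to the sharing-likelihood items, and the continuous-scale robustness check simply swaps the dichotomized ratings for raw scale values.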

4) How many and which conditions will participants be assigned to?
Participants will be assigned to one of four conditions: 1) an accuracy incentives condition (with sources), 2) a control condition (with sources), 3) an accuracy incentives condition (without sources), or 4) a control condition (without sources).

In the accuracy incentive condition, participants will receive a bonus of up to $1.00 for correctly identifying true and false news.

5) Specify exactly which analyses you will conduct to examine the main question/hypothesis.
Analyses will be performed using t-tests, ANOVA, correlations, and regressions.

Following Studies 1 and 2:
1. We will test if there is a main effect of condition on the truth discernment and accuracy scores.
2. We will test if accuracy incentives reduce the partisan gap in accuracy judgements.
3. We will test whether people report sharing more accurate (and less inaccurate) news in the experimental condition as compared to the control condition.
4. We will test for an interaction between the condition and veracity, as well as the condition and political congruence. We will test if the accuracy incentives specifically increase belief in politically-incongruent true news.
5. We will test these hypotheses separately for the headlines with and without source cues, and will probe for an interaction between source cues and condition on our main DVs.
6. We will also combine the results from Studies 1, 2, and 3 (only the accuracy incentive and control conditions with sources present – approximately 1,500 participants) to have a larger sample size to test for moderation. We will test if the effect of the accuracy incentives is moderated by political knowledge, cognitive reflection, income, education, out-party animosity, political partisanship, identity strength, and intellectual humility.
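The condition-by-veracity interaction in point 4 can be read as a difference-in-differences on cell means. A minimal sketch, using illustrative numbers of our own (not predictions from the pre-registration):

```python
# Hypothetical cell means of perceived accuracy by condition and veracity;
# these values are illustrative only, not predicted results.
means = {
    ("incentive", "true"): 0.80, ("incentive", "fake"): 0.20,
    ("control", "true"): 0.70, ("control", "fake"): 0.25,
}

# The condition x veracity interaction, read as a difference-in-differences:
# does the incentive widen the true-minus-fake gap relative to control?
gap_incentive = means[("incentive", "true")] - means[("incentive", "fake")]
gap_control = means[("control", "true")] - means[("control", "fake")]
interaction = gap_incentive - gap_control  # positive => incentives improve discernment
```

The condition-by-congruence interaction (H3) has the same structure, with politically congruent versus incongruent headlines in place of true versus fake.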

6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations.
We will exclude participants who fail our bot check, our attention check, or say they were responding randomly.

7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.

As in Studies 1 and 2, we will aim for 250 people per condition (a total sample size of 1,000). A power analysis is included in the pre-registrations for Studies 1 and 2. However, the final sample size may be smaller than this due to the challenges of recruiting a large enough nationally representative sample and the exclusions described above.

8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)