#48266 | AsPredicted

As Predicted: Quality of evidence communication in an ecologically valid context (#48266)


Created:       09/24/2020 07:07 AM (PT)

This is an anonymized version of the pre-registration. It was created by the author(s) to use during peer-review.
A non-anonymized version (containing author names) should be made available by the authors when the work it supports is made public.

1) Have any data been collected for this study already?
No, no data have been collected for this study yet.

2) What's the main question being asked or hypothesis being tested in this study?
In this study we are investigating quality of evidence communication in an ecologically valid context, by exploring the effects of different types of public health information (in the form of an infographic-type visual) on people’s trust in the information, their perceptions of the effectiveness of the intervention detailed in the public health information, their behavioural uptake, and their policy support. Specifically, we are providing participants with information on which behaviours protect against COVID-19 infection or transmission. People receive information on the chance of infection or transmission with and without wearing eye protection (in percent). This information is taken from a real-world infographic, which situates our study in an ecologically valid context. In addition to the chance of infection or transmission with and without eye protection, people are shown the level of quality of the evidence that underlies the reported effectiveness of the intervention.
For the purposes of this study we are varying the quality of evidence level between ‘high’ and ‘low’. We are also testing potential effects of terminology: the original infographic used the term ‘certainty of evidence’, and we are additionally testing effects of using the term ‘quality of evidence’. Both terms are frequently used in the literature and in real-world contexts; however, to our knowledge no empirical assessments of their effects have yet been conducted. Thus, this study employs a 2x2 (quality terminology x quality level) between-subjects design.
We are seeking to answer several research questions:
Do ‘certainty of evidence’ and ‘quality of evidence’ mean the same thing to people? Are there any differences in effects depending on whether the quality of evidence information is presented as ‘quality of evidence’ versus ‘certainty of evidence’? If participants understand the two terms as meaning the same thing, there should be no main effect of terminology.
Are effects different depending on whether the effectiveness is presented as having low versus high quality underlying evidence? We hypothesize that people’s trust in the information, their perception of the effectiveness of the intervention, and their likelihood of behavioural uptake and policy support are higher for the group that is shown ‘high’ quality of evidence compared to the group that is shown ‘low’ quality of evidence (main effect of quality level).
As a secondary research question we are also exploring whether the terminology used to refer to quality of evidence (i.e. ‘quality of evidence’ versus ‘certainty of evidence’) affects people’s understanding.

3) Describe the key dependent variable(s) specifying how they will be measured.
Key measures are perceived trust (e.g. ‘How trustworthy do you think the information you saw on the effectiveness of eye protection is?’, 7-point scale, not trustworthy at all – very trustworthy), perceived effectiveness of the intervention detailed in the public health information (‘How effective do you think eye protection is for reducing the chance of infection or transmission of COVID-19?’, 7-point scale, not effective at all – very effective), likelihood of behavioural uptake (‘How likely are you to wear eye protection when in busy public places?’, 7-point scale, not at all likely – very likely), and policy support (‘To what extent do you think the government should require people to wear eye protection in busy public places?’, 7-point scale, not at all – very much).
Secondary measures:
Understanding of the information (e.g. ‘How easy or difficult did you find the information on the effectiveness of eye protection to understand?’, 7-point scale, very easy – very difficult).
We are also assessing how much the presented infographic has shifted people’s trust and behavioural intentions (e.g. ‘How much more or less likely are you to wear eye protection as a result of the information in the infographic?’, 7-point scale, a lot less likely – a lot more likely), and we are exploring people’s priors (e.g. ‘Independent of what we might have told you in this study, how high or low do you think the quality of the evidence underlying the effectiveness of eye protection is?’, slider, low – high) and the (in-)congruency between priors and the presented information (e.g. ‘To what extent did the quality of evidence level underlying the effectiveness of eye protection as shown in the infographic match what you thought it was?’, 7-point scale, I thought the quality of evidence level was much lower – I thought the quality of evidence level was much higher).


4) How many and which conditions will participants be assigned to?
Participants will be randomly assigned to one of the four conditions described above, i.e. quality of evidence low, quality of evidence high, certainty of evidence low, and certainty of evidence high.
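The assignment procedure can be sketched in a few lines of Python. This is a minimal illustration only; in practice the survey platform's own randomiser would handle this, and the `assign` helper is hypothetical:

```python
import random

# The four cells of the 2x2 (terminology x quality level) design
CONDITIONS = [
    ("quality", "low"), ("quality", "high"),
    ("certainty", "low"), ("certainty", "high"),
]

def assign(participant_ids, seed=0):
    """Draw each participant independently into one of the four cells.

    Simple (unblocked) randomisation; a fixed seed is used here only so
    the sketch is reproducible.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}
```

With simple randomisation the cell sizes vary by chance; blocked randomisation would be the alternative if exactly equal cells were required.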

5) Specify exactly which analyses you will conduct to examine the main question/hypothesis.
To address our research questions detailed above, we will run generalized linear models (including OLS and ANOVA) to investigate effects on our various dependent measures. All tests are two-tailed. We will explore correlations between our conceptually related measures (e.g. our various measures of perceived trustworthiness); if correlations are 0.7 or higher, we will combine the measures into an index (testing for adequate levels of Cronbach’s alpha). In addition to the main-effects analyses outlined above, we will run exploratory interaction analyses between quality level and quality terminology, and between quality level and people’s priors regarding the effectiveness and the quality of evidence level of the intervention, in addition to controlling for people’s priors in our models. We will also run a mediation analysis exploring the mechanism of the hypothesized difference in effects between the high- and low-quality groups. We predict that providing people with high versus low quality of evidence leads to increased trust in the information, which can in turn affect their behaviour and decision making, e.g. making them more likely to wear eye protection.
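The internal-consistency check behind the index-building rule uses the standard Cronbach's alpha formula, α = k/(k−1) · (1 − Σ var(item) / var(total)). A plain-Python sketch (the helper name `cronbach_alpha` is ours, not part of any pre-specified analysis script):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of items.

    `items` is a list of items, each a list of scores with one score per
    participant, in the same participant order across items.
    """
    k = len(items)
    n = len(items[0])

    def pvar(xs):
        # Population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Per-participant total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(pvar(item) for item in items) / pvar(totals))
```

Two perfectly correlated items yield α = 1.0; by convention, values of roughly 0.7 or higher are taken as adequate internal consistency for forming an index.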

6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations.
We will exclude participants who do not complete our survey. Additionally, we will exclude participants who fail an attention check, as those participants likely did not engage with the study, rendering their data unusable.

7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.

We will sample 949 participants, providing 95% power at alpha level 0.05 for small effects (f=0.12). This sample includes a buffer to account for attrition.
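The stated sample size is consistent with a standard power calculation for a 1-degree-of-freedom main effect in the 2x2 design, where the noncentrality parameter is λ = f²·N. A minimal pure-Python sketch using the normal approximation λ = (z₁₋α/₂ + z_power)², with a function name of our own choosing:

```python
from math import ceil
from statistics import NormalDist

def required_n(f=0.12, alpha=0.05, power=0.95):
    """Total N for a 1-df effect of size f (Cohen) at the given alpha/power.

    Uses the normal approximation: the required noncentrality is
    lambda = (z_{1-alpha/2} + z_{power})^2, and lambda = f^2 * N.
    """
    z = NormalDist().inv_cdf
    lam = (z(1 - alpha / 2) + z(power)) ** 2
    return ceil(lam / f ** 2)
```

This gives a required N of about 903 before any buffer, so the pre-registered 949 corresponds to roughly a 5% attrition allowance.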

8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)

We may run exploratory analyses investigating the effects of demographics (such as education, numeracy, and political affiliation), as different sections of the population may engage differently with the various treatments, as well as the effects of other relevant background attitudes and characteristics, such as people’s prosociality levels, efficacy perceptions, actively open-minded thinking, susceptibility to misinformation, or beliefs about whether the coronavirus pandemic is a hoax.