#101,432 | AsPredicted

'The impact of authorship and bias information on the credibility of an AI-text'
(AsPredicted #101,432)


Author(s)
This pre-registration is currently anonymous to enable blind peer-review.
It has 3 authors.
Pre-registered on
2022/06/30 04:37 (PT)

1) Have any data been collected for this study already?
No, no data have been collected for this study yet.

2) What's the main question being asked or hypothesis being tested in this study?
This study has a 3x2 between-subjects design with two experimental factors: bias information (basic AI information vs. AI bias information vs. AI + human bias information) and labeled authorship (AI authorship vs. human co-authorship). Participants will read a science journalism article about fat-shaming and will rate it on perceived message credibility, perceived source credibility, and perceived intelligence of the author.
H1: There will be a main effect of bias information with higher perceived message credibility in the basic AI information conditions than in the AI bias information conditions.
H2: There will be a main effect of bias information with higher perceived source credibility in the basic AI information conditions than in the AI bias information conditions.
H3: There will be an interaction effect of bias information and labeled authorship on perceived message credibility. In particular, there will be higher perceived message credibility in the AI bias information condition when the labeled authorship is a human co-authorship compared to a mere AI authorship.
H4: There will be an interaction effect of bias information and labeled authorship on perceived source credibility. In particular, there will be higher perceived source credibility in the AI bias information condition when the labeled authorship is a human co-authorship compared to a mere AI authorship.
H5: There will be a main effect of labeled authorship with higher perceived intelligence of the author in the human co-authorship condition than in the AI authorship condition.

3) Describe the key dependent variable(s) specifying how they will be measured.
- Perceived message credibility: measured on 27 items using the Message Credibility Scale (Appelman & Sundar, 2016; Sundar, 1999; 7-point Likert-type scale)
- Perceived source credibility: measured on five bipolar items (Flanagin & Metzger, 2000, 2007; 7-point scale)
- Perceived intelligence of the author(s): five bipolar items of the Godspeed Instrument (Bartneck, Kulić, Croft & Zoghbi, 2009; 5-point scale)

4) How many and which conditions will participants be assigned to?
Participants will be randomly assigned to one of six conditions resulting from the two between-subjects factors:
Bias information:
Participants will receive an AI info text:
- Version 1: Basic AI information:
Participants in this condition will receive basic AI information about automated text generation.
- Version 2: AI bias information:
Participants in this condition will receive basic AI information plus an additional paragraph about the risks of automated text generation in terms of algorithmic bias.
- Version 3: AI + human bias information:
Participants in this condition will receive basic AI information and algorithmic bias information plus an additional paragraph about the flaws of human written articles in terms of human biases.
Labeled authorship:
- Version 1: AI authorship:
Participants will be told that the article had been written via automated text generation.
- Version 2: Human co-authorship:
Participants will be told that the article had been written via automated text generation and that a human had checked and revised the article.

5) Specify exactly which analyses you will conduct to examine the main question/hypothesis.
To examine H1–H4, we will conduct separate 3x2 ANOVAs for message credibility and source credibility with the between-subjects factors bias information and labeled authorship. To examine H5, we will conduct an independent samples t-test comparing perceived intelligence between the two labeled authorship groups.

6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations.
We will exclude participants who incorrectly answer the manipulation check for labeled authorship, the attention check for bias information, or the attention check regarding the content of the article.

7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.

A power analysis for a small effect size of f = .15, an alpha error probability of .05, and a power of .80 yielded a total sample size of 432 participants. In case we have to exclude participants due to incorrect answers to the manipulation, attention, or content checks, we will collect observations from 512 participants.
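The stated total N can be approximated in code. This sketch assumes the power analysis targets the three-level bias-information main effect (numerator df = 2) of a between-subjects F test, which is one plausible reading of the parameters given above; exact G*Power output may differ slightly because it rounds up to whole cell sizes.

```python
import math
from statsmodels.stats.power import FTestAnovaPower

# Solve for total N for an ANOVA effect with k_groups = 3
# (the three bias-information levels), f = .15, alpha = .05, power = .80.
# Assumption: the pre-registered 432 refers to this main effect.
analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.15, alpha=0.05,
                               power=0.80, k_groups=3)
print(math.ceil(n_total))
```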

8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)

For exploratory purposes, we will include pre- and post-attitudes toward the topic covered by the article, based on the fat phobia scale (short form; Bacon, Scheltema, & Robinson, 2001; 14 bipolar items measured on 5-point scales), as a covariate in the analysis.
We will measure the perceived neutrality of the text with one 5-point bipolar item from 1 = absolutely neutral to 5 = absolutely evaluative.
We will measure participants' intentions to recommend the article to a friend or a relative and to read such an article again, each on a single item (5-point Likert-type scale).
We will measure the perceived anthropomorphism of the author(s) on five bipolar items (Bartneck et al., 2009; 5-point scale).
RQ1: Are there any main effects of labeled authorship on perceived message credibility, perceived source credibility, intentions, or anthropomorphism?
RQ2: Is there any main effect of the AI + human bias information condition on perceived message and source credibility?
RQ3: Is there a main effect of bias information or an interaction effect on perceived intelligence?

Version of AsPredicted Questions: 2.00