#74475 | AsPredicted

'Responsiveness, Response Time and Agent Type (chatbot03)'


Author(s)
Stefanie Ritz (Leibniz-Institut für Wissensmedien, Tübingen) - s.klein@iwm-tuebingen.de
Sonja Utz (Leibniz-Institut für Wissensmedien, Tübingen) - s.utz@iwm-tuebingen.de
Pre-registered on
2021/09/13 02:25 (PT)

1) Have any data been collected for this study already?
No, no data have been collected for this study yet.

2) What's the main question being asked or hypothesis being tested in this study?
To what extent do responsive communicative behavior, agent type, and agent response time influence the acceptance and the perceived performance of chat interactions?

As acceptance is a heterogeneous construct and this study is part of a larger research network, we investigate different facets: attitude toward the interaction (a), intention to use (b), perceived enjoyment (c), likeability (d), perceived intelligence (e), warmth (f), and competence (g). Perceived performance is conceptualized as satisfaction with the interaction.

Our previous study (https://aspredicted.org/TAK_MII) found higher acceptance and perceived performance ratings in the responsive (vs. non-responsive) conditions. Results for the effects of agent type on the outcomes were inconclusive.
Based on these findings and our theoretical considerations (social response theory (Nass & Moon, 2000), MAIN model (Sundar, 2008)), we propose the following hypotheses:
H1: Responsive (vs. non-responsive) communicative behavior has a positive effect on the acceptance outcomes (a-g) and perceived performance (μresponsive > μnon-responsive).
H2: The positive effects of responsive communicative behavior on the acceptance outcomes (a-g) and perceived performance are mediated by social presence, perceived dialog, and feeling heard.
H3: A human agent (vs. chatbot) has a positive effect on the acceptance outcomes (a-g) and perceived performance (μhuman > μchatbot).

Findings on response time are mixed: Gnewuch et al. (2018) found that dynamic (vs. instant) response times positively impact users' satisfaction with a chatbot interaction, whereas Holtgraves et al. (2007) found that short (vs. longer) response times lead to higher perceived responsiveness and conscientiousness. Given these ambiguous findings, we propose the following research question:
RQ1: To what extent does agent response time influence the acceptance outcomes (a-g) and perceived performance?

Responsive and dynamically responding chatbots could cause positive expectation violations, positively influencing the outcomes (Burgoon et al., 1989; Burgoon & Hale, 1988). However, they could also be perceived as unnatural (a negative expectation violation), negatively influencing the outcomes. We are interested in exploring whether and how the three independent variables interact to affect the outcomes:
RQ2a: Is there an interaction effect between agent type and responsiveness on the acceptance outcomes (a-g) and perceived performance?
RQ2b: Is there an interaction effect between agent type and response time on the acceptance outcomes (a-g) and perceived performance?
RQ2c: Is there an interaction effect between responsiveness and response time on the acceptance outcomes (a-g) and perceived performance?
RQ2d: Is there an interaction effect between agent type, responsiveness, and response time on the acceptance outcomes (a-g) and perceived performance?

3) Describe the key dependent variable(s) specifying how they will be measured.
Dependent variables:
Attitude, adapted from Diers (2020), Schlohmann (2012)
Use intention, adapted from Diers (2020), Schlohmann (2012)
Perceived joy, adapted from Diers (2020), Schlohmann (2012)
Likeability and perceived intelligence, adapted from Bartneck et al. (2009), translated to German
Warmth and competence, adapted from Fiske (2018), translated to German
Satisfaction, adapted from de Ruyter & Wetzels (2000), Lagace et al. (1991), translated to German
Mediator variables:
Social presence, adapted from Gefen et al. (2004), translated to German
Perceived dialog, adapted from Sundar et al. (2016), translated to German
Feeling heard, adapted from Roos et al. (in preparation), translated to German
Except for likeability and perceived intelligence, which will be measured on a 7-point semantic differential scale, we assess all items on Likert-type rating scales from one (absolutely disagree) to seven (absolutely agree).

4) How many and which conditions will participants be assigned to?
The study has a 2x2x2 between-subjects design. Participants will be randomly assigned to one of eight conditions, which result from the three manipulated independent variables: agent type (chatbot vs. human), responsiveness (presence vs. absence of responsive verbal cues), and response time (instant vs. dynamic). Participants will follow an animated chat conversation between a study advisor of a fictitious university and a person interested in studying at the university.

5) Specify exactly which analyses you will conduct to examine the main question/hypothesis.
We will conduct all statistical analyses in R version 4.1.1.
Summary statistics, bivariate correlations, and Cronbach's alpha values for all study variables will be calculated for descriptive purposes. To test H1 and H3 and to answer RQ1, we will compute a total of 24 independent t-tests (three independent variables x eight dependent variables). We will test the mediation hypothesis (H2) by computing one structural equation model per outcome variable, with responsiveness as the predictor and social presence, perceived dialog, and feeling heard as mediating variables, using the R lavaan package (version 0.6-9). To answer RQ2, we will carry out a three-way ANOVA with responsiveness, agent type, and response time as independent variables for each of the eight dependent variables, followed by post hoc pairwise comparisons. p-values (H1, H3, RQ1, RQ2) will be adjusted using the Holm (1979) correction.
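The planned analyses could be sketched in R as follows; all data-frame, variable, and label names below are illustrative assumptions, not the final analysis script, and "satisfaction" stands in for each of the eight outcomes:

```r
library(lavaan)  # version 0.6-9

# H1/H3/RQ1: one independent t-test per independent variable and outcome,
# e.g. responsiveness on satisfaction (column names are assumptions):
t_res <- t.test(satisfaction ~ responsiveness, data = d)

# H2: one mediation SEM per outcome with three parallel mediators
med_model <- '
  social_presence  ~ a1 * responsiveness
  perceived_dialog ~ a2 * responsiveness
  feeling_heard    ~ a3 * responsiveness
  satisfaction     ~ b1 * social_presence + b2 * perceived_dialog +
                     b3 * feeling_heard + c * responsiveness
  # indirect effects via each mediator
  ind_sp := a1 * b1
  ind_pd := a2 * b2
  ind_fh := a3 * b3
'
fit <- sem(med_model, data = d)

# RQ2: three-way ANOVA per outcome, with all interactions
aov_fit <- aov(satisfaction ~ responsiveness * agent_type * response_time,
               data = d)

# Holm correction across the collected focal p-values
p_adj <- p.adjust(p_values, method = "holm")
```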

6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations.
We will exclude participants who fail the agent type manipulation check from the analysis. The manipulation check reads: "If you think back to the chat you just saw: Who was Marc talking to?" with the response options "the student advisor Sophie," "the professor Sophie," "the chatbot Sophie," "the doctor Sophie," and "don't know."
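In R, this exclusion rule amounts to a simple filter; the column names below are illustrative assumptions:

```r
# Keep only participants whose answer to the agent-type manipulation check
# matches the condition they actually saw (column names are assumptions).
d <- subset(d, mc_agent_answer == mc_agent_correct)
```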

7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.

We aim to recruit 400 participants (after exclusions) via the crowdsourcing platform Clickworker, which allows us to detect a significant small to medium-sized effect (power = 80%, alpha = .05).
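A sample size in this range can be checked with base R's power.t.test; the effect size d = 0.3 below is an illustrative assumption for a small to medium effect, not a value stated in this pre-registration:

```r
# Per-group n for a two-sided independent-samples t-test at the stated
# power and alpha; with sd = 1, delta equals Cohen's d (d = 0.3 assumed).
power.t.test(delta = 0.3, sd = 1, sig.level = .05, power = .80,
             type = "two.sample", alternative = "two.sided")
# yields roughly 175 participants per group, i.e. about 350 in total for
# one two-group comparison; 400 participants give 200 per side of each
# two-level factor when collapsing across the 2x2x2 design.
```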

8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)

We survey gender, age, and whether respondents have previously studied at a higher education institution to characterize our sample. Three manipulation checks and one attention check are included to help ensure reliable answers. For exploratory analyses, we measure chatbot knowledge and frequency of use, the quality of previous chatbot experiences, perceived relational commitment (Kelleher, 2009), and perceived moral agency of the chat agent (Banks, 2018). As social presence, perceived dialog, and feeling heard might also mediate the response time effects on the outcomes, we will conduct exploratory mediation analyses using response time as the independent variable. The data might be further analyzed for exploratory purposes, e.g., with structural equation modeling.

Version of AsPredicted Questions: 2.00