'Do social robots deserve fair treatment?'
(AsPredicted #36534)
Author(s)
This pre-registration is currently anonymous to enable blind peer-review.
It has 4 authors.
Pre-registered on
02/28/2020 08:13 PM (PT)
1) Have any data been collected for this study already? No, no data have been collected for this study yet.
2) What's the main question being asked or hypothesis being tested in this study? We plan to investigate whether children think that social robots, like human children, are worthy of fair treatment. We hypothesize that children will consider robots less worthy of fair treatment than human children. We also hypothesize that younger children (below the age of 6) may be more likely than older children to regard social robots as worthy of fair treatment, based on past research suggesting that children initially overgeneralize fairness concerns and become more discerning about fairness-related rules as they mature (Schmidt, Svetlova, Johe, & Tomasello, 2016).
3) Describe the key dependent variable(s) specifying how they will be measured. We will measure, from children's perspective, a social robot's entitlement to fair treatment via a resource allocation decision. Using a method similar to Shaw and Olson (2012), we will examine children's expectations of fair treatment of robots by asking for third-party evaluations of a resource allocation made by an experimenter. Specifically, we will explore children's evaluations of resource allocations in two different conditions: between a human child and a robot (robot-child condition), or between two human children (child-child condition). In the robot-child condition, participants will be presented with a background story in which both the robot and the child help clean up and the experimenter wants to reward both characters. However, only one sticker is available, and the experimenter says s/he can either "give it to the child or throw it away". The experimenter thinks and then chooses to give it to the child. The child-child condition is identical, except that the non-recipient character is another child instead of a robot.
The participant is then asked:
DV1. How mad do you think the non-recipient (the robot in the robot-child condition; a child in the child-child condition) will be about this?
- Responses will be given on a 4-point Likert scale using circles (four circles, from large to small).
DV2. How fair do you think that is?
- Possible answers are "very unfair", "a little unfair", "a little fair", and "very fair", and will be coded on a Likert scale from 1 ("very unfair") to 4 ("very fair").
DV3. Should I take this from the kid and throw it away instead?
- Responses will be coded as a forced choice between “no (don’t take it and throw it away)” (0) and “yes (take it and throw it away)” (1).
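To make the coding concrete, here is a minimal sketch of how responses on the three DVs could be stored; the column names, variable layout, and example values are illustrative assumptions, not part of the pre-registration.

```python
# Minimal sketch of the DV coding scheme described above; the column names
# and example rows are illustrative assumptions, not the authors' data.
import pandas as pd

example = pd.DataFrame({
    "participant": [1, 2],
    "condition": ["robot-child", "child-child"],  # between-subjects condition
    "age": [4.8, 7.2],                            # age in years (continuous)
    "dv1_madness": [3, 4],                        # 4-point circle (Likert) scale
    "dv2_fairness": [2, 1],                       # 1 = "very unfair" ... 4 = "very fair"
    "dv3_throw_away": [0, 1],                     # 0 = don't throw it away, 1 = throw it away
})
print(example)
```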
4) How many and which conditions will participants be assigned to? Each participant will be asked about one of two resource allocation dilemma conditions: between a human child and a robot (robot-child), or between two human children (child-child). Each condition consists of two sequences counterbalancing the order in which the pictures of the two characters (recipient and non-recipient) are placed in front of the participant.
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. We will conduct a linear regression to analyze participants' anger attributions to the non-recipient. We will set madness ratings as the dependent variable and Condition (robot-child or child-child) and Age (continuous) as independent variables. We expect that participants will attribute more anger to a child non-recipient than to a robot non-recipient.
We will conduct a second linear regression to analyze participants' fairness expectations. We will set fairness evaluations as the dependent variable and again set Condition (robot-child or child-child) and Age (continuous) as independent variables. We expect that participants will evaluate the allocation as less fair in the child-child condition than in the robot-child condition. Further, we expect that children's differential fairness expectations for robot versus child non-recipients will increase with age.
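As an illustrative sketch only (not the authors' analysis code), these two regressions could be run with Python's statsmodels formula interface, assuming a hypothetical responses.csv laid out like the coding sketch above (columns dv1_madness, dv2_fairness, condition, age).

```python
# Illustrative sketch of the two planned linear regressions, assuming a
# hypothetical responses.csv with the columns from the coding sketch above.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("responses.csv")  # hypothetical file name

# DV1: anger ("madness") ratings predicted by condition and continuous age
anger_model = smf.ols("dv1_madness ~ C(condition) + age", data=data).fit()
print(anger_model.summary())

# DV2: fairness evaluations predicted by the same predictors
fairness_model = smf.ols("dv2_fairness ~ C(condition) + age", data=data).fit()
print(fairness_model.summary())
```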
We will further correlate anger attributions with fairness evaluations to test whether the two DVs are related. We expect a positive correlation, such that the more children expect the non-recipient to be mad about the distribution outcome, the more unfair they will rate the distribution itself.
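A sketch of this correlation under the same hypothetical data layout; a Pearson correlation is assumed here, since the pre-registration does not specify the correlation type.

```python
# Illustrative sketch of the DV1-DV2 correlation; a Pearson correlation is
# assumed, and the file and column names are hypothetical.
import pandas as pd
from scipy import stats

data = pd.read_csv("responses.csv")  # hypothetical file name
r, p = stats.pearsonr(data["dv1_madness"], data["dv2_fairness"])
print(f"anger-fairness correlation: r = {r:.2f}, p = {p:.3f}")
```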
Finally, we will conduct a binomial logistic regression to analyze participants' responses to whether the resource should be thrown away or not. We will set responses (coded as a forced choice between "no (don't take it and throw it away)" (0) and "yes (take it and throw it away)" (1)) as the dependent variable and Condition (robot-child or child-child) and Age (continuous) as independent variables. We expect higher rates of choosing to "throw the resource away" in the child-child condition than in the robot-child condition. Further, we will analyze how responses differ from chance within our pre-specified age bins (4- to 5-, 6- to 7-, and 8- to 9-year-olds) using binomial sign tests.
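An illustrative sketch of the logistic regression and the within-bin binomial sign tests under the same hypothetical data layout; the file name, column names, and the mapping of continuous age onto the pre-specified bins are assumptions, not the authors' code.

```python
# Illustrative sketch of the binomial logistic regression and the within-bin
# binomial sign tests; file name, column names, and age-bin cutoffs are assumed.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

data = pd.read_csv("responses.csv")  # hypothetical file name

# Logistic regression: throw-away choice (0/1) by condition and continuous age
throw_model = smf.logit("dv3_throw_away ~ C(condition) + age", data=data).fit()
print(throw_model.summary())

# Binomial sign tests against chance (p = .5) within each pre-specified age bin
bins = {"4-5": (4, 6), "6-7": (6, 8), "8-9": (8, 10)}
for label, (low, high) in bins.items():
    sub = data[(data["age"] >= low) & (data["age"] < high)]
    test = stats.binomtest(int(sub["dv3_throw_away"].sum()), n=len(sub), p=0.5)
    print(f"{label} year-olds: p = {test.pvalue:.3f}")
```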
6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations. Children who do not finish our robot game (e.g., refusing to stay for the length of the task) will be excluded. Otherwise, we do not plan to exclude any participants.
7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined. We will run children between ages 4-9 in this study, across our 2 conditions. Our planned age bins are 4-5, 6-7, and 8-9 years old. We will run 30 children in each age group and condition, for a total of 120 children.
8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?) We have piloted this task prior to beginning formal data collection. No pilot data will be included in our final sample.