'The Evolving Politics of using AI-based Algorithms in Public Policy' (AsPredicted #121928)
Author(s): This pre-registration is currently anonymous to enable blind peer review. It has 2 authors.
Pre-registered on 02/14/2023 04:06 AM (PT)
1) Have any data been collected for this study already? No, no data have been collected for this study yet.
2) What's the main question being asked or hypothesis being tested in this study? Our theory focuses on two key factors that we assume play a central role in preference formation for new technologies: ability, i.e., all background information relevant to evaluating the costs, benefits, and risks of adopting the technology, and motivation, i.e., the willingness to learn and process new information when forming judgments about the technology. Based on these factors, we offer several hypotheses related to two of the questions we address in this paper.
1. How does exposure to information about the technology and its implications affect people's preferences regarding the use of ADSs in public policy?
H1a. Directional updating: People update their preferences in the direction implied by the information, regardless of their prior stances.
H1b. No updating: People will not change their preferences regarding the use of AI in public policy as a result of exposure to new information about the technology and its implications.
H1c. Motivated updating: People will update their preferences in the direction of the information only if it matches their prior beliefs.
2. How does experience with algorithmic decision-making affect people's preferences regarding the use of ADSs in public policy?
H2a. Updating by exposure: Personal experience with ADSs will increase support for their use in public policy.
H2b. Updating by experience: The effect of personal experience with ADSs on AI-related attitudes depends on the specific interaction with the ADS.
H2c. Directional updating by exposure: The effect of information about AI on attitudes will be moderated by experience with ADSs.
3) Describe the key dependent variable(s) specifying how they will be measured. The key dependent variable in this study is individuals' attitudes toward the use of algorithmic decision-making systems (ADSs) in public policy. Our primary measures will be PCA scores of the eight items asked in Wave 3 and of the four items asked in Wave 2.
These attitudes will be measured using two matrices in the post-treatment survey (Wave 3). The matrices will ask participants to indicate their level of support for, or opposition to, having an ADS, rather than a human, make various policy decisions: granting parole, distributing food stamps, choosing police patrol locations, placing street lighting, issuing restraining orders, deciding immigrant visa applications, directing police enforcement, and assigning homeless shelter placements. These policy decisions were chosen along two theoretically relevant dimensions, the objective of the decision (assistance or sanctioning) and the population directly affected by it (individuals or collectives), to ensure that the results are not sensitive to a specific context, item, or wording.
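For illustration only (not part of the registered protocol), the first-component PCA score could be computed along the following lines; the file and item names below are placeholders:

    import pandas as pd
    from sklearn.decomposition import PCA

    # Hypothetical data: eight Wave 3 attitude items named ads_item_1 ... ads_item_8.
    df = pd.read_csv("wave3.csv")  # placeholder file name
    items = [f"ads_item_{k}" for k in range(1, 9)]

    # Standardize the items and project complete cases onto the first principal component.
    z = ((df[items] - df[items].mean()) / df[items].std()).dropna()
    df.loc[z.index, "pca_w3"] = PCA(n_components=1).fit_transform(z)[:, 0]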
4) How many and which conditions will participants be assigned to? Participants will be assigned to one of 16 conditions in a factorial design. The main intervention evaluates the impact of personal experience with an ADS by varying the decision-maker who hires and assigns workers to tasks: a computer algorithm or a human HR worker.
The second intervention distinguishes between a positive and a negative experience with the decision-maker by varying the description of the assigned task: either a high-paying, higher-status task or a low-paying, lower-status task. We will exclude participants who prefer the lower-status task, resulting in two treatment groups: those with a positive experience, assigned to the desired high-paying, higher-status task, and those with a negative experience, assigned to the undesired low-paying, lower-status task.
The third intervention assesses the impact of information on attitudes by varying the content of the tasks that the workers perform: positive predictions about AI, negative predictions about AI, positive predictions about the future of fashion, or negative predictions about the future of fashion.
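For concreteness, the 16 cells correspond to the full cross of the three interventions, with the information treatment decomposed into topic and valence. A minimal assignment sketch, with placeholder labels:

    import itertools
    import random

    decision_maker = ["algorithm", "human_hr"]   # who hires and assigns tasks
    task_experience = ["positive", "negative"]   # desired vs. undesired task
    info_topic = ["ai", "fashion"]               # AI predictions vs. placebo topic
    info_valence = ["positive", "negative"]      # optimistic vs. pessimistic predictions

    cells = list(itertools.product(decision_maker, task_experience, info_topic, info_valence))
    assert len(cells) == 16

    rng = random.Random(2023)

    def assign_condition():
        """Uniform random assignment to one of the 16 cells."""
        return rng.choice(cells)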
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. Our main target parameter is the intent-to-treat (ITT) effect. We will estimate the ATE using ordinary least squares with adjustment for the following pre-treatment covariates, to produce a more precise estimate of the treatment effect: age, gender, education, race, party ID, and digital literacy. Our primary estimator treats only the post-treatment outcomes as the dependent variables but controls for the pre-treatment outcome (as measured in the first-wave survey). Specifically, we will focus on two dependent variables: (1) the PCA score of the eight items asked at Wave 3 and (2) the PCA score of the four items asked at Wave 2.
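As a sketch, assuming Python/statsmodels and placeholder variable names (the pre-registration does not specify software), the primary covariate-adjusted estimator could be fit as follows, continuing the illustrative data frame above:

    import statsmodels.formula.api as smf

    # ITT/ATE estimate: Wave 3 PCA score regressed on the treatment indicators,
    # adjusting for the pre-treatment outcome (pca_w1) and the listed covariates.
    fit = smf.ols(
        "pca_w3 ~ C(treatment) + pca_w1 + age + C(gender) + C(education)"
        " + C(race) + C(party_id) + digital_literacy",
        data=df,
    ).fit(cov_type="HC2")  # robust standard errors: an assumption, not pre-registered
    print(fit.summary())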
To test the directional updating and no-updating hypotheses (H1a, H1b), we will limit the sample to respondents who were assigned to the Human DM treatment. The regression model will be specified as Y_i = α + β1 Information_i + β2 Valence_i + β3 (Information_i × Valence_i) + β4 PCA0_i + β5 Experience_i + γ X_i + ε_i.
To test the motivated updating hypothesis (H1c), we will run the same analysis separately for "proponents" and "opponents," defined by whether the pre-treatment PCA score falls above or below the median.
To test the updating-by-exposure and updating-by-experience hypotheses (H2a, H2b), we will analyze the sample of respondents assigned to the placebo information treatments with a regression model specified as Y_i = α + β1 DecisionMaker_i + β2 Experience_i + β3 PCA0_i + β4 Valence_i + γ X_i + ε_i.
To test the directional-updating-by-exposure hypothesis (H2c), we will analyze the sample of respondents assigned to the AI information treatments with a regression model specified as Y_i = α + β1 DecisionMaker_i + β2 Valence_i + β3 (DecisionMaker_i × Experience_i) + β4 PCA0_i + γ X_i + ε_i.
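The four subsample models above could be sketched as follows, again with placeholder column names and the same illustrative data frame:

    import statsmodels.formula.api as smf

    # Placeholder columns: information ("ai"/"fashion"), valence of the information
    # ("positive"/"negative"), experience with the task ("positive"/"negative"),
    # and decision_maker ("algorithm"/"human_hr").
    covs = "pca_w1 + age + C(gender) + C(education) + C(race) + C(party_id) + digital_literacy"

    # H1a/H1b: Human-DM subsample, Information x Valence interaction.
    human_dm = df[df["decision_maker"] == "human_hr"]
    m1 = smf.ols(f"pca_w3 ~ information * valence + experience + {covs}", data=human_dm).fit()

    # H1c: the same model run separately above/below the median pre-treatment score.
    med = human_dm["pca_w1"].median()
    m1_pro = smf.ols(f"pca_w3 ~ information * valence + experience + {covs}",
                     data=human_dm[human_dm["pca_w1"] > med]).fit()
    m1_con = smf.ols(f"pca_w3 ~ information * valence + experience + {covs}",
                     data=human_dm[human_dm["pca_w1"] <= med]).fit()

    # H2a/H2b: placebo-information (fashion) subsample.
    placebo = df[df["information"] == "fashion"]
    m2 = smf.ols(f"pca_w3 ~ decision_maker + experience + valence + {covs}", data=placebo).fit()

    # H2c: AI-information subsample with the Decision-maker x Experience interaction.
    ai_info = df[df["information"] == "ai"]
    m3 = smf.ols(f"pca_w3 ~ decision_maker + valence + decision_maker:experience + {covs}",
                 data=ai_info).fit()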
6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations. The target population is MTurk workers, that is, individuals paid to complete a series of Human Intelligence Tasks (HITs).
Participants who do not correctly answer the attention test at the beginning of the pre-treatment survey will be excluded from the study.
We will exclude the top 1% of the most active workers on MTurk, who complete around 21% of the daily HITs on the platform, to minimize the potential impact of familiarity with AI on our results and to enhance the external validity of our findings to other labor markets.
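A sketch of both exclusion rules, assuming hypothetical column names; note that the top-1% cutoff would in practice come from platform-level activity statistics rather than a within-sample quantile:

    import pandas as pd

    df = pd.read_csv("wave1.csv")  # hypothetical pre-treatment data

    # Exclude participants who fail the attention test in the pre-treatment survey.
    df = df[df["attention_check_passed"] == 1]

    # Exclude the top 1% most active workers, approximated here by a
    # within-sample quantile of completed HITs.
    cutoff = df["hits_completed"].quantile(0.99)
    df = df[df["hits_completed"] <= cutoff]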
7) How many observations will be collected or what will determine sample size? No need to justify decision, but be precise about exactly how the number will be determined. Assuming power of 0.8 and an alpha level of 0.05 (two-tailed), to detect small effects of 0.2 and to assess interaction hypotheses that require smaller comparison groups, power calculations suggest a sample of 40 participants in each of the 16 cells, for a total sample of at least 640 participants. To account for a 40% attrition rate in each of waves 2 and 3 and to ensure a final sample of 640 participants, we need to recruit at least 1,300 participants.
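For reference, a generic two-group version of this calculation can be reproduced with statsmodels; the n it returns applies to a simple pooled two-arm contrast and ignores the precision gains from covariate adjustment and cell pooling in the actual design, so it is illustrative only:

    from statsmodels.stats.power import TTestIndPower

    # Generic two-group calculation at the registered parameters:
    # d = 0.2, alpha = .05 (two-tailed), power = .8.
    n_per_arm = TTestIndPower().solve_power(effect_size=0.2, alpha=0.05,
                                            power=0.8, alternative="two-sided")
    print(round(n_per_arm))  # about 394 per pooled arm, before attrition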
8) Anything else you would like to pre-register? (e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?) For the secondary analyses, we define the following estimators: (1) we will estimate effects on the change in attitudes, with the dependent variable defined as the difference between the post- and pre-treatment outcomes; and (2) we will assess the treatment effects using post-treatment outcomes only (i.e., a between-subjects analysis).
We will use responses to a general question on preferences toward AI as a third outcome. The question asks, "Do you think the shift from human to algorithmic decisions will lead to an improvement or deterioration in public services?" Answers will be measured on a five-point scale, with higher values indicating an expectation of greater improvement.
Bundle
This pre-registration is part of a bundle. PDFs for each pre-registration in the bundle include links to all other pre-registrations in the bundle. The bundle includes: