'Rating Weirdness of Previous Uncertainty Effect Paradigms' (AsPredicted #2150)
Author(s): Robert Mislavsky (University of Pennsylvania) - mislavsky@jhu.edu; Uri Simonsohn (University of Pennsylvania) - urisohn@gmail.com
Pre-registered on 11/22/2016 08:15 PM (PT)
1) Have any data been collected for this study already? No, no data have been collected for this study yet
2) What's the main question being asked or hypothesis being tested in this study? Past uncertainty effect studies that have attempted to manipulate only risk across conditions have actually also manipulated transaction weirdness.
3) Describe the key dependent variable(s) specifying how they will be measured. “How weird is it to buy a gift [card/certificate] like this?”
(1 = It is not weird at all; 4 = It is extremely weird)
4) How many and which conditions will participants be assigned to? Six between-subjects conditions.
Participants will read scenarios used in prior uncertainty effect studies and rate how weird they are.
Condition 1: Baseline (not weird, not risky) condition
Condition 2: 50/50 lottery scenario from Gneezy, List, & Wu 2006 (Pricing Task; p. 1301)
Conditions 3 & 4: Certain and uncertain coin flip scenarios from Yang, Vosgerau, & Loewenstein 2013 (Experiment 4; p. 737)
Conditions 5 & 6: Certain and uncertain box scenarios from our prior studies (Study 2: “One-option Not Risky/Weird” and “Risky/Weird” scenarios available at tinyurl.com/weirdosf)
For Condition 1 we will use three different wordings (randomized across participants) to match the slight variations in the riskless options across the three prior studies represented in Conditions 2-6.
Conditions 2-6 use verbatim stimuli from the prior studies.
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. T-tests comparing the mean of Condition 1 against the mean of each of the other five conditions. We predict that Condition 1 will be rated as less weird than every other condition.
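A minimal sketch of these comparisons, assuming a data file and column names ("weirdness_ratings.csv", "condition", "weirdness") that are hypothetical and not specified in the pre-registration; the pre-registration also does not specify one- vs. two-sided tests, so two-sided tests are shown.

# Illustrative sketch only, not the authors' analysis code.
import pandas as pd
from scipy import stats

df = pd.read_csv("weirdness_ratings.csv")  # hypothetical data file

baseline = df.loc[df["condition"] == 1, "weirdness"]
for cond in range(2, 7):
    other = df.loc[df["condition"] == cond, "weirdness"]
    t, p = stats.ttest_ind(baseline, other)  # independent-samples t-test, two-sided
    print(f"Condition 1 vs. Condition {cond}: "
          f"M1 = {baseline.mean():.2f}, M{cond} = {other.mean():.2f}, "
          f"t = {t:.2f}, p = {p:.4f}")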
6) Any secondary analyses? To describe and explore the data, we will compare the cumulative distribution functions (CDFs) of responses across conditions.
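A sketch of one way the descriptive CDF comparison could be computed, under the same hypothetical file and column names as above; the ratings are on the 1-4 scale described in question 3.

# Illustrative sketch only, not the authors' analysis code: empirical CDFs of
# the 1-4 weirdness ratings, computed per condition for descriptive comparison.
import numpy as np
import pandas as pd

df = pd.read_csv("weirdness_ratings.csv")  # hypothetical data file
levels = [1, 2, 3, 4]  # 1 = not weird at all, 4 = extremely weird

for cond, group in df.groupby("condition"):
    ratings = group["weirdness"].to_numpy()
    ecdf = [np.mean(ratings <= x) for x in levels]  # proportion at or below each level
    summary = ", ".join(f"P(r <= {x}) = {p:.2f}" for x, p in zip(levels, ecdf))
    print(f"Condition {cond}: {summary}")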
7) How many observations will be collected or what will determine sample size? No need to justify decision, but be precise about exactly how the number will be determined. We will aim for 600 participants in total (100 per cell). We will include an attention check prior to randomization and end the survey for those who do not answer it correctly. Participants who fail the attention check will not count toward the recruitment goal (i.e., we will recruit until 600 participants have passed the attention check). We will also prevent workers who took a similar prior survey (identified by MTurk ID) from participating in the experiment.
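A minimal sketch of the screening logic described above, assuming a hypothetical raw data export with columns "mturk_id" and a boolean "passed_attention_check", and a hypothetical file of prior-survey worker IDs; none of these names come from the pre-registration.

# Illustrative sketch only: applies the pre-registered screening rules to a
# hypothetical raw export; file and column names are assumptions.
import pandas as pd

TARGET_N = 600  # 100 per cell x 6 cells

raw = pd.read_csv("raw_responses.csv")                            # hypothetical export
prior_ids = set(pd.read_csv("prior_study_ids.csv")["mturk_id"])   # workers from similar prior surveys

valid = raw[raw["passed_attention_check"]]        # attention-check failures do not count
valid = valid[~valid["mturk_id"].isin(prior_ids)] # repeat workers are excluded from participating

print(f"Valid participants: {len(valid)} / {TARGET_N}")
if len(valid) < TARGET_N:
    print("Continue recruiting until the target is reached.")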
8) Anything else you would like to pre-register? (e.g., data exclusions, variables collected for exploratory purposes, unusual analyses planned?) We will collect age and gender to describe the sample.