'Within-Subjects Weirdness Ranking of Uncertainty Effect Paradigms' (AsPredicted #2239)
Author(s): Robert Mislavsky (University of Pennsylvania) - mislavsky@jhu.edu; Uri Simonsohn (University of Pennsylvania) - urisohn@gmail.com
Pre-registered on 11/28/2016 08:59 AM (PT)
1) Have any data been collected for this study already? No, no data have been collected for this study yet
2) What's the main question being asked or hypothesis being tested in this study? Past uncertainty effect studies that have attempted to manipulate only risk across conditions have actually also manipulated transaction weirdness.
3) Describe the key dependent variable(s) specifying how they will be measured. Within-subjects ranking of how weird certain transactions are
4) How many and which conditions will participants be assigned to? Only one condition: every participant evaluates 6 transactions used in prior uncertainty effect studies and ranks how weird the transactions are.
Transaction 1: Baseline (not weird, not risky) condition
Transaction 2: 50/50 lottery scenario from Gneezy, List, & Wu 2006 (Pricing Task; p. 1301)
Transactions 3 & 4: Certain and uncertain coin flip scenarios from Yang, Vosgerau, & Loewenstein 2013 (Experiment 4; p. 737)
Transactions 5 & 6: Certain and uncertain box scenarios from our prior studies (Study 2: “One-option Not Risky/Weird” and “Risky/Weird” scenarios available at tinyurl.com/weirdosf)
For Transaction 1 we will use three different wordings (randomized across participants) to accommodate slight variations in the riskless options across the three prior studies represented in Transactions 2-6. Transactions 2-6 use verbatim stimuli from those studies.
5) Specify exactly which analyses you will conduct to examine the main question/hypothesis. T-tests comparing mean rankings of each transaction.
We predict that Transaction 1 will have the highest (i.e., the least weird) mean ranking.
6) Any secondary analyses? To describe and explore the data, we will compare the CDFs of responses. We will also compare rankings across the three versions of Transaction 1, expecting Transaction 1 to be ranked directionally highest (least weird) in each case.
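For illustration only (not part of the pre-registered materials), a minimal Python sketch of the analyses described in questions 5 and 6 is given below. The file name rankings.csv, the column names rank_t1 through rank_t6, and the coding of ranks (1 = weirdest, 6 = least weird) are hypothetical placeholders; the paired form of the t-test is an assumption that follows from the within-subjects design.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant who passed the attention check,
# with columns rank_t1 ... rank_t6 holding each transaction's weirdness rank.
df = pd.read_csv("rankings.csv")

# Main analysis (question 5): compare Transaction 1's mean rank to each other
# transaction's; paired t-tests because every participant ranks all six transactions.
for k in range(2, 7):
    t, p = stats.ttest_rel(df["rank_t1"], df[f"rank_t{k}"])
    print(f"Transaction 1 vs. Transaction {k}: t = {t:.2f}, p = {p:.4f}")

# Secondary analysis (question 6): empirical CDFs of the ranks for each transaction.
ranks = np.arange(1, 7)
for k in range(1, 7):
    cdf = [(df[f"rank_t{k}"] <= r).mean() for r in ranks]
    print(f"Transaction {k} CDF over ranks 1-6:", np.round(cdf, 2))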
7) How many observations will be collected or what will determine sample size? No need to justify decision, but be precise about exactly how the number will be determined. We will aim for 150 participants in total. We will include an attention check prior to the Transaction 1 wording randomization, ending the survey for anyone who does not answer it correctly. Participants who fail the attention check will not count toward the recruitment goal (i.e., we will recruit until 150 participants have passed the attention check). We will also prevent workers who have taken similar surveys (identified by their MTurk IDs) from participating in the experiment.
8) Anything else you would like to pre-register? (e.g., data exclusions, variables collected for exploratory purposes, unusual analyses planned?) We will collect age and gender to describe the sample.