#137012 | AsPredicted

'The effect of expertise and anchor relevance on anchoring II'
(AsPredicted #137,012)


Author(s)
This pre-registration is currently anonymous to enable blind peer-review.
It has 2 authors.
Pre-registered on
2023/06/28 01:40 (PT)

1) Have any data been collected for this study already?
No, no data have been collected for this study yet.

2) What's the main question being asked or hypothesis being tested in this study?
Hypothesis 1: Experts are less influenced by presented anchors than novices.
Hypothesis 2: Relevant anchors have a larger anchoring effect on subsequent judgments than irrelevant anchors.
Hypothesis 3: Anchors having larger deviations from the correct judgment have a larger effect on subsequent judgments than anchors with smaller deviations.
Hypothesis 4: Expertise and anchor relevance interact such that experts show less anchoring for relevant than for irrelevant anchors whereas novices show the opposite pattern.

With these hypotheses, we can also compare different theories of anchoring: the Insufficient Adjustment Model (IAM; Tversky & Kahneman, 1974), the Insufficient Adjustment Model including the assumption that individuals hold ranges of plausible values (IAM-PL; Epley & Gilovich, 2001), the Selective Accessibility Model (SAM; Mussweiler & Strack, 1999), and Scale Distortion Theory (SDT; Frederick & Mochon, 2012).

- H1: IAM predicts no difference between experts and novices, since the model does not posit differential adjustment processes. IAM-PL is consistent with H1, since experts can be expected to hold smaller ranges of plausible judgments than novices. SAM predicts the opposite, as experts can generate more information consistent with the anchor. SDT is consistent with H1, as experts rely less on an external scale for their judgment than novices.
- H2: IAM predicts no effect of anchor relevance, as it does not distinguish between different origins of an anchor. The range of plausible judgments assumed by IAM-PL may be affected by relevant anchors, such that these anchors show larger anchoring effects than irrelevant anchors. SAM makes the same prediction, as generating information in favor of the anchor is easier for relevant anchors. SDT also predicts that relevant anchors produce larger anchoring effects, since they provide scales relevant for making a judgment.
- H3: Since we do not expect extreme anchors, all theories predict similar effects.

3) Describe the key dependent variable(s) specifying how they will be measured.
Relative error, computed as the relative deviation of the judgment from the correct answer (see Mayer, Broß, & Heck, 2023).
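As an illustration, the relative error for a single judgment might be computed as follows. This is a minimal sketch assuming a signed-deviation definition; the exact operationalization is the one given in Mayer, Broß, & Heck (2023), which is not reproduced in this pre-registration.

```python
def relative_error(judgment: float, correct: float) -> float:
    """Signed relative deviation of a judgment from the correct answer.

    Assumed definition for illustration: (judgment - correct) / correct,
    so positive values indicate overestimation and negative values
    indicate underestimation.
    """
    return (judgment - correct) / correct
```

Under this definition, a judgment of 135 for a correct answer of 100 yields a relative error of 0.35, i.e., a 35% overestimate.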

4) How many and which conditions will participants be assigned to?
Expertise (between): Participants are either trained to raster-scan presented images, making them experts in judging the number of presented dots, or they read an essay about the importance of accurate judgments, making them novices.
Anchor relevance (between): Presented anchors are either embedded in random facts (irrelevant for the task) or framed as judgments of previous participants (relevant for the task).
Anchor deviation (within): Presented anchors are identical in both relevance conditions and deviate by +/-35% or +/-70% from the correct answer.

5) Specify exactly which analyses you will conduct to examine the main question/hypothesis.
We use linear mixed models to regress the relative error on expertise (0.5 = expert, -0.5 = novice), anchor relevance (0.5 = relevant, -0.5 = irrelevant), and anchor deviation (0.5 = large deviation, +/-70%; -0.5 = small deviation, +/-35%). We include random intercepts for participants and items.
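A sketch of such a model in Python with statsmodels, using simulated data (the pre-registration does not specify the software; mixed models of this kind are more commonly fit with R's lme4, and all variable names here are illustrative). Crossed random intercepts for participants and items are encoded via variance components inside a single dummy group, which is one way statsmodels supports non-nested random effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with the pre-registered effect coding (names assumed).
rng = np.random.default_rng(1)
n_part, n_item = 40, 8
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_part), n_item),
    "item": np.tile(np.arange(n_item), n_part),
})
df["expertise"] = np.where(df["participant"] % 2 == 0, 0.5, -0.5)   # between
df["relevance"] = np.where(df["participant"] % 4 < 2, 0.5, -0.5)    # between
df["deviation"] = np.where(df["item"] % 2 == 0, -0.5, 0.5)          # within
df["rel_error"] = rng.normal(size=len(df))  # placeholder outcome

# Crossed random intercepts for participants and items, expressed as
# variance components within one constant grouping variable.
model = smf.mixedlm(
    "rel_error ~ expertise * relevance * deviation",
    data=df,
    groups=np.ones(len(df)),
    vc_formula={"participant": "0 + C(participant)", "item": "0 + C(item)"},
)
fit = model.fit()
```

The fixed-effects part contains eight terms: the intercept, three main effects, three two-way interactions, and the three-way interaction; H4 corresponds to the expertise-by-relevance interaction.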

6) Describe exactly how outliers will be defined and handled, and your precise rule(s) for excluding observations.
Participants are excluded if they provide judgments that indicate low effort or no interest in the study, that is, if they provide the same answer, number sequences, or extreme judgments (more than 150% deviation) for more than 20% of items. Additionally, participants who correctly suspect the topic of the study (anchoring) are excluded. Single judgments are excluded if they time out after 60 seconds.
Moreover, participants are excluded during participation if they do not consent to the terms and conditions of the study, access the study with a mobile device, switch away from the browser window more than 5 times during participation, or answer fewer than 3 of 4 control questions correctly after being instructed to use raster scanning.
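Two of the judgment-level rules above (repeated answers and extreme deviations) can be sketched as a screening function. This is an illustrative sketch only; the column names and the treatment of repeated answers as consecutive duplicates are assumptions, and the number-sequence rule is omitted.

```python
import pandas as pd

def flag_low_effort(judgments: pd.DataFrame,
                    max_rel_deviation: float = 1.5,
                    max_flag_share: float = 0.20) -> pd.Series:
    """Flag participants whose judgments indicate low effort.

    A judgment counts as suspect if it repeats the participant's previous
    answer or deviates more than 150% from the correct answer; a
    participant is flagged for exclusion when more than 20% of their
    judgments are suspect. Expects columns: participant, item, judgment,
    correct (names assumed for illustration).
    """
    df = judgments.sort_values(["participant", "item"])
    repeated = df.groupby("participant")["judgment"].diff() == 0
    extreme = (df["judgment"] - df["correct"]).abs() / df["correct"] > max_rel_deviation
    suspect = repeated | extreme
    return suspect.groupby(df["participant"]).mean() > max_flag_share
```

The function returns a boolean Series indexed by participant, which can then be used to drop flagged participants before analysis.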

7) How many observations will be collected or what will determine sample size?
No need to justify decision, but be precise about exactly how the number will be determined.

A power analysis for a 2x2 between-subjects design with alpha = .05, power (1 - beta) = .95, and an effect size of f = 0.25 (medium effect) revealed a required sample of at least 148 participants. To ensure a sufficient sample size after possible exclusions and to allow for random effects in our models, we collect data from at least 250 participants and aim for 60 participants per condition.
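A power analysis of this kind can be reproduced in Python with statsmodels. Note that this sketch targets the omnibus F-test of a one-way ANOVA with four cells, which is only one way to parameterize a 2x2 design; the required N depends on whether the omnibus test or a single 1-df effect is targeted, so it need not match the figure reported above (the pre-registration does not state which tool or parameterization was used).

```python
from statsmodels.stats.power import FTestAnovaPower

# Required total sample size for a four-cell between-subjects design,
# Cohen's f = 0.25 (medium effect), alpha = .05, power = .95.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,
    alpha=0.05,
    power=0.95,
    k_groups=4,
)
```

`solve_power` returns the total number of observations across all cells; rounding up and dividing by four gives the per-condition target.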

8) Anything else you would like to pre-register?
(e.g., secondary analyses, variables collected for exploratory purposes, unusual analyses planned?)

Nothing else to pre-register.

Version of AsPredicted Questions: 2.00