Title: Non-standard errors

Authors: Menkveld, Albert J.; Dreber, Anna; Holzmeister, Felix; Huber, Juergen; Johannesson, Magnus; Kirchler, Michael; Neussüs, Sebastian; Razen, Michael; Weitzel, Utz; Brownlees, Christian T.; Gil-Bazo, Javier; et al.

Institution: Universitat Pompeu Fabra. Departament d'Economia i Empresa

Issue date: 2021-12-01
Date available: 2024-11-14
URI: http://hdl.handle.net/10230/68657

Abstract: In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.

Format: application/pdf
Language: eng
Rights: Access to the contents of this document is subject to acceptance of the terms of use established by the following Creative Commons license.
Type: info:eu-repo/semantics/workingPaper
Access: info:eu-repo/semantics/openAccess
Keywords: non-standard errors; multi-analyst approach; liquidity; Finance and Accounting
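
Illustrative note (not part of the record): the abstract contrasts standard errors, which capture within-analysis sampling uncertainty, with non-standard errors, which capture dispersion across independent teams analysing the same sample. The sketch below is a minimal, hypothetical illustration of that distinction, assuming each team reports one point estimate and one standard error; the numbers and variable names (team_estimates, team_standard_errors) are placeholders, not data or code from the paper.

    import numpy as np

    # Hypothetical point estimates and standard errors reported by
    # independent teams testing the same hypothesis on the same sample.
    # Purely illustrative values, not results from the study.
    team_estimates = np.array([0.12, 0.08, 0.15, 0.05, 0.11, 0.09])
    team_standard_errors = np.array([0.03, 0.04, 0.03, 0.05, 0.02, 0.04])

    # Standard error: within-team sampling uncertainty,
    # summarized here by the average across teams.
    mean_standard_error = team_standard_errors.mean()

    # Non-standard error: dispersion of point estimates across teams,
    # i.e. uncertainty arising from different evidence-generating processes.
    non_standard_error = team_estimates.std(ddof=1)

    print(f"mean standard error: {mean_standard_error:.3f}")
    print(f"non-standard error:  {non_standard_error:.3f}")

Under this reading, the paper's headline finding that non-standard errors are "on par with standard errors" corresponds to these two quantities being of similar magnitude.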