—for portraying statistical methods as tools for routine discovery: Do the randomization, gather the data, pass statistical significance, and collect $200. Think of the thousands of careful scientists who, for whatever combination of curiosity, personal interest, or heterodoxy, decide to study offbeat topics such as ESP or the effect of posture on life success—but who conduct their studies carefully, gathering high-quality data and using designs and analyses that minimize the chances of being fooled by noise.
Also awkward was a full retraction by first author Dana Carney, who detailed many ways in which the data were handled in order to pull out apparently statistically significant findings. [No, upon reflection, I don’t think the article was fair, given the inevitable space limitations, as it places, without rebuttal, misrepresentations of my work and that of Dana Carney. — AG]
I wouldn’t have chosen to write an article about Amy Cuddy—I think Eva Ranehill or Uri Simonsohn would be much more interesting subjects.
We learn, individually and collectively, from our mistakes.
We’re all part of the process, and Dominus is doing the readers of the New York Times a favor by revealing one part of that process from the inside.
Suppose he’d fit a hierarchical model, done a preregistered replication, or used some other procedure to avoid jumping at patterns in noise. Then he most likely would’ve found nothing distinguishable from a null effect: no publication in JPSP (no, I don’t think they’d publish the results of a large multi-year study finding no effect for a phenomenon that most psychologists don’t believe exists in the first place), no article on Bem in the NYT. (I assume it depends on context: power pose will do more good than harm in some settings, and more harm than good in others.)