Statistician’s creed

This page was last updated on 6 September 2018.


Coolum, Australia

Reference
Nester, M. R. (1996). An applied statistician's creed. Journal of the Royal Statistical Society, Series C (Applied Statistics), 45, 401–410.

I do not own the copyright, but it is conceivable that a copy of the above paper can be found somewhere on the Internet.

About the creed
The creed relates to null hypothesis testing, such as testing whether two treatments have identical average effects. I argue that in almost all cases there are sound scientific (biological, physical, chemical) reasons to expect that two treatments will not have perfectly identical average effects. In a similar vein, is it reasonable to expect that two different teaching methods, say, will lead to perfectly identical average scores on student tests?

The creed makes a number of sweeping generalizations, viz. that all treatments differ, that all factors interact, that no data are strictly normally distributed, that no relationships between variables are perfectly linear, etc. These claims therefore render almost all null hypothesis testing pointless. To put it bluntly, I believe that most null hypothesis testing is just plain silly.

I believe that researchers should not be asking whether two treatment means are identical, but how different they are. How large are the interactions among factors? Are the data sufficiently normal for whatever purpose you have in mind, or should you be looking for a better-fitting distribution? Is the relationship sufficiently linear for your intended use?
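As a minimal sketch of this shift in emphasis, the following example estimates how different two treatment means are, with an interval conveying uncertainty, rather than testing the (almost surely false) hypothesis that they are identical. The data are simulated stand-ins for the teaching-methods scenario above, and the 1.96 multiplier assumes a large-sample normal approximation; none of this comes from the creed paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test scores under two teaching methods (simulated).
a = rng.normal(70.0, 10.0, size=200)
b = rng.normal(72.0, 10.0, size=200)

# Estimate the size of the difference, not whether it is exactly zero.
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

# Large-sample 95% confidence interval (normal approximation).
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"estimated difference: {diff:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```

The interval answers the question the creed says we should be asking: the plausible magnitude of the difference, which a bare "reject/fail to reject" verdict discards.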

Many of the sentiments in the creed paper had been expressed before, but possibly not in such a comprehensive manner.

Marks Nester on ResearchGate