Counternull

In statistics, and especially in the statistical analysis of psychological data, the counternull is a statistic used to aid the understanding and presentation of research results. It revolves around the effect size, which is the mean magnitude of some effect (for example, the mean difference between two conditions) divided by the standard deviation (generally pooled over the conditions).[1]
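
As a concrete illustration (not drawn from the cited sources), such a standardized effect size can be computed from the raw scores of two conditions; the function and variable names below are hypothetical.

    import numpy as np

    def standardized_effect_size(group_a, group_b):
        # Mean difference between the two conditions divided by the pooled standard deviation.
        a = np.asarray(group_a, dtype=float)
        b = np.asarray(group_b, dtype=float)
        pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                      / (len(a) + len(b) - 2))
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)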

The counternull value is the effect size that is just as well supported by the data as the null hypothesis.[2] In particular, when the effect-size estimate is drawn from a distribution that is symmetric about its mean and the null value is zero, the counternull value is exactly twice the observed effect size.
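
A minimal sketch of the calculation, assuming a symmetric sampling distribution; the observed effect size of 0.30 is purely illustrative.

    def counternull(effect_size_obs, null_value=0.0):
        # The effect size lying as far above the estimate as the null value lies below it
        # (valid when the sampling distribution of the estimate is symmetric).
        return 2 * effect_size_obs - null_value

    counternull(0.30)   # -> 0.60: supported by the data exactly as well as the null value 0.0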

The null hypothesis is a hypothesis set up to be tested against an alternative; the counternull is the particular alternative that, when substituted for the null hypothesis, generates the same p-value for the observed data as the original null hypothesis of “no difference” did.[3]
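
This equal-support property can be checked numerically under the simplifying assumption (not stated in the sources) that the effect-size estimate is normally distributed about the true value; the numbers below are illustrative.

    from scipy.stats import norm

    d_obs = 0.30   # observed effect size (illustrative)
    se = 0.25      # standard error of the estimate (illustrative)

    # One-sided p-value of the observed estimate under the null hypothesis (effect size 0):
    p_null = norm.sf(d_obs, loc=0.0, scale=se)

    # One-sided p-value of the same estimate under the counternull (effect size 2 * d_obs),
    # taken in the opposite tail because the counternull lies above the estimate:
    p_counternull = norm.cdf(d_obs, loc=2 * d_obs, scale=se)

    assert abs(p_null - p_counternull) < 1e-12   # identical when the distribution is symmetric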

Some researchers contend that reporting the counternull, in addition to the p-value, serves to counter two common errors of judgment:[4]

  • assuming that failure to reject the null hypothesis at the chosen level of statistical significance means that the observed size of the "effect" is zero; and
  • assuming that rejection of the null hypothesis at a particular p-value means that the measured "effect" is not only statistically significant, but also scientifically important.

Both errors stem from treating an arbitrary significance threshold as a sharp reject/fail-to-reject dichotomy, which causes unnecessary confusion and artificial controversy.[5]

Other researchers prefer confidence intervals as a means of countering these common errors.[6]
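
For comparison, a minimal normal-approximation sketch of such an interval, using the same illustrative numbers as above: an interval that includes zero while also covering effect sizes as large as the counternull conveys both points about the data at once.

    from scipy.stats import norm

    d_obs, se = 0.30, 0.25                              # illustrative values
    lo, hi = norm.interval(0.95, loc=d_obs, scale=se)   # normal-approximation 95% CI
    print(f"95% CI for the effect size: ({lo:.2f}, {hi:.2f})")   # (-0.19, 0.79)
    print(f"Counternull value: {2 * d_obs:.2f}")                 # 0.60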

References

  1. Pashler, Harold E.; Stevens, S. S. (2002). Stevens' Handbook of Experimental Psychology. Chichester: John Wiley & Sons. pp. 138, 422. ISBN 0-471-44333-6. The counternull revolves around an increasingly common measure called “effect size,” which, essentially, is the mean magnitude of some effect (e.g., the mean difference between two conditions) divided by the standard deviation (generally pooled over the conditions).
  2. Rubin, Donald B.; Rosenthal, Robert; Rosnow, Ralph L. (2000). Contrasts and effect sizes in behavioral research: a correlational approach. Cambridge, UK: Cambridge University Press. p. 5. ISBN 0-521-65258-8.
  3. Iacobucci, Dawn (2005). "From the Editor" (PDF). Journal of Consumer Research. 32: 6–11. doi:10.1086/430648. Archived from the original (PDF) on 2005-11-08. Retrieved 2007-08-01.
  4. Rosenthal, R.; Rubin, D.B. (1994). "The counternull value of an effect size: A new statistic". Psychological Science. 5 (6): 329–334. doi:10.1111/j.1467-9280.1994.tb00281.x.
  5. Pashler (2002), p. 348: "The reject/fail-to-reject [the null hypothesis] dichotomy keeps the field awash in confusion and artificial controversy."
  6. Boik, Robert J. (2001). "Review of Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach by Robert Rosenthal, Ralph L. Rosnow & Donald B. Rubin". Journal of the American Statistical Association. 96 (456): 1528–1529. doi:10.1198/jasa.2001.s432. JSTOR 3085927. If interval estimates of standardized effect size measures are desired, then a more sensible approach is to construct confidence intervals having fixed confidence coefficients.

Further reading

  • Rosnow, R. L., & Rosenthal, R. (1996). Computing contrasts, effect sizes, and counternulls on other people's published data: General procedures for research consumers. Psychological Methods, 1, 331–340.