Return to Discussion Forum

Title: P-Value Fallacy and Error Rates

On the bottom of this page, you will find the topic for discussion and the name of the contributor.

Please add comments and then click on the "Add comment" button.


-- PeterBacchetti - 17 Aug 2011

I agree that overemphasis on whether or not P<0.05 greatly increases confusion when interpreting statistical results. On the front lines of medical research, I constantly see much more blatant problems than what Goodman wrote about, most notably the interpretation of P>0.05 as proof of no difference. Both the subtle and blatant problems appear to be related to the hybridization of statistical hypothesis testing (for making automatic decisions) with quantification of strength of evidence. Gigerenzer has traced and analyzed this to some extent (Gigerenzer, G. 2004. Mindless statistics. Journal of Socio-Economics 33:587-606; and a book chapter referenced therein), noting that current practices contradict all the major foundational theories, including Neyman-Pearson. The persistence of widespread naïve misinterpretation of P-values appears to me to be an unexplained sociological phenomenon.

The proposal to emphasize Bayes factors instead of P-values might help, but I suspect that they would become subject to the same naïve misinterpretations that P-values currently elicit. A different possible solution is to emphasize estimates and the uncertainty around them. This can be done within either a Bayesian or frequentist framework. The hard part is getting scientists to deal realistically with uncertainty, given the pressure to claim definitive “findings”. The lecture linked to the “see also” CTSpedia page for this discussion is part of my efforts to steer clinical researchers in this direction.
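To make the contrast concrete, here is a minimal Python sketch (the effect size and standard error are hypothetical, not taken from any study) that computes, for the same test statistic, the two-sided P-value, Goodman's minimum Bayes factor exp(-z²/2) for a normal test statistic, and the estimate-with-uncertainty view as a 95% confidence interval:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical data: estimated difference of 2.0 with standard error 1.0
diff, se = 2.0, 1.0
z = diff / se

# Two-sided P-value for the null hypothesis of no difference
p_two_sided = 2.0 * (1.0 - normal_cdf(abs(z)))

# Goodman's minimum Bayes factor for a normal test statistic:
# the strongest evidence against the null that the data can provide
min_bf = math.exp(-z * z / 2.0)

# 95% confidence interval for the difference (estimate with uncertainty)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"z = {z:.2f}, two-sided P = {p_two_sided:.4f}")
print(f"minimum Bayes factor = {min_bf:.4f}")
print(f"95% CI for difference: ({ci_low:.2f}, {ci_high:.2f})")
```

Here P ≈ 0.046 ("significant" by the usual cutoff), yet the minimum Bayes factor of about 0.14 says the data are at best roughly 7 times more likely under the best-supported alternative than under the null, and the interval (0.04, 3.96) shows how imprecise the estimate really is: three different summaries of the same result, answering three different questions.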

DiscussionBERDForm

Title P-Value Fallacy and Error Rates
Description - Problem to be explored: Hello All - May I take the liberty of introducing a general question as a sidebar to the ongoing discussion? It relates to the role of the P-value (and its cousins: error rates, size, etc.) in interpreting the findings of medical research, in particular the so-called "double duty" performed by this number, termed the P-value fallacy, i.e., "... the mistaken idea that a single number can capture both the long-run outcomes of an experiment and the evidential meaning of a single result ..." (Goodman, Ann Intern Med 1999;130:995-1013). I have a feeling that much of the "pain" clinical researchers experience with regard to the statistical interpretation of results could be reduced if we continue to clarify this fallacy and move toward a (somewhat) less confusing solution, along the lines of Steven Goodman. Comments from our distinguished discussants will be immensely appreciated from the bottom of my heart.

Warm regards.
Rakesh Shukla
Contributor/Email Rakesh Shukla (SHUKLAR@UCMAIL.UC.EDU)
See Also Bacchetti - CTSpedia Content of Interest - P-Value Fallacy
Disclaimer The views expressed within CTSpedia are those of the author and must not be taken to represent policy or guidance on the behalf of any organization or institution with which the author is affiliated.
Topic revision: 09 May 2013, MaryBanach