
# Discussion Topic: Verification of Assumptions

I agree with the point from Knut and in the recorded discussion that “verifying” assumptions is not a reasonable goal, or even possible. Instead, we just want the model to be reasonably accurate, not literally true. To meet this goal, I advocate performing “due diligence” assessments of model assumptions, and I’ve argued that large P-values are of some value in such situations (in contrast to their usual worthlessness). As long as there is no particular *a priori* suspicion about an important departure from the assumptions, it may be enough to show that some reasonable precaution was taken and no alarming evidence was found.
Knut raises the important issue of how best to measure central tendency (or more generally, effects of predictors on the outcome). I agree that this should usually be decided *a priori* on conceptual grounds, rather than by empirically looking to see what better approximates statistical modeling assumptions. This is a key issue when deciding whether or not to logarithmically transform an outcome variable for modeling. If modeling the outcome on the most meaningful scale results in violations of statistical assumptions (such as Gaussian residuals), then I advocate using bootstrapping or other advanced methods to obtain valid confidence intervals; this seems preferable to modeling the wrong thing for statistical convenience. A common example is when cost is the outcome. Costs are often skewed, with a small number of patients with very high costs. Nevertheless, the handful of high-cost patients really are very important and should not be down-weighted by use of logarithmic transformation or nonparametric methods. The raw arithmetic mean cost is what matters for policy or for a hospital’s bottom line, and the geometric mean and median are usually not relevant.
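The bootstrap approach advocated above can be sketched briefly. The snippet below is a minimal illustration, not anyone's published method: it computes a percentile-bootstrap confidence interval for the raw arithmetic mean of a skewed cost variable, so the few high-cost patients keep their full weight instead of being shrunk by a log transform. The cost figures are hypothetical, invented purely for illustration.

```python
import random
import statistics

def bootstrap_mean_ci(data, n_boot=5000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the arithmetic mean.

    Resamples the data with replacement n_boot times and takes the
    empirical alpha/2 and 1 - alpha/2 quantiles of the resampled means.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(data)
    boot_means = sorted(
        statistics.fmean(rng.choices(data, k=n)) for _ in range(n_boot)
    )
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-patient costs: most are modest, a handful are very large.
costs = [200, 250, 300, 310, 400, 450, 500, 520, 600, 12000, 25000]
lo, hi = bootstrap_mean_ci(costs)
print(f"mean cost = {statistics.fmean(costs):.0f}, 95% CI = ({lo:.0f}, {hi:.0f})")
```

The interval is asymmetric, reflecting the skew, yet it is still an interval for the quantity that actually drives the budget: the arithmetic mean. More refined variants (e.g. bias-corrected and accelerated bootstrap) exist, but the percentile version shows the idea.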
[Excerpted from a different thread.]
Re 3.5., the focus on the "assumption [of a] Gaussian distribution" may also be misleading. First, least-squares methods are highly robust against deviations of the empirical distribution of residuals from the Gaussian distribution (Scheffé 1959). Second, the lack of a "significant" result in a test for deviation does not prove the null hypothesis (of a Gaussian distribution). Hence, requiring "empirical verification" of assumptions could create the very problems it is supposed to address. Finally, the focus on the Gaussian distribution may cause other assumptions, such as the adequacy of the measure of central tendency being used (arithmetic mean, geometric mean, median, ...), to be overlooked. Which measure of central tendency to choose can rarely be decided (or verified) from the data; instead, knowledge of the subject matter needs to be applied to select this measure. In particular, an approximate answer to the correct question may be better than an exact answer to the wrong question.
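How much the choice of central-tendency measure matters is easy to demonstrate numerically. The sketch below, using hypothetical skewed cost data (invented for illustration), shows the arithmetic mean, geometric mean, and median diverging sharply; only the arithmetic mean, multiplied by the number of patients, recovers the total spend that a budget holder cares about.

```python
import math
import statistics

# Hypothetical per-patient costs with a long right tail.
costs = [200, 250, 300, 310, 400, 450, 500, 520, 600, 12000, 25000]

arith = statistics.fmean(costs)                          # arithmetic mean
geo = math.exp(statistics.fmean(math.log(c) for c in costs))  # geometric mean
med = statistics.median(costs)                           # median

# Only the arithmetic mean reproduces the total cost exactly.
print(f"arithmetic mean: {arith:.0f}")
print(f"geometric mean:  {geo:.0f}")
print(f"median:          {med:.0f}")
print(f"total cost:      {sum(costs)} vs n * arith = {arith * len(costs):.0f}")
```

With these numbers the three measures differ by almost an order of magnitude, so "checking assumptions" cannot settle which one to report; that is a subject-matter decision, exactly as argued above.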