Re: [S] Fisher's exact test isn't always appropriate

Bill Venables
Thu, 26 Mar 1998 17:51:04 +1030

Frank E. Harrell, Jr. writes:
> There seems to be a tremendous interest in Fisher's exact
> test. At the risk of setting off a religious war please let
> me note a few points:

There's no chance of your setting it off, Frank. This particular
statistical religious war has been raging now for about 70 years!

> 1. Fisher's test is a conditional test whereas we're usually
> interested in unconditional tests (i.e., not conditioning
> on the margins)

Huh? If the margins were actually fixed before sampling you
would certainly want to condition on them, and Fisher's exact
test would essentially be incontrovertible. If they were not,
then the point is a little more moot but, asymptotically at
least, you don't lose anything by conditioning and the minuscule
gain you theoretically would get out of not conditioning is very
difficult to realise in practice. (See below.)
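To make the conditional test concrete, here is a minimal sketch in Python using scipy (not part of the original post; the 2x2 table is invented purely for illustration). Fisher's exact test conditions on both margins, so the cell count follows a hypergeometric distribution under the null:

```python
# Fisher's exact test on a hypothetical 2x2 table
# (e.g. treatment vs. control, success vs. failure).
# Conditioning on both margins makes the (1,1) cell hypergeometric
# under the null hypothesis of no association.
from scipy.stats import fisher_exact

table = [[8, 2],
         [1, 5]]
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p)
```

The reported p-value is exact given the margins; no large-sample approximation is involved, which is precisely why the test is attractive when expected counts are small.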

> 2. The test can be conservative - see

Yes, and some forms of evidence can be hard to get. That's the
way it is.

> 3. The rule of "expected value of 5 in each cell' to be able
> to use Pearson's chi-square is too stringent in many cases.

On this we agree entirely. I suspect the truth is much more
complex, in fact, and that in some rare cases the rule might not
be stringent enough. This question deserves more work. An
interesting oldish paper in this vein is

Larntz, Kinley (1978), "Small-sample Comparisons of Exact Levels
for Chi-squared Goodness-of-fit Statistics", Journal of the
American Statistical Association, 73, 253-263.
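The "expected count of 5" rule refers to the expected cell counts under independence, not the observed ones. A minimal sketch in Python using scipy (my addition, with an invented 2x3 table) shows how to compute them and flag cells that fall below the rule of thumb:

```python
# Expected counts under independence: E[i,j] = row_i * col_j / N.
# Hypothetical 2x3 table, invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

obs = np.array([[10, 4, 6],
                [ 8, 9, 3]])
stat, p, dof, expected = chi2_contingency(obs)
print(expected)              # expected counts under independence
print((expected < 5).any())  # any cell below the rule of thumb?
```

Here some expected counts dip below 5 even though most observed counts do not, which is exactly the situation where the adequacy of the chi-squared approximation is debated.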

> The default test I use is the likelihood ratio test.

Curiously so do I. If you look at the large sample theory you
find that under the null hypothesis the conditional distribution
is asymptotically chi-squared on (r-1)(c-1) degrees of
freedom. Since this
distribution does not depend on the margins on which you have
conditioned, it is also the asymptotic *unconditional*
distribution of the statistic.
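For an r x c table the likelihood ratio statistic is G = 2 * sum O log(O/E), referred to chi-squared on (r-1)(c-1) degrees of freedom. A minimal sketch in Python using scipy (my addition; the table is invented) computes it via the power-divergence family:

```python
# Likelihood ratio (G) test for an r x c contingency table:
#   G = 2 * sum O * log(O / E),  asymptotically chi-squared,
#   df = (r-1)(c-1).
# lambda_="log-likelihood" selects the G statistic; correction=False
# turns off the Yates continuity correction so G matches the formula.
import numpy as np
from scipy.stats import chi2_contingency

obs = np.array([[12, 5],
                [ 7, 9]])
G, p, dof, expected = chi2_contingency(obs, lambda_="log-likelihood",
                                       correction=False)
print(G, dof, p)  # dof = (2-1)*(2-1) = 1
```

Because the asymptotic reference distribution does not involve the margins, the same chi-squared tail probability serves whether or not one regards the margins as fixed, which is the point made above.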

Now I buy the conditional argument but Frank, it seems, does not.
What I am wondering now is if Frank and I do the same thing with
a 2-way table, is Frank's test an unconditional test and mine
conditional? :-)

> There are newer promising unconditional tests that I hope to
> learn about.

Are they coming round again? George Barnard was once an active
proponent of one such test, but did a complete 180 degree turn
and became one of the `conditional' camp's greatest supporters.
(The Neyman-Pearson school have had a fatwa on him ever since.)

The real question is this: if you have a very good approximate
ancillary, is it more important to condition on it to focus your
`reference set' (as Fisher called it) more precisely, or not to
condition on it to gain the tiny amount of information that in
principle conditioning loses? It is a moot point, but in this
case I think the conditioners do have the better case.


Bill Venables, Head, Dept of Statistics,    Tel.: +61 8 8303 5418
University of Adelaide,                     Fax.: +61 8 8303 3696
South AUSTRALIA.     5005.   Email:
