Re: [S] AIC for survreg models: extending the question

Prof Brian Ripley (ripley@stats.ox.ac.uk)
Mon, 16 Feb 1998 15:33:05 GMT


Jens Oehlschlaegel <oehl@Psyres-Stuttgart.DE> wrote:
>
> I was recently told that likelihoods from non-nested models are not
> really comparable, and thus that AIC (which penalizes for degrees of
> freedom) is also not comparable between non-nested models. Instead, I
> was told that the only way of comparing non-nested models is a
> Bayesian approach using Bayes factors (BIC).
>

[We are getting off S here, but perhaps it is worth going a little
further down this line.]

Perhaps you were told that the only _valid_ way was using Bayes
factors?  Certainly Akaike appears to me to have believed that AIC was
a way to compare the predictive performance of non-nested models, and
Cox wrote about testing non-nested hypotheses (`separate families of
hypotheses') in 1961 and 1962, in essence by embedding them in a
larger family.

One flaw in the AIC argument is that it assumes that each model is
true, but it is based on estimating how well each model would predict a
future dataset (which has nothing to do with hypothesis testing). A
flaw in the Bayes factor argument is that it answers the question

`Given that precisely one of these models is true, what should my
degree of belief be that it is this one?',

not which model I should use for prediction or even explanation. But
`the Bayesian approach using Bayes factors' is not the same thing as
using BIC (or bIC to you), which refers to either of two
approximations (by Akaike and Schwarz, each in 1978) to the integration
involved in finding Bayes factors.
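(For the record, the two criteria differ only in how they penalize the
maximized log-likelihood; a minimal sketch in S, with the numbers
invented purely for illustration:

    ll <- -180.3                 # maximized log-likelihood (invented)
    p  <- 4                      # number of estimated parameters
    n  <- 100                    # number of observations
    aic <- -2 * ll + 2 * p       # Akaike's criterion
    bic <- -2 * ll + log(n) * p  # Schwarz's large-n approximation to
                                 # -2 log(integrated likelihood)

so the disagreement between them is entirely in the 2p versus p log n
penalty.)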

> May I, or may I not compare the LL of different survival models (like
> Weibull, Log-Normal, Identity-Normal), if the degrees of freedom are
> equivalent (or made equivalent via AIC)?

The likelihoods are comparable in the technical sense that they are on
the same probability space, and are densities with respect to the same
measure. So a larger log-likelihood does indicate a better fit.
Whether using the model with the best (penalized) fit is a good idea is
debatable (and hotly debated), and may depend on the purpose to which
the model is put. My slant in one context is written out in chapter 2
of `Pattern Recognition and Neural Networks' (CUP, 1996).
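To make that concrete, here is a minimal sketch of the comparison asked
about, in S.  The ovarian data and variable names are just a convenient
example shipped with the survival library (details may differ between
S-PLUS and R versions); adjust to taste.

    library(survival)

    ## The same model under three error distributions: all three
    ## likelihoods are densities for the same (time, status) data.
    fw <- survreg(Surv(futime, fustat) ~ age, data = ovarian, dist = "weibull")
    fl <- survreg(Surv(futime, fustat) ~ age, data = ovarian, dist = "lognormal")
    fg <- survreg(Surv(futime, fustat) ~ age, data = ovarian, dist = "gaussian")

    ## The fitted log-likelihood is the second element of $loglik;
    ## AIC = -2 log L + 2 p, with p counting coefficients plus scale(s).
    sapply(list(weibull = fw, lognormal = fl, gaussian = fg),
           function(f) {
             ll <- f$loglik[2]
             p  <- length(f$coefficients) + length(f$scale)
             c(loglik = ll, AIC = -2 * ll + 2 * p)
           })

The model with the smallest AIC (equivalently, the largest penalized
log-likelihood) is the candidate; whether to act on that is the
debatable part above.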

My ultimate answer is that I do use AIC with parametric survival
models, cautiously, and find it closely related to the (much more
expensive) cross-validated measures of the performance I am interested in.
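
A rough sketch of the sort of cross-validated check I mean, assuming
the dsurvreg/psurvreg functions of the survival library are available
(the helper below is made up for illustration and hard-wires the
ovarian variables):

    ## K-fold cross-validated predictive log-likelihood for one survreg
    ## model: held-out contributions are log f(t) for events and
    ## log S(t) for censored times.
    cv.loglik <- function(formula, data, dist, K = 5) {
      fold <- sample(rep(1:K, length.out = nrow(data)))
      total <- 0
      for (k in 1:K) {
        fit  <- survreg(formula, data = data[fold != k, ], dist = dist)
        test <- data[fold == k, ]
        lp   <- predict(fit, newdata = test, type = "lp")
        ll   <- ifelse(test$fustat == 1,
                       log(dsurvreg(test$futime, lp, fit$scale,
                                    distribution = dist)),
                       log(1 - psurvreg(test$futime, lp, fit$scale,
                                        distribution = dist)))
        total <- total + sum(ll)
      }
      total
    }

    ## e.g. compare the orderings:
    ## cv.loglik(Surv(futime, fustat) ~ age, ovarian, "weibull")
    ## cv.loglik(Surv(futime, fustat) ~ age, ovarian, "lognormal")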

Brian