Re: [S] cross validating regression models

Frank E Harrell Jr (fharrell@virginia.edu)
Wed, 11 Mar 1998 15:41:37 -0500


>I'll check again, but as V-fold cross-validation is generally regarded
>as having become widely known through the CART book (1984), I do not
>believe those papers fully address that issue. And the CART book shows
>that the Efron bootstrap methods are biased for their method, which is
>why they did something else. [That agrees with the historical section in
>Davison & Hinkley, which I have just checked. I am sure Efron did not
>address repeated V-fold CV.]

It's very possible that bootstrapping works differently (worse) for CART than
for logistic regression.
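
For concreteness, the Efron-style optimism correction for a logistic fit
can be sketched in a few lines of R-style S code. Everything here is an
illustrative assumption: a data frame d with binary outcome y and
predictors x1 and x2, the Brier score as the accuracy index, and B = 200
resamples.

  ## Brier score: mean squared error of the predicted probabilities
  brier <- function(fit, data)
    mean((predict(fit, data, type = "response") - data$y)^2)

  fit.all <- glm(y ~ x1 + x2, family = binomial, data = d)
  app <- brier(fit.all, d)       # apparent (training-sample) accuracy

  B <- 200
  opt <- numeric(B)
  for (b in 1:B) {
    db <- d[sample(nrow(d), replace = TRUE), ]  # bootstrap sample
    fb <- glm(y ~ x1 + x2, family = binomial, data = db)
    opt[b] <- brier(fb, d) - brier(fb, db)      # optimism of this refit
  }
  app + mean(opt)   # optimism-corrected index for the full-data fit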

>> My work in averaging cross-vals is quite limited so I'll defer on this point.
>> The bootstrap still has two advantages here: (1) It validates the final fit, i.e., the
>> one developed on ALL of the data; and (2) if you've done variable selection (why?),
>> the bootstrap incorporates the right amount of variation due to model uncertainty.
>
>I don't follow that. Precisely what is the difference (except that
>bootstrapping uses a more peculiar dataset than 10-fold CV)? As I see it
>you have two ways of resampling, one a lot more balanced than the other,
>but no other fundamental difference.

You're still validating a model fitted to a sample of size 0.9n, I think.
And the set of "significant" variables selected by stepwise modeling may
not vary enough across the folds, although 10-fold CV is certainly better
than the jackknife (leave-one-out) in this respect. It's interesting that
with the .632 bootstrap, which attempts to balance the samples in some
sense, some indexes of predictive accuracy are estimated more accurately
while others get worse.
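
A sketch of the .632 estimator (Efron 1983) under the same hypothetical
setup as above (data frame d, predictors x1 and x2, Brier score). Note
that averaging the error over each replicate's left-out observations, as
done here, is a common simplification of Efron's per-observation
definition.

  brier <- function(fit, data)
    mean((predict(fit, data, type = "response") - data$y)^2)
  app <- brier(glm(y ~ x1 + x2, family = binomial, data = d), d)

  B <- 200
  eps0 <- numeric(B)
  for (b in 1:B) {
    idx <- sample(nrow(d), replace = TRUE)
    out <- setdiff(1:nrow(d), idx)      # observations not drawn
    fb  <- glm(y ~ x1 + x2, family = binomial, data = d[idx, ])
    eps0[b] <- brier(fb, d[out, ])      # error on the left-out points
  }
  0.368 * app + 0.632 * mean(eps0)      # the .632 estimate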

>
>> Leave-one-out definitely doesn't work in that context, and 10-fold doesn't work very well.
>
>Is that a theory or an observation? I have had no problems with
>V-fold CV under quite extreme model selection, and I understand that
>to be the problem with bootstrapping tree-based procedures.

I have seen this in print, at least regarding leave-one-out; I wish I
remembered where. The problem is that when you leave out a single
observation, the final stepwise model doesn't change often enough (e.g.,
compared with the variation in the variables selected when you do a
Monte Carlo simulation).
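
A small simulation along those lines makes the point; the setup below
(n = 100, p = 5, step() as the stepwise selector) is entirely made up.

  set.seed(1)
  n <- 100; p <- 5
  x <- matrix(rnorm(n * p), n, p)
  d <- data.frame(y = x[, 1] + rnorm(n), x)  # only X1 truly matters

  selected <- function(data)
    sort(attr(terms(step(lm(y ~ ., data = data), trace = 0)),
              "term.labels"))

  sel.all <- selected(d)

  ## leave-one-out: how often does deleting one point change the model?
  loo.diff <- sapply(1:n, function(i) !identical(selected(d[-i, ]), sel.all))
  mean(loo.diff)    # often close to 0

  ## fresh Monte Carlo samples of the same size: the selected set varies
  ## much more; this is the variation leave-one-out fails to reflect
  mc.diff <- replicate(50, {
    xb <- matrix(rnorm(n * p), n, p)
    !identical(selected(data.frame(y = xb[, 1] + rnorm(n), xb)), sel.all)
  })
  mean(mc.diff)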

-Frank
