For distinct failure times k > j, the correlation within a cohort is
C(S_j, S_k) = ((1 - S_j) * S_k / (S_j * (1 - S_k)))^0.5.
I did the coding as follows.
glsV <- function(vS, vSse) {
  # vSse is the vector of SEs of the survival vector vS
  n <- length(vS)                 # was length(vSe): typo
  C <- diag(n)                    # unit diagonal, so diag(V) = vSse^2
  for (i in 1:(n - 1)) {          # note 1:n-1 parses as (1:n)-1 in S/R
    for (j in (i + 1):n) {        # c(i+1, n) would visit only two indices
      C[i, j] <- C[j, i] <- sqrt((1 - vS[i]) * vS[j] / (vS[i] * (1 - vS[j])))
    }
  }
  diag(vSse) %*% C %*% diag(vSse)
}
In combining different studies by stacking their survival vectors, the covariance matrix is block diagonal and is updated at each iteration with the current survival estimates.
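A small numpy sketch of this construction (the numbers below are the two studies tabulated in question 1, on the proportion scale; the function names are illustrative, not the original S-PLUS code):

```python
import numpy as np

def gls_cov(S, se):
    """V = diag(se) @ C @ diag(se), with
    C[j, k] = sqrt((1 - S[j]) * S[k] / (S[j] * (1 - S[k]))) for j < k."""
    S, se = np.asarray(S, float), np.asarray(se, float)
    n = len(S)
    C = np.eye(n)
    for j in range(n - 1):
        for k in range(j + 1, n):
            C[j, k] = C[k, j] = np.sqrt((1 - S[j]) * S[k] / (S[j] * (1 - S[k])))
    return np.diag(se) @ C @ np.diag(se)

def stack_block_diag(blocks):
    """Joint covariance of stacked survival vectors: block diagonal."""
    m = sum(b.shape[0] for b in blocks)
    V = np.zeros((m, m))
    i = 0
    for b in blocks:
        n = b.shape[0]
        V[i:i + n, i:i + n] = b
        i += n
    return V

# Two studies, proportions and SEs on the 0-1 scale.
V1 = gls_cov([.49, .46, .42, .40, .40], [.12] * 5)
V2 = gls_cov([.54, .47, .40, .40], [.10, .13, .13, .13])
V = stack_block_diag([V1, V2])   # 9 x 9, zeros between studies
```

With a unit diagonal in C, the diagonal of each block recovers the squared SEs.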
Questions:
1. The QR decomposition is rather unstable with this covariance structure. It reliably fails for a cohort whose survival is flat across successive time points, e.g. (entries are % survival (SE)):

   Time       1       2       3       4       5
   Study 1  49(12)  46(12)  42(12)  40(12)  40(12)
   Study 2  54(10)  47(13)  40(13)  40(13)

How can we get around this?
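One way to see the failure concretely: when two successive survival values are equal, the correlation formula gives exactly 1 for that pair, so the covariance matrix is rank deficient and any factorization of it breaks down. A self-contained numpy check on the Study 1 row (surv_cov is an illustrative helper, not the original code):

```python
import numpy as np

def surv_cov(S, se):
    """Covariance diag(se) @ C @ diag(se) with
    C[j, k] = sqrt((1 - S[j]) * S[k] / (S[j] * (1 - S[k]))) for j < k."""
    n = len(S)
    C = np.eye(n)
    for j in range(n - 1):
        for k in range(j + 1, n):
            C[j, k] = C[k, j] = np.sqrt((1 - S[j]) * S[k] / (S[j] * (1 - S[k])))
    return np.diag(se) @ C @ np.diag(se)

# Study 1: survival is flat between times 4 and 5 (both 40%), so those
# two columns of V are identical and the matrix is singular.
V = surv_cov([.49, .46, .42, .40, .40], [.12] * 5)
print(np.linalg.cond(V))   # enormous (or inf): V is rank deficient
```

Common workarounds worth considering: drop or merge the repeated time point, add a small ridge to the diagonal, or replace the QR step with an SVD- or pivoted-Cholesky-based solve that tolerates rank deficiency.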
2. We could not extract SEs from some studies. If we estimate the SE as SE(i) = (S_i * (1 - S_i) / n_i)^0.5, with n_i the number at risk, what are the arguments against doing this?
3. If a published survival estimate S(i) was computed including censored patients, but the paper reports only the initial n and the total number censored, could we use an effective sample size of (n - # censored)? In effect we would ignore the censored patients' contribution to the likelihood and shrink the sample size to the failures only (this clearly won't work for studies with low failure rates). What are the weak points of doing this?
4. Is there existing work on computing least-squares estimates of particular factors in a glm or lm object?
Thank you for bearing with me on these questions.
Regards,
Ba' Pham
Thomas C. Chalmers Centre for Systematic Reviews
Children's Hospital of Eastern Ontario Research Institute
Phone (613) 738 3951