Thank you for your advice, Tim.
I am reading your paper and the other materials on your website.
I could not find an R package for your bootknife method. Is there an R
package for this procedure?
(11/05/17 14:13), Tim Hesterberg wrote:
My usual rule is that whatever gives the widest confidence intervals
in a particular problem is most accurate for that problem :-)
The choice is not clear, and requires some simulations to estimate the
average absolute error of the covariance matrix estimators.
Frank
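To make Frank's suggestion concrete, here is a minimal sketch of such a simulation. It is illustrative only: it uses plain OLS (where the true coefficient covariance is known exactly for fixed X) rather than the penalized logistic model from this thread, Python/numpy rather than rms::bootcov, and all names and settings (n, B, the number of replications) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.5, -0.5])
sigma = 1.0

# With fixed X, Cov(beta_hat) = sigma^2 (X'X)^{-1} is known exactly,
# so each estimator's error can be measured directly.
true_cov = sigma**2 * np.linalg.inv(X.T @ X)

def fit(Xm, y):
    return np.linalg.lstsq(Xm, y, rcond=None)[0]

def model_vcov(Xm, y):
    """Usual model-based estimate: s^2 (X'X)^{-1}."""
    resid = y - Xm @ fit(Xm, y)
    s2 = resid @ resid / (len(y) - Xm.shape[1])
    return s2 * np.linalg.inv(Xm.T @ Xm)

def boot_vcov(Xm, y, B=200):
    """Nonparametric (case-resampling) bootstrap covariance of beta_hat."""
    m = len(y)
    coefs = [fit(Xm[idx], y[idx])
             for idx in (rng.integers(0, m, size=m) for _ in range(B))]
    return np.cov(np.array(coefs), rowvar=False)

errs_model, errs_boot = [], []
for _ in range(100):                     # 100 simulated datasets
    y = X @ beta + sigma * rng.normal(size=n)
    errs_model.append(np.abs(model_vcov(X, y) - true_cov).mean())
    errs_boot.append(np.abs(boot_vcov(X, y) - true_cov).mean())

print("avg |error|, model-based:", np.mean(errs_model))
print("avg |error|, bootstrap:  ", np.mean(errs_boot))
```

The same design carries over to the penalized logistic setting: simulate from a model resembling the fitted one, and for each simulated dataset compare each covariance estimate to the covariance observed across replications.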
細田弘吉 wrote:
Thank you for your reply, Prof. Harrell.
I agree with you. Dropping only one variable does not actually help a lot.
I have one more question.
Thank you for your comment, Prof. Harrell.
I would appreciate it very much if you could teach me how to run the
simulations for that estimation. For reference, the following code is
what I did (bootcov, summary, and validation).
MyFullModel.boot <- bootcov(MyFullModel, B=1000, coef.reps=TRUE)
My usual rule is that whatever gives the widest confidence intervals
in a particular problem is most accurate for that problem :-)
Bootstrap percentile intervals tend to be too narrow.
Consider the case of the sample mean; the usual formula CI is
xbar +- t_alpha sqrt( (1/(n-1)) sum((x_i - xbar)^2) / n )
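The narrowness Tim describes is easy to check by comparing interval half-widths for the sample mean directly. A minimal sketch in Python (numpy/scipy rather than R; the sample size, seed, and number of resamples are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, B = 10, 2000
x = rng.normal(size=n)

# Usual formula CI: xbar +- t_alpha * s / sqrt(n), with s^2 using 1/(n-1)
xbar = x.mean()
s = x.std(ddof=1)
t_half = stats.t.ppf(0.975, df=n - 1) * s / np.sqrt(n)

# Bootstrap percentile CI: central 95% of the resampled means
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(B)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
perc_half = (hi - lo) / 2

print(f"t half-width:          {t_half:.3f}")
print(f"percentile half-width: {perc_half:.3f}")
```

With n this small the percentile interval comes out systematically narrower: it effectively replaces the t quantile by a z quantile and the 1/(n-1) variance by 1/n, both of which shrink the interval.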
Hi,
I am trying to construct a logistic regression model from my data (104
patients and 25 events). I built a full model consisting of five
predictors using penalization from the rms package (lrm, pentrace,
etc.) because of the events-per-variable issue. Then I tried to
approximate the full model
I think you are doing this correctly except for one thing. The validation
and other inferential calculations should be done on the full model. Use
the approximate model to get a simpler nomogram but not to get standard
errors. If you are dropping only one variable, you might consider just running the
I have one more question.
While analyzing this model, I found that the confidence
intervals (CIs) of some coefficients provided by bootstrapping (the bootcov
function in rms