Leonid,
With regard to discarding runs at the boundary, what I had in mind was
runs that had reached the maximum number of iterations, but I realized
later that Jakob was referring to NONMEM's often irritating boundary
messages, which usually just mean that an initial estimate changed a lot
or that a variance was getting close to zero.
There are, of course, some cases where an estimate is truly at a
user-defined constraint. Assuming the user has thought carefully about
these constraints, I would interpret a run that finished at such a
constraint boundary as showing that NONMEM was stuck in a local minimum
(probably because of the constraint), and that if the constraint were
relaxed a more useful estimate might be obtained.
In those cases I think one can make an argument for discarding runs with
parameters at this kind of boundary, as well as runs that reached an
iteration limit.
In general I agree with your remarks (echoing those from Marc
Gastonguay) that one needs to think about how each bootstrap run
behaved. But some outcomes, such as non-convergence or a failed
covariance step, can be ignored because they do not materially influence
the bootstrap distribution.
One also needs to recognize that bootstraps can be seriously
time-consuming, and the effort required to understand all the ways that
runs might finish is usually not worth it, given the purposes of doing a
bootstrap.
The most important reason for doing a bootstrap is to get more robust
estimates of the parameters; this was the main reason these re-sampling
procedures were originally developed. The bootstrap estimate of the
parameters will usually be quite insensitive to the margins of the
distribution, where the questionable run results are typically located.
A secondary, semi-quantitative reason is to get a confidence interval,
which may be helpful for model selection. The interval may be influenced
by the questionable runs, but that is just part of the uncertainty the
confidence interval is meant to describe.
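(For readers following along: the two summaries described above, a robust central estimate and a percentile confidence interval, can be sketched in a few lines. The parameter, its values, and the 95% level are invented for illustration and are not from the original discussion.)

```python
# Sketch: summarising a bootstrap distribution by its median and a
# percentile confidence interval. The "CL" estimates are simulated
# stand-ins for results from 1000 bootstrap runs.
import random
import statistics

random.seed(1)
# Pretend these are clearance estimates from 1000 bootstrap runs,
# including some questionable results out in the tails.
cl_estimates = sorted(random.gauss(5.0, 0.5) for _ in range(1000))

# Robust central estimate: the bootstrap median is insensitive to
# the extreme tail values where problem runs tend to sit.
cl_median = statistics.median(cl_estimates)

# Percentile 95% confidence interval: the 2.5th and 97.5th percentiles
# of the sorted bootstrap estimates.
lower = cl_estimates[int(0.025 * len(cl_estimates))]
upper = cl_estimates[int(0.975 * len(cl_estimates)) - 1]
print(cl_median, (lower, upper))
```

Because the median and the percentile end-points are order statistics, moving a handful of tail values further out changes the interval only slightly, which is the insensitivity argument made above.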
Nick
On 10/07/2011 11:13 p.m., Leonid Gibiansky wrote:
I thought that the original post was "results at a boundary should NOT
be discarded" and that Nick's reply was just a typo. If it was not a
typo, I would disagree and argue that all results should be included:
Each bootstrap data set is a particular realization, and we should be
able to use all of them. If some realizations are so special that the
model behaves in an unusual way (with any definition of unusual:
non-convergence, failure of the covariance step, parameter estimates at
the boundary, etc.), then we either need to accept those results as they
are, or work with each of those special data sets one by one until we
reach parameter estimates we can accept, or change the bootstrap
procedure (add stratification by covariates, by dose level, by route of
administration, etc.) so that all data sets behave similarly.
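[Editor's note: a stratified resample of the kind mentioned here can be sketched as follows; the subject IDs, the "dose" stratification covariate, and the stratum sizes are invented for illustration.]

```python
# Sketch of stratified bootstrap resampling: subjects are resampled
# with replacement WITHIN each stratum (here an invented dose-level
# covariate), so every bootstrap data set keeps the original stratum
# sizes and all data sets behave more similarly.
import random
from collections import defaultdict

random.seed(42)
# Invented example: 18 subject IDs, six per dose level.
subjects = [(f"S{i:02d}", dose) for i, dose in
            enumerate([10] * 6 + [50] * 6 + [100] * 6)]

def stratified_resample(subjects):
    strata = defaultdict(list)
    for sid, dose in subjects:
        strata[dose].append(sid)
    resample = []
    for dose, ids in strata.items():
        # Sample with replacement within the stratum, keeping its size.
        resample.extend(random.choices(ids, k=len(ids)))
    return resample

boot_ids = stratified_resample(subjects)
print(len(boot_ids))  # same total number of subjects as the original
```

A plain (unstratified) resample would instead draw all 18 subjects from the pooled list, so a bootstrap data set could, by chance, contain no subjects at all from one dose level.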
Leonid
--------------------------------------
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
On 7/10/2011 2:57 PM, Stephen Duffull wrote:
Nick, Jakob, Marc et al
Thanks for your helpful comments. I agree with you that any results at a
boundary should be discarded from the bootstrap distribution.
On the whole I think the sentiments in this thread align with anecdotal
findings from my own experience. But I was just wondering how you define
your boundaries for variance and covariance parameters (e.g. OMEGA
terms)?
For variance terms, a lower boundary seems reasonably straightforward
(e.g. 1E-5 seems close to zero). Upper boundaries are, of course, open:
for the variance of a log-normal ETA, would 1E+5 or 1E+4 be large enough
to be considered close to a boundary? At what value would you discard
the result? At what correlation value would you discard a result as
being close to 1 (>0.99, >0.97, ...)? Clearly, for regulatory work you
could define these a priori, having chosen some arbitrary cut-off. But
the devil lies in non-regulatory work, where you may not have defined
these boundaries a priori.
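[Editor's note: whatever cut-offs are chosen, they can at least be applied mechanically; the sketch below flags runs near a boundary using the 1E-5 and 0.99 values mentioned above purely as illustrations, not recommendations. The run results are invented.]

```python
# Sketch: flagging bootstrap runs whose OMEGA variance or correlation
# estimates sit near a boundary. The cut-off values are the arbitrary
# illustrative numbers from the discussion above.
VAR_LOWER = 1e-5    # a variance this small is treated as effectively zero
CORR_UPPER = 0.99   # a correlation this large is treated as effectively 1

def near_boundary(omega_variances, omega_correlations):
    """Return True if any variance is near zero or any correlation is
    near +/-1, under the assumed cut-offs."""
    if any(v <= VAR_LOWER for v in omega_variances):
        return True
    if any(abs(r) >= CORR_UPPER for r in omega_correlations):
        return True
    return False

# Invented example runs: (variances, correlations)
runs = [
    ([0.09, 0.04], [0.30]),    # unremarkable
    ([0.09, 9e-6], [0.30]),    # variance at the lower boundary
    ([0.09, 0.04], [0.995]),   # correlation close to 1
]
flags = [near_boundary(v, r) for v, r in runs]
print(flags)  # [False, True, True]
```

Flagging rather than silently deleting keeps the choice of cut-off visible, which matters precisely because, as noted above, the cut-offs may not have been defined a priori.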
Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 56 Dunedin
New Zealand
E: [email protected]
P: +64 3 479 5044
F: +64 3 479 7034
W: http://pharmacy.otago.ac.nz/profiles/stephenduffull
Design software: www.winpopt.com
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
tel: +64 (9) 923-6730 fax: +64 (9) 373-7090 mobile: +64 (21) 46 23 53
email: [email protected]
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford