Hi Pete -
You can do this through trial simulation/estimation or through
information-theoretic approaches (e.g. POPT, PFIM, PopED). Both
methods will give you an estimate of expected parameter estimation
precision under a given design. Simulation/estimation approaches will
also provide an estimate of parameter estimation bias.
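To make the simulation/estimation summary concrete, here is a minimal Python sketch (the real workflow would use NONMEM output and R tools, and all numbers below are made up for illustration): given a stack of replicate estimates of typical CL from re-fitting each simulated trial, precision and bias fall out directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up replicate estimates of typical CL (L/h), standing in for the
# estimates you would collect by re-fitting each simulated trial
true_cl = 10.0
cl_hat = rng.normal(loc=10.4, scale=1.8, size=200)

# Precision: spread of the replicate estimates relative to the truth (%CV)
precision_pct = 100 * cl_hat.std(ddof=1) / true_cl
# Bias: mean deviation of the estimates from the truth, as a percentage
bias_pct = 100 * (cl_hat.mean() - true_cl) / true_cl

print(f"precision: {precision_pct:.1f}%CV  bias: {bias_pct:+.1f}%")
```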
The challenge is to accurately select the parameters (and model) a
priori. If you choose a model with fixed point estimates of THETA,
OMEGA and SIGMA, then your conclusions are only valid if the model
and parameters are an accurate representation of the truth. Since you
are extrapolating to a new population you could run into trouble in
this regard; there may be considerable uncertainty in the extrapolation
of your current parameters to the new population.
A more useful approach would be to conduct the simulations or
information theory analyses over a joint probability distribution
representing uncertainty in the model parameters (and maybe the model
itself). For information theoretic methods, PopED allows you to do
something like this (I don't mean to leave out other approaches that
may also accommodate this). For simulation-estimation methods, you
can implement this level of parameter uncertainty at the inter-trial
level using simulation tools like Trial Simulator, or the R functions
we've developed for simulation from uncertainty distributions in
NONMEM (NMSUDS: http://metruminstitute.org/downloads/index.shtml).
The joint uncertainty distributions can be derived from Bayesian
posterior distributions, bootstrap results or just an educated guess
about plausible distributions encompassing the uncertainty in
parameters due to the extrapolation to the new population.
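As a sketch of the sampling step (in Python here for illustration; NMSUDS does this in R/NONMEM), you draw one joint parameter set per trial replicate from the uncertainty distribution. The distribution below is a hypothetical multivariate normal on the log fixed effects, as might be summarized from a bootstrap or a Bayesian posterior; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical joint uncertainty distribution for the fixed effects
# (log CL, log V); means and covariance are illustrative only, not
# from any real model
mu = np.log([10.0, 50.0])        # typical CL (L/h) and V (L)
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])   # covariance of the log parameters

n_trials = 500
# One joint draw per trial replicate (inter-trial level uncertainty);
# each row would parameterize one simulated trial before re-estimation
draws = np.exp(rng.multivariate_normal(mu, cov, size=n_trials))

print(draws.shape)  # (500, 2)
```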
For each trial replicate, you'll get an estimate of parameter
precision (and bias), resulting in a probability distribution of
trial outcomes. You can then examine the sensitivity of the outcome
(e.g. %precision of typical CL) to the uncertainty in your simulation
parameters (and even model) by plotting trial outcome vs. the trial-
specific draws of a given parameter from its uncertainty
distribution. Do this for all parameters in your model. If there are
regions of these sensitivity curves that do not achieve the desired
target response, you could 1) modify the trial design to make it
robust enough to achieve the target across the distribution of
parameter uncertainty or 2) gain more information about the model
parameters (e.g. a pilot study in the new population), reducing the
range of uncertainty, and re-run the simulation exercise to determine
if the proposed design is sufficient given the improved estimates of
model parameters. You'll have to balance practical considerations in
either case.
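The sensitivity check above can be sketched numerically as well. Here the per-replicate outcome (%RSE of typical CL) is generated from a toy linear relationship with the drawn parameter purely for illustration; in a real exercise each outcome comes from re-estimating a simulated trial. Binning the outcome by the parameter draw locates regions of the uncertainty distribution where the design fails the target.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in: one draw of an uncertain parameter (say,
# OMEGA on CL) per trial replicate, and the corresponding trial
# outcome (%RSE of typical CL). The linear outcome model is a toy.
omega_cl = rng.uniform(0.05, 0.6, size=300)          # uncertainty draws
rse_cl = 10 + 40 * omega_cl + rng.normal(0, 2, 300)  # toy outcomes

target = 30.0  # e.g. a requested precision: SE of typical CL < 30%CV

# Overall probability of meeting the target under parameter uncertainty
p_success = float(np.mean(rse_cl < target))

# Sensitivity: success rate within bins of the drawn parameter, to find
# regions of the uncertainty space where the design is not robust
bins = np.linspace(0.05, 0.6, 6)
rates = []
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (omega_cl >= lo) & (omega_cl < hi)
    rates.append(float(np.mean(rse_cl[in_bin] < target)))
    print(f"OMEGA in [{lo:.2f}, {hi:.2f}): P(RSE < 30%) = {rates[-1]:.2f}")
```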
In my experience, this sort of request is not new, especially in
populations such as pediatrics. Some recent poster presentations on
this topic are listed here and are available for download at
http://metrumrg.com/publications.htm:
1. Gastonguay MR, Gibiansky L. Acknowledging and Incorporating
Uncertainty in Model-Based Inferences. ECPAG Conference (2006)
Workshop Poster Session, Abstract.
2. Gibiansky L, Gastonguay MR. R/NONMEM Toolbox for Simulation from
Posterior Parameter (Uncertainty) Distributions. PAGE (2006)
Abstract 958.
3. Mondick JT, Gibiansky L, Gastonguay MR, Veal GJ, Barrett JS.
Acknowledging parameter uncertainty in the simulation-based design of
an actinomycin-D pharmacokinetic study in pediatric patients with
Wilms’ Tumor or rhabdomyosarcoma. PAGE 15 (2006) Abstract 938.
4. Gastonguay MR, Gibiansky L. Acknowledging Parameter Uncertainty by
Simulating from Posterior Distributions with NONMEM and R. MUFPADA
Annual Meeting (2006) Abstract.
5. Gastonguay MR, El-Tahtawy A. Modeling and Simulation Guided Design
of a Pediatric Population Pharmacokinetic Trial for Hydromorphone.
The AAPS Journal. Vol. 7, No. S2, Abstract W5318, 2005.
Hope this helps.
Marc
Marc R. Gastonguay, Ph.D.
President & CEO, Metrum Research Group LLC [www.metrumrg.com]
Scientific Director, Metrum Institute [www.metruminstitute.org]
Email: [EMAIL PROTECTED] Direct: +1.860.670.0744 Main:
+1.860.735.7043
On Aug 29, 2007, at 11:28 AM, Bonate, Peter wrote:
Recently in an interaction with the FDA they asked us to power a
pharmacokinetic study to a given precision in a parameter estimate
based on a pop pk model in a population we have no experience
with. In other words, they wanted us to power a study to ensure
that the standard error of the population mean clearance was less
than 30% CV. Does anyone know how to do this a priori? Does this
seem to be something new?
Thanks,
pete bonate
Peter L. Bonate, PhD, FCP
Genzyme Corporation
Senior Director, Pharmacokinetics
4545 Horizon Hill Blvd
San Antonio, TX 78229 USA
[EMAIL PROTECTED]
phone: 210-949-8662
fax: 210-949-8219
crackberry: 210-315-2713