Hi Jakob,
The derivation for your expression comes from a first-order approximation of
the variance often referred to as the "delta method" in the statistical
literature. The approximation is:
Var(f(x)) ~= [f'(x)]^2 Var(x), or equivalently, SE(f(x)) ~= |f'(x)| SE(x)
If we want SE(omega) but have the
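As a small sketch of how the delta-method approximation is applied in practice (the numbers below are hypothetical), suppose NONMEM reports a variance omega^2 with its standard error, and we want the standard error of omega = sqrt(omega^2):

```python
import math

def delta_method_se(x, se_x, fprime):
    """First-order (delta method) approximation: SE(f(x)) ~= |f'(x)| * SE(x)."""
    return abs(fprime(x)) * se_x

# Hypothetical example: the estimate of omega^2 and its SE.
# For f(x) = sqrt(x), f'(x) = 1 / (2*sqrt(x)), so SE(omega) = SE(omega^2)/(2*omega).
omega2, se_omega2 = 0.09, 0.02
se_omega = delta_method_se(omega2, se_omega2,
                           lambda v: 1.0 / (2.0 * math.sqrt(v)))
print(se_omega)
```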
Ron,
Truncation can also introduce bias so you should check for this in your
simulations as well. Also, if your model based on 100 patients results in
simulations of CL/F ranging from 1 to 300 L/hr but the post hoc estimates of
CL/F range from 5 to 30 L/hr then you may want to further assess th
Steven,
While I agree that simulation of a larger number of subjects will result in
a wider range of clearance I have my doubts that that alone would be
sufficient to account for a 50-fold increase in range (5 to 30 vs 1 to 300).
Another contributing factor is that the post hoc estimates will have
Andreas,
Your simulations highlight a limitation with the combined (additive +
proportional or slope-intercept) residual error model. The combined
residual error model cannot be the correct model at very low concentrations
since the normal distribution will put non-zero probability mass at
con
ation always
causes measurement bias (whether the LLOQ is 0 or greater).
Best wishes,
Nick
Ken Kowalski wrote:
>
> Andreas,
>
> Your simulations highlight a limitation with the combined (additive +
> proportional or slope-intercept) residual error model. The combined
> residual e
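The point about the combined error model placing non-zero probability mass below zero can be made concrete with a small sketch; the error magnitudes below are hypothetical:

```python
import math

def prob_negative(f, cv_prop, sd_add):
    """P(observed conc < 0) under Y = F*(1 + eps_p) + eps_a,
    with eps_p ~ N(0, cv_prop^2) and eps_a ~ N(0, sd_add^2)."""
    sd = math.sqrt((f * cv_prop) ** 2 + sd_add ** 2)
    # standard normal CDF evaluated at -F/sd, via the error function
    return 0.5 * (1.0 + math.erf(-f / (sd * math.sqrt(2.0))))

# Hypothetical: 20% proportional error, additive SD of 0.05 conc units.
print(prob_negative(1.00, 0.20, 0.05))  # far from zero: negligible mass below 0
print(prob_negative(0.01, 0.20, 0.05))  # near zero: substantial mass below 0
```

At concentrations well above the additive SD the probability of a negative value is negligible, but as the true concentration approaches zero it climbs toward 50%.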
Mitsuo,
This is a new message specific to NONMEM VI. I must confess I don't know
what to make of this message myself. It would be informative if someone
could tell us what internals in NONMEM trigger this message (i.e., "PROBLEMS
OCCURRED WITH THE MINIMIZATION").
With respect to your two model
Mats, Nick, and NMusers,
When Stu Beal was first thinking about reporting out a p-value for ETABAR I
know he was conflicted because he knew that the statistical properties of
the test were probably never likely to be met. A couple of statistical
properties that are probably not met that haven't b
Leonid,
I have never reported out as a final model a run that failed to converge or
failed the COV step. My guess is that individuals who frequently do probably
tend to be more mechanistic in their model building than I am and often push
the complexity of their models beyond what can be suppor
The method that Marc describes is labeled the PPP&D method in the Zhang et
al paper below. With this approach you set up the model just as if you were
going to do a simultaneous fit (that is the dataset contains DVs for both
the PK and PD (or metabolite)) but all of the population PK parameters
(
NMusers,
My apologies for entering into this discussion a bit late as I was on vacation
last week. Rather than rehash previous debates about $COV, I thought I would
just list some of the ways I use the $COV step output to assist my model
building and clinical trial simulation efforts.
Before
Dear NMusers,
For those of you using the Windows-based version of the WAM algorithm
(Wald's Approximation Method for covariate model building) developed by
Pharsight based on the methodology in the manuscript by Kowalski &
Hutmacher, JPP 2001;28:253-275, please note that we recently determined
Nick,
It sounds like you do recognize that models are often over-parameterized by
your statements:
" It is quite common to find that the
estimates EC50 and Emax are highly correlated (I assume SLOP=EMAX/EC50).
It would also be common to find that the random effects of EMAX and EC50
are also co
model will still do better. I cannot see how it can do worse than
the linear model (assuming the model passes other tests of plausibility
and the VPC looks OK).
Thanks for your 2c!
Nick
Ken Kowalski wrote:
> Nick,
>
> It sounds like you do recognize that models are often over-parameter
Hi Varsha,
In your control stream you have CL and Vd parameterized as:
TVCL = THETA(1) + THETA(3) * WT
TVVD = THETA(2) + THETA(4) * WT
Thus, THETA(1) and THETA(2) are the typical individual values of CL and Vd
at WT=0 which are meaningless parameters. If both CL and Vd are directly
proportion
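One way to see why the intercepts are hard to interpret, and a common remedy (centering the covariate on a reference weight), is the following sketch; the parameter values are hypothetical:

```python
# Linear covariate model as in the control stream: TVCL = THETA(1) + THETA(3)*WT.
# THETA(1) is then the typical CL at WT = 0, which has no physiological meaning.
# Centering on a reference weight gives the intercept a direct interpretation:
# TVCL = THETA(1) + THETA(3)*(WT - 70), so THETA(1) is CL for a 70-kg subject.

def tvcl_uncentered(theta1, theta3, wt):
    return theta1 + theta3 * wt

def tvcl_centered(theta1, theta3, wt):
    return theta1 + theta3 * (wt - 70.0)

print(tvcl_uncentered(1.5, 0.05, 0.0))  # THETA(1): CL of a "0-kg subject"
print(tvcl_centered(5.0, 0.05, 70.0))   # THETA(1): CL of a 70-kg subject
```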
Hi All,
I think we are getting off the point. What Varsha was asking is how can the CL
reported as L/kg/hr be so different between the two treatment groups while the
volume of distribution and half-life could be so similar when fitting a one
compartment model? I think the answer to this q
Multi-Discipline Model-Based Drug Development to Improve "Accelerate/Go/No
Go" Decisions for New Products
These live webinars will show how modern modeling techniques can be used to
avoid incorrect drug development decisions, tackle costs, and assess value.
The expert speakers will survey the u
Hi Pete,
In this setting I generally try to first model the baseline response and
perhaps pursue alternative structural model forms. For example, I may
consider a multiplicative relationship rather than an additive relationship
between baseline and placebo/drug effects. However, if the distribut
Hi Venkatesh,
I think it is reasonable to postulate models with combined additive and
proportional effects as long as they are supported by the data. You may
wish to generate some graphical diagnostics to support your structural model
choice in addition to the OFV (especially since these diffe
Hi Thierry,
Actually, devising a TDM program is precisely when you should be evaluating
whether you have substantial IOV. If IOV is considerably greater than IIV
then there is little benefit in a TDM program as you point out since a
concentration from one occasion may not contain much informat
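The intuition can be quantified: with a shared subject-level eta (variance IIV) and an occasion-specific kappa (variance IOV), the correlation between an individual's (log) parameter on two occasions is IIV/(IIV+IOV). A minimal sketch, with hypothetical variances:

```python
def between_occasion_corr(iiv, iov):
    """Correlation of a log-parameter across occasions when
    eta_i ~ N(0, iiv) is shared and kappa_ik ~ N(0, iov) is per-occasion."""
    return iiv / (iiv + iov)

print(between_occasion_corr(0.09, 0.01))  # IIV dominates: TDM sample informative
print(between_occasion_corr(0.01, 0.09))  # IOV dominates: little carry-over
```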
Hi Yuhong,
To expand on Serge's response that OFV can be negative for untransformed
data it is important to note that the OFV depends on the scale. You can
show that
OFV(Y/c) = OFV(Y) - 2n*log(c)
where n is the total number of observations and c is an arbitrary constant
such as one mig
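The scale dependence is easy to verify numerically for a normal likelihood; the data and parameter values below are hypothetical:

```python
import math

def neg2ll_normal(y, mu, sigma):
    """-2 log-likelihood of data y under N(mu, sigma^2)."""
    n = len(y)
    return (n * math.log(2.0 * math.pi) + 2.0 * n * math.log(sigma)
            + sum((yi - mu) ** 2 for yi in y) / sigma ** 2)

y = [1.2, 0.8, 1.5, 1.1, 0.9]    # hypothetical concentrations
mu, sigma, c = 1.1, 0.3, 1000.0  # c: e.g. rescaling ng/mL to ug/mL
n = len(y)

ofv = neg2ll_normal(y, mu, sigma)
ofv_scaled = neg2ll_normal([yi / c for yi in y], mu / c, sigma / c)
print(ofv_scaled - ofv)  # equals -2*n*log(c): large c drives the OFV negative
```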
Hi Pavel,
Perhaps you should consider a search in alphabetical order, in which case it
is hard to beat A2PG. :)
www.a2pg.com
Ken
Kenneth G. Kowalski
President & CEO
A2PG - Ann Arbor Pharmacometrics Group, Inc.
110 Miller Ave., Garden Suite
Ann Arbor, MI 48104
Work: 734-274-8255
Cell: 248-207-
Hi Paul,
When you treat each occasion as a different patient you get
subject-by-occasion specific predictions of all the parameters in your model
(or at least for those parameters in which you include an IIV eta)...not
just clearance. Thus, it is not surprising that you might get a better fit
by
All,
I feel the need to clarify my response as well. I agree that the
systematic/mechanistic effects of pregnancy term on the PK should definitely
be explored first (get the fixed effects right before postulating additional
IOV random effects) and I leave that to the subject-matter experts. My
r
Svetlana,
Can you clarify your model a bit more in terms of what is a parameter and
what is a covariate? You indicated that the DV is change from baseline. If
so, why do you have TBAS in the model? Is DV = TEFF - TBAS? If so,
it seems that your model should be written as:
DEFF = (TSLP*CP)**TA
Hi Shirley,
What this suggests is that the distributions of the empirical Bayes'
predictions of the ETAs (which are used to generate the IPREDs) have some
departures from the normality assumptions (including zero mean with constant
variance omega^2) when you use the model in simulations. I sug
Hi Ayyappa,
Some comments and suggestions:
1) Standard diagnostics that suggest the model fits well do not say anything
about whether the parameter estimates you've obtained are reasonably accurate
or precisely estimated. I don't know if your model is over-parameterized or
not but an over-
Dear Pavel,
I certainly feel your pain but you have to be careful how you fix certain
elements in Omega to ensure that you have a valid positive definite
covariance matrix. The starting values in your $OMEGA block do not give
rise to a valid covariance matrix. Note in particular that the cova
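For a 2x2 block the constraint is that the covariance must satisfy |cov| < sqrt(var1*var2); more generally the block must be symmetric positive definite, which is quick to check. A minimal sketch with hypothetical values:

```python
import numpy as np

def is_valid_covariance(m, tol=1e-10):
    """Check that a candidate Omega block is symmetric positive definite."""
    m = np.asarray(m, dtype=float)
    return bool(np.allclose(m, m.T) and np.linalg.eigvalsh(m).min() > tol)

# Hypothetical values: cov = 0.5 exceeds sqrt(0.2 * 0.3) ~= 0.245,
# so the first candidate block is not a valid covariance matrix.
bad  = [[0.2, 0.5], [0.5, 0.3]]
good = [[0.2, 0.1], [0.1, 0.3]]
print(is_valid_covariance(bad), is_valid_covariance(good))
```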
Hi Matt/Nick/All,
It is my understanding that if analytical labs were to report the measured
concentrations below the LLOQ, then negative concentration values could be
reported from the standard curve predictions. Thus, reporting the
pre-first-dose PK observations and including them in the anal
Hi Jeroen,
I believe the feature that you describe for a simultaneous fit also applies to
the PPP&D sequential approach that Nick advocates (which I also like). The
framework of the PPP&D approach is to set it up the same as you would a
simultaneous model fit but you fix the PK parameters (P
hursday, February 21, 2013 3:48 AM
To: 'Perez Ruixo, Juan Jose'; Ken Kowalski; 'Elassaiss - Schaap, J
(Jeroen)'; 'Nick Holford'; 'nmusers'
Subject: RE: [NMusers] RE: Simulation setting in the presence of Shrinkage
in PK when doing PK-PD analysis
Hi All,
The
Dear Gavin,
This is most likely because most nonlinear regression programs invert the
Hessian (second derivative matrix of the model with respect to the
parameters) to obtain the covariance matrix. This corresponds to the R
matrix in NONMEM. However, the default method that NONMEM uses is a
s
Dear Yu,
Did you explore block Omega structures to investigate potential correlations
among the random effects? If you assumed a diagonal Omega structure where
the random effects are assumed to be independent when they are indeed
correlated this can inflate the between-subject variability in y
Hi Douglas,
My own thinking is that you should fit the largest omega structure that can
be supported by the data rather than just always assuming a diagonal omega
structure. This does not necessarily mean always fitting a full block omega
structure, as it can often lead to an ill-conditioned mode
reasonably estimated.
BW,
Joe
Joseph F Standing
MRC Fellow, UCL Institute of Child Health
Antimicrobial Pharmacist, Great Ormond Street Hospital
Tel: +44(0)207 905 2370
Mobile: +44(0)7970 572435
_
From: owner-nmus...@globomaxnm.com [owner-nmus...@globomaxnm.com] On Behalf Of
Ken Kowals
http://www.mail-archive.com/nmusers%40globomaxnm.com/msg03401.html. See also
http://holford.fmhs.auckland.ac.nz/docs/bootstrap-and-confidence-intervals.pdf
slides 24 to 31.
Best wishes,
Nick
On 1/10/2014 7:57 a.m., Ken Kowalski wrote:
Hi Jeroen,
I think we might be on the same page but I want
Hi All,
I agree with everything that Marc and Douglas have pointed out. I too do not
advise building the omega structure based on repeated likelihood ratio tests.
The approach I take is more akin to what Joe had suggested earlier using SAEM
to fit the full block omega structure and then lo
development with a fully parametric estimation method. I’m
intrigued…
Ken
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Mats Karlsson
Sent: Thursday, October 02, 2014 12:43 PM
To: Ken Kowalski; ma...@metruminst.org; 'Eleveld, DJ'
Hi All,
I have done it both ways (with and without including parameter uncertainty).
It is important to note that the resulting VPC intervals are degenerate when
you don’t take into account parameter uncertainty. That is, with infinite
sample size these intervals will collapse to the p
Dear Pavel,
Are you using MATRIX=R when using NONMEM or the default sandwich estimator for
the covariance matrix? Most nonlinear regression packages use the equivalent
of MATRIX=R in NONMEM. You might try MATRIX=R on the $COV step and then
compare to Monolix.
Ken
Sent from my iPhone
> On
Dear NMusers,
As many of you may know, Matt Hutmacher passed away on May 3, 2017,
unexpectedly at the age of 47. He is survived by his loving and supportive
wife, Carey, and their two children, James (10 yrs) and Ella (8 yrs). I
have known Matt for 22 years as his mentor, colleague, research/bus
Hi All,
I know what Bill is trying to say but it is not quite accurate the way he
states it.
A prediction interval makes inference on a statistic based on a future sample
such as a sample mean of a future set of data. In contrast, a confidence
interval makes inference on a parameter such
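The distinction shows up directly in interval widths; here is a sketch using the normal/z approximation for simplicity, with hypothetical numbers:

```python
import math

# 95% interval half-widths for normal data with sample SD s and size n.
n, s, z = 25, 2.0, 1.96
ci_half = z * s / math.sqrt(n)              # confidence interval: the mean
pi_half = z * s * math.sqrt(1.0 + 1.0 / n)  # prediction interval: a future obs
print(ci_half, pi_half)
```

The prediction interval is always wider because it must cover the residual variability of a new observation on top of the uncertainty in the estimated mean; as n grows, the confidence interval shrinks to zero while the prediction interval does not.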
Kowalski
Subject: RE: [NMusers] VPCs confidence intervals?
Hi Ken,
Thanks for that good clarification!
Bill
From: Ken Kowalski <kgkowalsk...@gmail.com>
Sent: Thursday, March 14, 2019 2:01 PM
To: 'Bill Denney' <wden...@humanpredictions.com>; '
Dear NMUsers,
On Thursday, October 24, 2019, I will be giving an ACoP10 post-meeting
workshop in Orlando, FL. This workshop will be a 3-hour didactic lecture
based on my recently published paper:
Kowalski, K.G. "Integration of Pharmacometric and Statistical Analyses Using
Clinical Trial Si
Hi Nyein,
I agree with Nick that it may be valid to simulate negative concentrations and
that the only reason that we don't observe negative concentrations is because
assay labs censor these values. However, for these negative concentrations to
be reasonable and attributed to assay variation,
Hi Ibtihel,
I think you are probably asking for covariance matrix of the parameter
estimates. This should automatically be outputted as the .cov file assuming
that the $COV step runs successfully. Note that since NONMEM minimizes a
function related to -2LL, the Hessian (R matrix) in NONMEM is
Hi Ibtihel,
See below for my comments.
Ken
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On
Behalf Of Hammami, Ibtihel /FR
Sent: Wednesday, March 24, 2021 5:47 AM
To: nmusers@globomaxnm.com
Subject: [NMusers] Statistical power of covariate inclusion in popPK m
Hi Bob,
Just a point of clarification. If the default sandwich estimator is used to
estimate the covariance matrix then the .coi file outputs the inverse of this
sandwich estimator, i.e., R(S^-1)R …correct? If so, and maybe this is just
semantics but I don’t think we would refer to R(S^-1)R
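A small numerical sketch of the algebra (R and S as the NONMEM documentation names them: R the Hessian, S the cross-product gradient matrix; the 2x2 values below are hypothetical, for illustration only):

```python
import numpy as np

R = np.array([[4.0, 1.0], [1.0, 3.0]])
S = np.array([[5.0, 0.5], [0.5, 2.0]])

Rinv = np.linalg.inv(R)
cov_sandwich = Rinv @ S @ Rinv       # sandwich covariance estimate R^-1 S R^-1
coi = np.linalg.inv(cov_sandwich)    # inverse covariance, as in the .coi file
print(coi)  # algebraically this is R @ inv(S) @ R, not R itself
```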
Hi Ayyappa,
I think the condition number was first proposed as a statistic to diagnose
multicollinearity in multiple linear regression analyses based on an
eigenvalue analysis of the X'X matrix. You can probably search the
statistical literature and multiple linear regression textbooks to find
va
= CN < 1000
High: 1000 <= CN < 10,000
Extreme: CN >= 10,000
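A minimal sketch of computing the condition number as the ratio of the largest to smallest eigenvalue of the correlation matrix of the parameter estimates; the matrix below is hypothetical, with one strong pairwise correlation driving near-collinearity:

```python
import numpy as np

def condition_number(corr):
    """Ratio of largest to smallest eigenvalue of a correlation matrix."""
    eig = np.linalg.eigvalsh(np.asarray(corr, dtype=float))
    return eig.max() / eig.min()

corr = [[1.00, 0.99, 0.10],
        [0.99, 1.00, 0.10],
        [0.10, 0.10, 1.00]]
cn = condition_number(corr)
print(cn)  # well above 100: the 0.99 correlation dominates the smallest eigenvalue
```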
Ken
-Original Message-
From: Ayyappa Chaturvedula [mailto:ayyapp...@gmail.com]
Sent: Tuesday, November 29, 2022 10:20 AM
To: Ken Kowalski
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] Condition num
pairwise correlations are extreme if CN is very
large.
From: Ayyappa Chaturvedula [mailto:ayyapp...@gmail.com]
Sent: Tuesday, November 29, 2022 11:07 AM
To: Ken Kowalski
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] Condition number
Hi Ken,
Thank you again. But, I have seen models with
Hi Kyun-Seop,
I would state things a little differently rather than say “devalue condition
number and multi-collinearity” we should treat CN as a diagnostic and rules
such as CN>1000 should NOT be used as a hard and fast rule to reject a model.
I agree with Jeroen that we should understand t
, November 29, 2022 7:04 PM
To: Ken Kowalski
Cc: Kyun-Seop Bae ; nmusers@globomaxnm.com; Jeroen
Elassaiss-Schaap (PD-value B.V.)
Subject: Re: [NMusers] Condition number
Hi Ken & Kyun-Seop,
I agree it should be taught, since it is prevalent in the industry, and should
be looked at as somethin
,
Ken
From: Matthew Fidler [mailto:matthew.fid...@gmail.com]
Sent: Tuesday, November 29, 2022 7:04 PM
To: Ken Kowalski
Cc: Kyun-Seop Bae ; nmusers@globomaxnm.com; Jeroen
Elassaiss-Schaap (PD-value
Hi Matt,
I’m pretty sure Stu Beal told me many years ago that NONMEM calculates the
eigenvalues from the correlation matrix. Maybe Bob Bauer can chime in here?
Ken
From: Matthew Fidler [mailto:matthew.fid...@gmail.com]
Sent: Tuesday, November 29, 2022 7:56 PM
To: Ken Kowalski
Cc: Kyun
lting, LLC
Email: kgkowalsk...@gmail.com
Cell:248-207-5082
-Original Message-
From: Bonate, Peter [mailto:peter.bon...@astellas.com]
Sent: Tuesday, November 29, 2022 8:27 PM
To: Leonid Gibiansky
Cc: Ken Kowalski ; Matthew Fidler
; Kyun-Seop Bae ;
nmusers@globomaxnm.com; Jeroen Elas
obomaxnm.com <mailto:owner-nmus...@globomaxnm.com> On Behalf
Of Bonate, Peter
Sent: Wednesday, 30 November 2022 2:27 PM
To: Leonid Gibiansky <lgibian...@quantpharm.com>
Cc: Ken Kowalski <kgkowalsk...@gmail.com>
rical instability while a CN based on a covariance matrix may
have more utility as a collinearity diagnostic.
Best,
Ken
-Original Message-
From: Bonate, Peter [mailto:peter.bon...@astellas.com]
Sent: Wednesday, November 30, 2022 10:13 AM
To: Ken Kowalski ; 'Leonid Gibiansky'
Cc:
en
-Original Message-
From: Bonate, Peter [mailto:peter.bon...@astellas.com]
Sent: Wednesday, November 30, 2022 7:52 PM
To: Ken Kowalski ; 'Leonid Gibiansky'
Cc: 'Matthew Fidler' ; 'Kyun-Seop Bae'
; nmusers@globomaxnm.com; 'Jeroen Elassaiss-Schaap
(PD-va
Hey Bob,
I get that NONMEM can encounter negative eigenvalues during the R matrix
decomposition and inversion step and if it does then the $COV step fails.
However, both Pete and I have encountered situations where the R matrix is
apparently positive definite since the $COV step runs but NON
Hi Bob,
Could it possibly be related to the S matrix and the default sandwich estimator
used in estimating the covariance and correlation matrices?
Ken
From: Ken Kowalski [mailto:kgkowalsk...@gmail.com]
Sent: Thursday, December 1, 2022 9:52 AM
To: 'Bauer, Robert' ; nmusers@glob