[NMusers] Calculating shrinkage when some etas are zero

2009-08-21 Thread Pyry Välitalo
Hi all,

I saw this snippet of information on PsN-general mailing list.

Kajsa Harling wrote in PsN-general:
I talked to the experts here about shrinkage. Apparently, sometimes an
individual's eta may be exactly 0 (no effect, placebo, you probably
understand this better than I do). These zeros should not be included in
the shrinkage calculation, but now they are (erroneously) in PsN.

This led me to wonder about the calculation of shrinkage. I decided to post
here on nmusers, because my question mainly relates to NONMEM. I could not
find previous discussions about this topic exactly.

As I understand it, if a parameter with BSV is not used by some individuals,
the etas for those individuals will be set to zero. An example would be a
dataset with both IV and oral dosing data. If an oral absorption rate constant
KA with BSV is estimated for these data, then all eta(KA) values for the IV
dosing group will be zero.

The shrinkage of etas is calculated as
1 - sd(etas)/omega,
where omega denotes the estimated standard deviation (the square root of the
corresponding OMEGA element).
If etas that are exactly zero must be removed from this equation, that would
imply that NONMEM estimates the omega based only on those individuals who
actually use the parameter in question, e.g. omega(KA) would be estimated
based only on the oral dosing group. Is this a correct interpretation of the
rationale for leaving out zero etas?

I guess including zero etas in the shrinkage calculation substantially
increases the shrinkage estimate, because the zero etas always reduce
sd(etas). As a practical example, suppose a dataset of 20 patients with oral
and 20 patients with IV administration. Suppose NONMEM estimates an omega of
0.4 for the BSV of KA, and that sd(etas) within the oral group is 0.3. Because
the 20 IV etas are exactly zero, the sum of squared etas is unchanged while
the number of etas doubles, so sd(etas) across all patients is 0.3/sqrt(2).
Thus, as far as I know, PsN would currently calculate a shrinkage of
1-(0.3/sqrt(2))/0.4=0.47.
Would it be more appropriate to manually calculate a shrinkage of
1-0.3/0.4=0.25 instead?
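A minimal Python sketch of the two calculations, using the hypothetical
numbers above (20 oral patients with eta SD 0.3, 20 IV patients with etas
fixed at zero, omega = 0.4 on the SD scale; the centering/scaling step is mine,
to make the sample SD exactly 0.3):

```python
import numpy as np

rng = np.random.default_rng(1)

omega_sd = 0.4                     # estimated omega, SD scale (hypothetical)

# Oral group: 20 informative etas, centered and scaled so the sample SD is exactly 0.3
eta_oral = rng.normal(size=20)
eta_oral = (eta_oral - eta_oral.mean()) / eta_oral.std() * 0.3

eta_iv = np.zeros(20)              # IV group: etas fixed at exactly zero

etas_all = np.concatenate([eta_oral, eta_iv])

shrink_all = 1 - etas_all.std() / omega_sd    # zero etas included
shrink_oral = 1 - eta_oral.std() / omega_sd   # zero etas excluded

print(round(shrink_all, 2), round(shrink_oral, 2))   # 0.47 0.25
```

Including the zero etas halves the variance across patients and inflates the
apparent shrinkage from 0.25 to 0.47, matching the arithmetic in the example.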

All comments much appreciated.

Kind regards,
Pyry



Kajsa Harling wrote:

 Dear Ethan,

 I have also been away for a while, thank you for your patience.

 I talked to the experts here about shrinkage. Apparently, sometimes an
 individual's eta may be exactly 0 (no effect, placebo, you probably
 understand this better than I do). These zeros should not be included in
 the shrinkage calculation, but now they are (erroneously) in PsN.

 Does this explain the discrepancy?

 Then, the heading shrinkage_wres is incorrect, it should say
 shrinkage_iwres (or eps) they say.

 Comments are fine as long as they do not have commas in them. But this
 is fixed in the latest release.

 Best regards,
 Kajsa





[NMusers] IOV with Mu modeling, using EM algorithms

2017-02-16 Thread Pyry Välitalo
Dear NMusers,

Based on the NONMEM inter-occasion variability (IOV) example control
stream, located at NONMEM_install_dir/examples/example7.ctl, it seems that
it is currently not possible to MU-parameterize IOV in NONMEM 7.3. The
relevant lines from the example7.ctl control stream are copied below:

MU_2=THETA(2)
CLB=DEXP(MU_2+ETA(2))
DCL1=DEXP(ETA(3))
DCL2=DEXP(ETA(4))
DCL3=DEXP(ETA(5))
DCL=DCL1
IF(TIME.GE.5.0) DCL=DCL2
IF(TIME.GE.10.0) DCL=DCL3
CL=CLB*DCL

My question is: Does this mean that the updating procedure for the IIV/IOV
parameters (omegas) loses its EM efficiency when IOV is present? If we had no
IOV, then I think NONMEM would calculate, for the update step,
THETA(2)=E(LOG(CLB))
OMEGA(2,2)=VAR(LOG(CLB))
where E() refers to computing the mean and VAR() to computing the variance.
These expressions are useful because there is no need to find the best
THETA(2) or OMEGA(2,2) values by numerical optimization; simply taking the
mean and variance saves computation time.

Now, with the IOV model, I think the updating could be accomplished with
THETA(2)=E(LOG(CLB))
OMEGA(2,2)=VAR(LOG(CLB))
OMEGA(3,3)=VAR(LOG(CL))-OMEGA(2,2)
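A small numeric sketch of the moment-based update proposed above (my own
illustration of the variance decomposition, not NONMEM internals; all numbers
are hypothetical and the IIV/IOV components are assumed independent on the
log scale):

```python
import numpy as np

rng = np.random.default_rng(0)
n_id, n_occ = 2000, 3

true_theta2 = np.log(5.0)   # hypothetical typical log clearance
true_iiv = 0.09             # hypothetical OMEGA(2,2), between-subject
true_iov = 0.04             # hypothetical OMEGA(3,3), between-occasion

eta2 = rng.normal(0, np.sqrt(true_iiv), size=(n_id, 1))         # one per subject
eta_occ = rng.normal(0, np.sqrt(true_iov), size=(n_id, n_occ))  # one per occasion

log_clb = true_theta2 + eta2   # subject-level log clearance
log_cl = log_clb + eta_occ     # occasion-level log clearance

# Moment-based "update step" as suggested in the post:
theta2_hat = log_clb.mean()               # THETA(2) = E(LOG(CLB))
omega22_hat = log_clb.var()               # OMEGA(2,2) = VAR(LOG(CLB))
omega33_hat = log_cl.var() - omega22_hat  # OMEGA(3,3) = VAR(LOG(CL)) - OMEGA(2,2)

print(theta2_hat, omega22_hat, omega33_hat)
```

With enough subjects the three moment estimates recover the generating values
(log 5, 0.09, 0.04), which is the appeal of an EM-style update: no numerical
optimization is needed for these parameters.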

However, I'm not sure what actually happens inside NONMEM.
-Is the updating done like this, or does the lack of MU referencing in IOV
force the estimation process to use gradient descent for estimation of
some/all parameters?
-If gradient descent is used, how dramatically does it reduce the speed of
the estimation process?
-If MU referencing is impossible in the presence of IOV, would it then be
possible to use the occasion variable OCC as the ID, and somehow convince
NONMEM not to reset the compartments when OCC changes? This way, MU
referencing could be used for the OCC variable, and IIV would simply be
ignored for that run. This would be only for exploratory runs, of course.

For this discussion, let's assume that
-Adequate data exist to estimate occasion-specific random effects.
-The IOV parameters are critical for the model to be able to describe the
data nicely (e.g. large IOV in absorption)


Looking forward to hearing your thoughts! Best wishes,
Pyry Välitalo
Postdoc, Leiden University, Division of Pharmacology
Senior Scientist, Orion Pharma, Drug Disposition and Pharmacometrics


Re: [NMusers] Observed (yaxis) vs Predicted (xaxis) Diagnostic Plot - Scientific basis.

2023-08-23 Thread Pyry Välitalo
The following R snippet may be relevant. It simulates 100 replicates of 10
observations from a lognormal distribution, and then compares the smoothing
curves from the "loess" and "mgcv::gam" functions to the theoretically
expected mean value. There is close agreement between the smoothing curves
and the analytically calculated mean value.
library(tidyverse)
with(list(omega=0.6),
 map_dfr(1:100, ~tibble(x=1:10, y=exp(rnorm(10, 0, omega)))) %>%
 mutate(theoretical=exp(omega^2/2)) %>%
 ggplot(aes(x,y))+geom_point()+
 geom_smooth(method="loess",col=3)+geom_smooth(method=mgcv::gam,col=4)+
 geom_line(aes(y=theoretical),col=2))

ps. The usual disclaimer, the opinions expressed in this message are mine
alone, and not necessarily those of my employer.

Best wishes,
Pyry Välitalo
PK Assessor at Finnish Medicines Agency

On Fri, 18 Aug 2023 at 10:59, Martin Bergstrand <
martin.bergstr...@pharmetheus.com> wrote:

> Dear Joga and all,
>
> Joga makes a valuable point that all pharmacometricians should be aware
> of. Standard methodology for regression assumes that the x-variable is
> without error (loess, linear regression etc.). Note that it is the same for
> NLME models i.e. we generally assume that our independent variables e.g.
> time, covariates etc. are without error.
>
> For DV vs. PRED plots it is common practice, even among those that do not
> know why, to plot PRED on the x-axis and DV on the y-axis. A greater
> problem with these plots is the commonly held expectation that for a "good
> model" a smooth or regression line should align with the line of unity.
> Though this seems intuitive it is a flawed assumption. This issue was
> clearly pointed out by Mats Karlsson and Rada Savic in their 2007 paper
> titled "Diagnosing Model Diagnostics". For simple well-behaved examples
> you will see an alignment around the line of unity for DV vs. PRED plots.
> However, there are several factors that contribute to an expected deviation
> from this expectation:
> (1) Censoring (e.g. censoring of observations < LLOQ)
>  - In this case DVs are capped at LLOQ but PRED values are not.  This
> makes it perfectly expected that there will be a deviation from alignment
> around the line of unity in the lower range.
> (2) Strong non-linearities
> - The more nonlinear the modelled system is, the greater the expected
> deviation from the line of unity. Especially in combination with
> significant ETA correlations.
> (3) High variability
> - Higher between/within-subject variability (e.g. IIV and RUV) that
> isn't normally distributed (e.g. exponential distributions) will result in
> an expected deviation from the line of unity. Note: this is a form of
> non-linearity, so it may fall under the above category.
> (4) Adaptive designs (e.g. TDM dosing)
> - Listed in the original paper by Karlsson & Savic but I have not been
> able to recreate an issue in this case.
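A minimal sketch of point (3), assuming lognormal residual error with a
hypothetical sigma of 0.5: the conditional mean of DV given PRED is
PRED*exp(sigma^2/2), so a smooth through DV vs. PRED is expected to sit above
the line of unity even for a correctly specified model:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.5                          # hypothetical SD of residual error, log scale

pred = rng.uniform(1, 10, 100_000)   # hypothetical population predictions
dv = pred * np.exp(rng.normal(0, sigma, pred.size))  # lognormal residual error

# Mean observation per unit prediction exceeds 1 by a factor exp(sigma^2/2)
ratio = (dv / pred).mean()
print(ratio, np.exp(sigma**2 / 2))   # both close to ~1.13
```

The systematic offset grows with sigma, so the deviation from unity here
reflects the error model, not model misspecification.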
>
> I am rather sure that many thousands of hours have been spent on modeling
> trying to correct for perceived model misspecifications that are not really
> there. This is why I recommend relying primarily on simulation-based model
> diagnostics (e.g. VPCs) and as far as possible account for censoring that
> affects the original dataset. As pointed out by Karlsson & Savic a
> simulation/re-estimation based approach can also be used to investigate the
> expected behavior for DV vs. PRED plots for a particular model and dataset
> (e.g. mirror plots in Xpose). Note that to my knowledge there is yet
> no automated way to handle censoring in this context (clearly doable if
> anyone wants to develop a nifty implementation of that).
>
> If we leave the DV vs. PRED plot case, there are many other instances
> where we use scatter plots where it is much less clear what can be
> considered the independent variable and yet other cases where the
> assumption that the x-variable is without error is violated in a way that
> makes the results hard to interpret. One instance of the latter is when
> exposure-response is studied by plotting observed PD response versus
> observed trough plasma concentrations. This is already a way too long email
> so I will not deep dive into that problem as well.
>
> Best regards,
>
>
> Martin Bergstrand, Ph.D.
>
> Principal Consultant
>
> Pharmetheus AB
>
> martin.bergstr...@pharmetheus.com
>
> www.pharmetheus.com
>
>
> On Thu, Aug 17, 2023 at 12:44 PM Gobburu, Joga 
> wrote:
>
>> Dear Friends – Observations versus population predicted is considered a
>> standard diagnostic plot in our field. I used to place observations on the
>> x-axis and predictions on the y-axis. Then I was pointed to a publication
>> from ISOP (
>> https://www.ncbi.nlm.nih.gov/pmc/articles/PMC532