Re: [NMusers] %RSE for IIV when expressed as %CV instead of variance

2024-05-11 Thread Leonid Gibiansky

For CV, look for "geometric CV" here:

https://en.wikipedia.org/wiki/Coefficient_of_variation

(s_ln^2 there corresponds to the NONMEM OMEGA).

Or you can use the simpler (but approximate) expression
CV% = 100*SQRT(OMEGA).

For RSE, I am not sure. NONMEM reports the OMEGA SE, which can be used to 
compute an OMEGA CI (confidence interval) that can then be translated into a 
CI for the CV (if needed) as described above.
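
A minimal R sketch of these conversions, with made-up OMEGA and SE values
(the numbers below are illustrative, not from any run):

omega    <- 0.09   # hypothetical OMEGA estimate (variance of ETA)
se.omega <- 0.02   # hypothetical SE of OMEGA reported by NONMEM

cv.geo    <- 100 * sqrt(exp(omega) - 1)   # geometric CV%
cv.approx <- 100 * sqrt(omega)            # approximate CV%

## 95% CI for OMEGA, translated into a CI for the geometric CV%
ci.omega <- omega + c(-1, 1) * 1.96 * se.omega
ci.cv    <- 100 * sqrt(exp(ci.omega) - 1)

## delta-method shortcut: the RSE on the SD/CV scale is roughly half the RSE of the variance
rse.cv <- 100 * (se.omega / omega) / 2

c(cv.geo = cv.geo, cv.approx = cv.approx, rse.cv = rse.cv)
ci.cv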


Leonid


On 5/11/2024 10:08 AM, Anita Moein wrote:

Dear All:

I have a question regarding reporting ETAs as %CV instead of variance.

In NONMEM the IIV estimate is reported as variance with associated RSE%.

How can I convert the IIV Estimate and RSE% to report it as CV%?

Thank you!

Best,
Anita




*Anita Moein*
Senior Scientist
Modeling and Simulation | Clinical Pharmacology | Genentech
Phone: (650) 866 7701 | Cell: (415) 254 7972






Re: [NMusers] LSODA error code -5

2024-04-15 Thread Leonid Gibiansky

Hi Yun,

LSODA messages are not very informative or specific. They just indicate that 
the model has numerical problems. You may try restarting with new initial 
estimates (e.g., divide all CL and V initial parameter values by 0.9) and/or 
change TOL to a lower value (8 or 9?).


Thank you,
Leonid



On 4/15/2024 1:32 PM, Wang, Yun (OCP/DPM) wrote:

Dear all:

   I wonder if anyone has experience with LSODA error code -5. I 
recently modified a dataset slightly by applying a conversion factor 
(~0.9) to the DV and reran the same popPK model, which used to run fine with 
the dataset before conversion. However, I encountered the following 
messages during the run. I would appreciate any tips. Thanks a lot.


Best wishes,

Yun





Re: [NMusers] ERROR -M2 vpc run

2024-03-23 Thread Leonid Gibiansky

This is what I found in the PDx-Pop manual; I think this is the problem:

"You can not do a predictive check (or a simulation in general) based on 
an estimation that required
the Laplacian method for the estimation (e.g. an analysis using YLO, 
YUP, etc.). You will get an

error like the following:
AN ERROR WAS FOUND IN THE CONTROL STATEMENTS.

 585 LAPLACIAN METHOD IS REQUIRED WHEN YLO, YUP, CTLO, OR CTUP IS USED.
fsubs did not get created by NM-TRAN. No NONMEM execution. "



On 3/23/2024 11:40 PM, Leonid Gibiansky wrote:

Hi Mohd,

Something else is going on; your code runs fine on my side, except that you 
need to define LLOQ and also correct an error in SD:

SD = SQRT(1 + THETA(4)*F**2) ; not **F

Why do you call it a VPC? Are you running it via PsN or a similar interface? 
Then the problem could be there, in the transformation of the estimation file 
into the simulation file.


Also, why is YLO set to log(LLOQ)? Your code does not use transform-both-sides, 
so why is the log used?


Leonid




---
$ERROR
   LLOQ=0.1
   SD = SQRT(1 + THETA(4)*F**2)
   YLO=LOG(LLOQ)
   IPRED=F
   W=SD
   IRES = DV-IPRED
   IWRES = IRES/W
   Y = F + SD*EPS(1)

$ESTIMATION METHOD=COND LAPLACIAN INTER NUMERICAL NOABORT SLOW NSIG=2
     SIGL=9 MAXEVAL= PRINT=1


On 3/23/2024 4:44 PM, Mohd Rahimi wrote:

Dear NONMEM user,

I tried to run a VPC for M2 BQL method by using the code as stated below:

;M2 - Likelihood assumes all values are censored at LLOQ ‘YLO’

   SD = SQRT(1 + THETA(4)**F**2)
   YLO=LOG(LLOQ)
   IPRED=F
   W=SD
   IRES = DV-IPRED
   IWRES = IRES/W
   Y = F + SD*EPS(1)

$ESTIMATION METHOD=COND LAPLACIAN INTER NUMERICAL NOABORT SLOW NSIG=2
             SIGL=9 MAXEVAL= PRINT=1

But I got this kind of error message from NONMEM

AN ERROR WAS FOUND IN THE CONTROL STATEMENTS.

   585  LAPLACIAN METHOD IS REQUIRED WHEN YLO, YUP, CTLO, OR CTUP IS USED.


Any thoughts on how to solve this?

Thank you

--
*Mohd Rahimi Muda*




Re: [NMusers] ERROR -M2 vpc run

2024-03-23 Thread Leonid Gibiansky

Hi Mohd,

Something else is going on; your code runs fine on my side, except that you 
need to define LLOQ and also correct an error in SD:

SD = SQRT(1 + THETA(4)*F**2) ; not **F

Why do you call it a VPC? Are you running it via PsN or a similar interface? 
Then the problem could be there, in the transformation of the estimation file 
into the simulation file.


Also, why is YLO set to log(LLOQ)? Your code does not use transform-both-sides, 
so why is the log used?


Leonid




---
$ERROR
  LLOQ=0.1
  SD = SQRT(1 + THETA(4)*F**2)
  YLO=LOG(LLOQ)
  IPRED=F
  W=SD
  IRES = DV-IPRED
  IWRES = IRES/W
  Y = F + SD*EPS(1)

$ESTIMATION METHOD=COND LAPLACIAN INTER NUMERICAL NOABORT SLOW NSIG=2
SIGL=9 MAXEVAL= PRINT=1


On 3/23/2024 4:44 PM, Mohd Rahimi wrote:

Dear NONMEM user,

I tried to run a VPC for M2 BQL method by using the code as stated below:

;M2 - Likelihood assumes all values are censored at LLOQ ‘YLO’

   SD = SQRT(1 + THETA(4)**F**2)
   YLO=LOG(LLOQ)
   IPRED=F
   W=SD
   IRES = DV-IPRED
   IWRES = IRES/W
   Y = F + SD*EPS(1)

$ESTIMATION METHOD=COND LAPLACIAN INTER NUMERICAL NOABORT SLOW NSIG=2
             SIGL=9 MAXEVAL= PRINT=1

But I got this kind of error message from NONMEM

AN ERROR WAS FOUND IN THE CONTROL STATEMENTS.

   585  LAPLACIAN METHOD IS REQUIRED WHEN YLO, YUP, CTLO, OR CTUP IS USED.

Any thoughts on how to solve this?

Thank you

--
*Mohd Rahimi Muda*




Re: [NMusers] Warning obtained while running SAEM

2024-03-05 Thread Leonid Gibiansky
I think we do not need to interpret any error model in terms of additive 
+ proportional. We can think of it as an independent type of variance 
dependence on IPRED. If we want to revert to the additive + proportional 
representation, we do not know a priori which is better: the assumption of 
independence or the assumption of perfect correlation; both are assumptions. 
I tried on a few examples to estimate the error-block correlation (a long 
time ago, so I do not remember the details), and the error model always 
wanted to be perfectly correlated. This makes sense: when you have a 
discrepancy between DV and IPRED, you decompose it as a sum of two numbers 
of the same sign, one from the additive part and one from the proportional 
part, weighted by their relative variances or SDs, rather than as a sum of 
two numbers of opposite signs.


I think this type of error model coding is one of the acceptable ways of 
doing it (until proven otherwise :) ).


Best regards,
Leonid


On 3/5/2024 5:55 PM, kgkowalsk...@gmail.com wrote:

Hi Luann,

Yes, both of your expressions for Y (corrected below) would be 
equivalent assuming ERR(1) and ERR(2) are independent, i.e., assuming a 
$SIGMA DIAGONAL(2).  The interesting thing is the interpretation of a 
model where W = THETA(1)*IPRED + THETA(2).  This turns out to be 
equivalent to assuming a $SIGMA BLOCK(2) with perfect correlation 
between ERR(1) and ERR(2).  That is not something I thought about until 
Leonid wrote out his expression for VAR2 = VAR1 + 2*COV(ERR1,ERR2), 
where COV(ERR1,ERR2) = THETA1*IPRED*THETA2 which means CORR(ERR1,ERR2)=1.


Ken

*From:* Luann Phillips 
*Sent:* Tuesday, March 5, 2024 5:38 PM
*To:* kgkowalsk...@gmail.com; 'Leonid Gibiansky'; 'T. Preijers'; 
nmusers@globomaxnm.com

*Subject:* RE: [NMusers] Warning obtained while running SAEM

Ken,

I have compared estimations of

Y= IPRED + IPRED*ERR(1) + ERR(2) to

W = SQRT(THETA(1)**2 * IPRED**2 + THETA(2)**2)

Y = IPRED + W*EPS(1)  with EPS(1) fixed to 1

Using the classical methods (FOCE, etc).

With estimation, the same results are achieved. However, as you 
mentioned, this induces an implied relationship between the ‘additive’ 
and ‘ccv’ portions of the error model, and the simulation results of the 
2 models can become quite different.


Unless I have reason to believe there is a relationship between the 2 
components of the additive + ccv error model, I now use the following code 
to obtain the exact IWRES, skipping the need to use a THETA as a 
random-variable scale and the need to fix the RV.


$ERROR
IPRED=F
W = SQRT(IPRED**2*SIGMA(1,1)+SIGMA(2,2))
IRES=DV-IPRED
IWRES = IRES/W
Y = F + F*EPS(1) + EPS(2)  ---or  Y = IPRED + IPRED*EPS(1) + EPS(2)

All EPS terms are estimated.
And whether there is a correlation between the 2 can easily be tested by 
using a $SIGMA BLOCK(2).


I didn’t read all of the details of this thread, but I have noticed that 
many NM users do not realize that SIGMA(x,x) can now be used to calculate 
the exact IWRES, instead of THETAs with a FIXED EPS.


Have a great day,

*Luann Phillips*

Distinguished Scientist, Pharmacometrics

Clinical Pharmacology and Pharmacometrics Solutions

Phone Number (716) 633-3463

Simulations Plus, Inc. (NASDAQ: SLP)

LinkedIn <https://www.linkedin.com/company/95827> | Twitter 
<https://twitter.com/SimulationsPlus> | YouTube 
<https://www.youtube.com/c/SimulationsPlusInc>


<https://www.simulations-plus.com/>

*From:* owner-nmus...@globomaxnm.com *On Behalf Of* kgkowalsk...@gmail.com

*Sent:* Tuesday, March 5, 2024 5:05 PM
*To:* 'Leonid Gibiansky'; 'T. Preijers'; nmusers@globomaxnm.com

*Subject:* RE: [NMusers] Warning obtained while running SAEM







Oops see correction to the $SIGMA BLOCK(2) labeling below:

-Original Message-
From: kgkowalsk...@gmail.com

Sent: Tuesday, March 5, 2024 4:56 PM
To: 'Leonid Gibiansky'; 'T. Preijers'; nmusers@globomaxnm.com

Subject: RE: [NMusers] Warning obtained while running SAEM

Hi Leonid,

I see.  Normally we think of the proportional and additive random 
effects as being independent of each other.  For example, we could code 
the residual error as:


THETA(1)*IPRED*ERR(1) + THETA(2)*ERR(2)

where

$SIGMA DIAGONAL(2)

1 FIXED

1 FIXED

which is equivalent to:

W = SQRT(THETA(1)**2 * IPRED**2 + THETA(2)**2)

Re: [NMusers] Warning obtained while running SAEM

2024-03-05 Thread Leonid Gibiansky

Hi Tim and Ken,

I do not think it is an error to code W = THETA(1) * IPRED + THETA(2).
This does not correspond to the variance

   VAR1 = (THETA(1)*IPRED)^2 + THETA(2)^2

but is just a different dependence of the variance

   VAR2 = (THETA(1)*IPRED + THETA(2))^2 = VAR1 + 2*THETA(1)*IPRED*THETA(2)

on IPRED. I think this is the default way of coding in Monolix (not 100% 
sure, as I do not use it).
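
A small numerical check of this identity in R (the THETA values and the IPRED
grid are made up for illustration):

a <- 0.2; b <- 1.5                         # hypothetical THETA(1) and THETA(2)
ipred <- seq(1, 700, by = 50)              # hypothetical range of predictions
var1  <- (a * ipred)^2 + b^2               # variance with two independent errors
var2  <- (a * ipred + b)^2                 # variance implied by the SD-sum coding
all.equal(var2, var1 + 2 * a * b * ipred)  # TRUE: VAR2 = VAR1 + 2*COV term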


Also, with regard to the previous post, we need to make sure that the 
variance of EPS(1) is fixed to 1 if we use this code.


Regards,
Leonid



On 3/5/2024 11:25 AM, kgkowalsk...@gmail.com wrote:

Hi Tim,

Are you getting a zero gradient for TH2 on the 0-th iteration?  If so, 
this may be indicative of a coding error.  You might check to make sure 
that THETA(2) is not being assigned to another parameter (say a fixed 
effect parameter) in addition to the standard deviation for the additive 
residual error.  Note that it is the variances of the proportional and 
additive residual errors that can be summed for two independent random 
effects and not the standard deviations.  So, adding the proportional 
and additive standard deviations in the specification of W:


   W = THETA(1) * IPRED + THETA(2)

is not correct.

If the gradient for TH2 is not zero on the 0-th iteration and you don’t 
find any coding errors involving say multiple assignments for TH2, but 
the gradient goes to zero after several iterations, this might mean you 
don’t have rich enough information to estimate the additive error 
component.  If so, try fixing TH2 to 0 and just estimate the 
proportional error component.  Make sure that you are not including any 
DVs (e.g., the predose sample at TIME=0) where IPRED=0 in the estimation 
since the proportional error is 0 and hence IWRES is undefined.


Regards,

Ken

Kenneth G. Kowalski

President

Kowalski PMetrics Consulting, LLC

Email: kgkowalsk...@gmail.com 

Cell:  248-207-5082

*From:*owner-nmus...@globomaxnm.com  *On 
Behalf Of *T. Preijers

*Sent:* Tuesday, March 5, 2024 8:09 AM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Warning obtained while running SAEM

Dear NMusers,

While running a simple 1-comp model in NONMEM using SAEM, we discovered 
a warning resulting from $ERROR. Using a SQRT(...**2) expression describing a 
mixed error model resulted in zero gradients for the additive error (see 
Example error model 1 below).


*/Example error model 1/*

$ERROR

   IPRED = F

   IRES = DV-IPRED

   W = SQRT(THETA(1)**2 * IPRED**2 + THETA(2)**2)

   IF (W.EQ.0) W = 1

   IWRES = IRES/W

Y = IPRED+W*ERR(1)

However, when simplifying the $ERROR code omitting the use of SQRT() and 
THETA(x)**2 (see Example error model 2 below), a successful SAEM run was 
obtained.


*/Example error model 2/*

$ERROR

   IPRED = F

   IRES = DV-IPRED

   W = THETA(1) * IPRED + THETA(2)

   IF (W.EQ.0) W = 1

   IWRES = IRES/W

   Y = IPRED + W*ERR(1)

Initial values for both THETAs were positive (TH1: 0.417, TH2: 0.545) 
and constrained with a lower boundary of 0. Moreover, the expected 
values for IPRED ranged from 1 to 700 mg/L.


Our interest lies in what has driven this error. Perhaps someone else 
has encountered this before?


Looking forward to receiving any answers/comments!

Kind regards,

*Dr. T. Preijers, PharmD, PhD*

/Hospital Pharmacist and Clinical Pharmacologist/

Hospital Pharmacy Erasmus MC Rotterdam The Netherlands

Rotterdam Clinical Pharmacometrics Group





Re: [NMusers] Warning obtained while running SAEM

2024-03-05 Thread Leonid Gibiansky
Most likely, the parameter was reduced to the internal bound. NONMEM 
places the lower bound at 1/100 (or 1/1000) of the initial value 
even if it is set to zero in the code. You may rerun with a lower initial 
value of THETA(2) or add NOSIGMABOUND to the estimation record.


Leonid

THETABOUNDTEST, OMEGABOUNDTEST, SIGMABOUNDTEST
  With NONMEM VI, the estimation step sometimes terminates with the
  message
  PARAMETER ESTIMATE IS NEAR ITS DEFAULT BOUNDARY.
  These options request that the "default boundary  test"  be  per-
  formed for THETA, OMEGA, and SIGMA, respectively.  THETABOUNDTEST
  may also be coded TBT or TBOUNDTEST; OMEGABOUNDTEST may  also  be
  coded  OBT or OBOUNDTEST; SIGMABOUNDTEST may also be coded SBT or
  SBOUNDTEST.  These options are the defaults.

 NOTHETABOUNDTEST, NOOMEGABOUNDTEST, NOSIGMABOUNDTEST
  Instructs NONMEM to omit the "default  boundary  test"  for  this
  type  of  variable, i.e., to behave like NONMEM V in this regard.
  Any option listed above may be  preceded  by  "NO".   The  THETA,
  OMEGA, and SIGMA choices are independent of each other.  E.g., it
  is possible to specify  NOOBT  (to  prevent  the  "default  OMEGA
  boundary test") and permit both the "default THETA boundary test"
  and "default SIGMA boundary test".


On 3/5/2024 8:08 AM, T. Preijers wrote:

Dear NMusers,

While running a simple 1-comp model in NONMEM using SAEM, we discovered 
a warning resulting from $ERROR. Using a SQRT(...**2) expression describing a 
mixed error model resulted in zero gradients for the additive error (see 
Example error model 1 below).



/*Example error model 1*/

$ERROR

   IPRED = F

   IRES = DV-IPRED

   W = SQRT(THETA(1)**2 * IPRED**2 + THETA(2)**2)

   IF (W.EQ.0) W = 1

   IWRES = IRES/W

Y = IPRED+W*ERR(1)



However, when simplifying the $ERROR code omitting the use of SQRT() and 
THETA(x)**2 (see Example error model 2 below), a successful SAEM run was 
obtained.



/*Example error model 2*/

$ERROR

   IPRED = F

   IRES = DV-IPRED

   W = THETA(1) * IPRED + THETA(2)

   IF (W.EQ.0) W = 1

   IWRES = IRES/W

   Y = IPRED + W*ERR(1)


Initial values for both THETAs were positive (TH1: 0.417, TH2: 0.545) 
and constrained with a lower boundary of 0. Moreover, the expected 
values for IPRED ranged from 1 to 700 mg/L.



Our interest lies in what has driven this error. Perhaps someone else 
has encountered this before?



Looking forward to receiving any answers/comments!

Kind regards,

*Dr. T. Preijers, PharmD, PhD*

/Hospital Pharmacist and Clinical Pharmacologist/

Hospital Pharmacy Erasmus MC Rotterdam The Netherlands

Rotterdam Clinical Pharmacometrics Group







Re: [NMusers] NONMEM Error on SAEM but not FOCEI

2024-01-26 Thread Leonid Gibiansky
type = "typo", something similar is in the control stream where Q is 
defined

Leonid


On 1/26/2024 10:16 AM, Leonid Gibiansky wrote:

most likely, some type in the code. Can you provide the control stream?

On 1/26/2024 10:01 AM, Elashkar, Omar I. wrote:

ERROR IN TRANS4 ROUTINE




Re: [NMusers] NONMEM Error on SAEM but not FOCEI

2024-01-26 Thread Leonid Gibiansky

most likely, some type in the code. Can you provide the control stream?

On 1/26/2024 10:01 AM, Elashkar, Omar I. wrote:

ERROR IN TRANS4 ROUTINE




Re: [NMusers] Simulate with ETAs from .phi and residual variability

2023-10-08 Thread Leonid Gibiansky

The only way I know to simulate with residual variability is to use

$SIM SIMONLY

If ETAs need to be fixed, they have to be in the data file

$INPUT ,PKETA1,

CL=THETA(1)*EXP(PKETA1+0*ETA(1))

--
another option is to use mrgsolve or similar R packages (after reading 
in estimated ETA values).

--

A mixed option is to simulate IPRED and then add noise in R code after 
reading in IPRED.
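
A minimal R sketch of that mixed option (the file name, column names, and
error-model SDs below are assumptions, not tied to any particular run):

library(data.table)
tab <- fread("sim_ipred.tab", skip = 1)   # hypothetical $TABLE output containing IPRED
sd.prop <- 0.2                            # hypothetical proportional-error SD
sd.add  <- 0.1                            # hypothetical additive-error SD
set.seed(220412)
tab[, DVSIM := IPRED + sqrt((sd.prop * IPRED)^2 + sd.add^2) * rnorm(.N)]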


Thank you
Leonid


On 10/8/2023 3:30 PM, Philip Harder Delff wrote:

Dear all,

I want to simulate subjects with ETAs estimated. I use the .phi file 
using a $ETAS statement like:


$ETAS FILE=file.phi FORMAT=s1pE15.8 TBLN=1

As far as I understand that has to be accompanied by a $ESTIMATION step 
that specifies FNLETA=2 like this:


$ESTIMATION MAXEVAL=0 NOABORT METHOD=1 INTERACTION FNLETA=2

$SIM ONLYSIM does not work with $ESTIMATION so instead I just do:

$SIMULATION  (220412)

This, however, means that in the following $ERROR we don't get 
ERR(1) and ERR(2) simulated, and so Y=IPRED.


Y=F+F*ERR(1)+ERR(2)

I need to be able to edit the phi file to only simulate a subset of the 
patients so I can't use msf based solutions. Can I somehow use $ETAS 
without $ESTIMATION? Or how else can I get it to simulate the residual 
variability?


Thank you,
Philip





Re: [NMusers] Specify full OMEGA matrix for simulation

2023-07-06 Thread Leonid Gibiansky

Looks like a bug.
A workaround is to use a very small diagonal value (1E-30).
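
If the control stream is generated programmatically anyway, a small R sketch
of emitting the workaround (hypothetical 3x3 matrix; zero diagonals are
replaced by 1E-30 before writing the lower triangle):

om <- matrix(0, 3, 3)
om[1, 1] <- 0.078
om[2, 2] <- 0.02                    # third diagonal left at zero on purpose
diag(om)[diag(om) == 0] <- 1e-30    # the workaround value
rows <- toupper(sapply(seq_len(nrow(om)),
                       function(i) paste(om[i, 1:i], collapse = " ")))
cat("$OMEGA BLOCK(3) FIX", rows, sep = "\n")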
Leonid

On 7/6/2023 6:01 AM, Philip Harder Delff wrote:

Hi all,

I am trying to specify a full OMEGA matrix for simulation purposes. I get 
issues because some diagonals are zero and some are positive. If I write


$OMEGA BLOCK(3) FIX
0.078
0 0.02
0 0 0

I get this error:
AN ERROR WAS FOUND ON LINE 34 AT THE APPROXIMATE POSITION NOTED:
   0 0 0

   224  A VARIANCE IS ZERO, BUT THE BLOCK IS NOT FIXED TO ZERO.

I know I could do:
$OMEGA BLOCK(2) FIX
0.078
0 0.02
$OMEGA FIX
0

However, that would be complicated to generate programmatically. Is there 
any way I can tell NONMEM to use a specified OMEGA matrix as-is for a simulation?


Thank you,
Philip





Re: [NMusers] BQL simulations

2023-06-19 Thread Leonid Gibiansky
For simulation, you can remove the BQL part and simulate from the model.
Simulated values below the LLOQ can be treated as BQLs if you, e.g., want to
compare the fraction of BQLs in the observed and simulated sets.
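
A rough R sketch of that comparison (the file names, column names, and LLOQ
are assumptions for illustration):

lloq <- 0.1                                                  # hypothetical LLOQ
obs  <- read.table("observed.tab",  header = TRUE, skip = 1) # hypothetical observed data with a BLQ flag
sim  <- read.table("simulated.tab", header = TRUE, skip = 1) # hypothetical simulation output with DV
frac.obs <- mean(obs$BLQ == 1)               # fraction of observed records that are BQL
frac.sim <- mean(sim$DV < lloq)              # simulated values below LLOQ treated as BQL
c(observed = frac.obs, simulated = frac.sim)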
Leonid

On Mon, Jun 19, 2023, 3:57 AM Hiba Sliem  wrote:

> Hello everyone
>
> I have a simple question for a simple bicompartmental model where the M3
> method for estimating BQLs is used. Because of that, my predictions are
> overly inflated at the tail end of the treatment period (BQL levels), which
> distorts my simulations.
>
> I used this line to get corrected predictions:
> IF(COMACT.EQ.1) PREDI=IPRED
>
> But that doesn't improve my simulated data at low concentrations. Is there
> a way around it, or should I use another method to handle my BQL values?
>
> Thank you in advance
>
>


Re: [NMusers] Problem with estimating sigma when using M3 method

2023-05-31 Thread Leonid Gibiansky
For the error messages, check that you have bounds on THETAs that prevent 
model parameters from becoming negative (if they should be positive, like CL). 
Check whether there are any places where you could "ATTEMPT TO COMPUTE 
BASE*POWER WITH BASE < 0". Otherwise, these error messages are OK if the 
final model looks good. For the pre-dose values, if the model can 
produce only zeros there (pre-first dose), then these points do not have 
any information value and should be excluded (probably harmless but 
useless). We should not compare the OFVs of models with different numbers of 
observations; such a comparison has no meaning.

Regards,
Leonid

On 5/31/2023 8:58 AM, Hiba Sliem wrote:



Thank you! I followed your advice by ignoring BQL values at first, and I managed 
to get my model to work with satisfying results.

I did choose not to exclude any BQL DVs in my final dataset, including those 
pre-first-dose samples, since removing them only slightly changes my parameter 
estimates while increasing the objective function. What's the usual approach 
when dealing with predose observations?

And finally, while my model seems to be a good fit, during its execution NONMEM 
displayed several error messages in the output window that weren't reported in 
the .res file, such as:

OCCURS DURING SEARCH FOR ETA AT INITIAL VALUE, ETA=0
PK SUBROUTINE: ERROR IN COMPUTATION
ATTEMPT TO COMPUTE BASE*POWER WITH BASE < 0
PRED EXIT CODE= 1

OCCURS DURING SEARCH FOR ETA AT INITIAL VALUE, ETA=0
ERROR In TRANS4 ROUTINE, CL (could also be V2 or Q) IS NEGATIVE

All these errors involved the first subject ID=1 exclusively

Should these messages be taken into consideration or can they be overlooked 
considering they weren't transcribed in the report file and I'm satisfied with 
my model?

Thank you again

regards


-Original Message-----
From: Leonid Gibiansky 
Sent: Tuesday, 30 May 2023 04:17
To: Hiba Sliem 
Cc: nmusers 
Subject: Re: [NMusers] Problem with estimating sigma when using M3 method

The code is incorrect, $ERROR should be

   $ERROR
   LOQ=0.1
   IPRED = F
   W1 = THETA(8)*IPRED
   W2 = THETA(9)
   W = SQRT(W1**2+W2**2)
   IRES = DV - IPRED
   IWRES = IRES/W
   DUM = (LOQ -IPRED)/W
   CUMD = PHI(DUM)
   IF(BLQ.EQ.0) THEN
   F_FLAG=0
   Y= IPRED +W1*ERR(1) + W2*ERR(2)
   ELSE
   F_FLAG=1
   Y=CUMD
   MDVRES = 1
   ENDIF


Maybe it makes sense to show the entire $PK and $EST blocks: easier to check. 
Which ADVAN do you use? If ADVAN6, switch to ADVAN13.

Use MATRIX=S on $COV step
Use NOABORT on $EST step

try to use NSIG=4 SIGL=12 on $EST with LAPLACIAN

For pre-dose samples, do you mean pre-first dose? Then those should be ignored, 
removed from the data set, or given EVID=2 MDV=1, in which case they will be 
ignored in estimation. After washout, if all samples are BQLs, I would ignore 
them in a similar way, at least initially.
For the washout setting, all versions are fine; you can use TIME=0 EVID=4 for 
the first dose after washout.

To check the code, I would first remove BQLs (use MDV=1 EVID=2 for those, to 
get IPRED), use FOCEI rather than LAPLACIAN, and make sure that the model fit 
is good. Then check that IPRED at the BQL points is small. If not, check 
whether those BQLs are reasonable or could be data errors. Then switch to 
LAPLACIAN with initial values of all parameters set at the final values of the 
previous FOCEI run. Make sure you use the INTERACTION option for all runs.

Good luck!
Leonid




On 5/29/2023 8:02 PM, Hiba Sliem wrote:

Hi

Thank you for your assistance. Unfortunately, I still haven't found a solution 
to my issue, and I keep running into one of these error messages depending on 
how my dataset is coded:

#PROGRAM TERMINATED BY OBJ
   ERROR IN NCONTR WITH INDIVIDUAL   6   ID= 6.00E+00
   NUMERICAL HESSIAN OF OBJ. FUNC. FOR COMPUTING CONDITIONAL ESTIMATE
   IS NON POSITIVE DEFINITE


#R MATRIX ALGORITHMICALLY SINGULAR
   AND ALGORITHMICALLY NON-POSITIVE-SEMIDEFINITE 0R MATRIX IS OUTPUT
0COVARIANCE STEP ABORTED


I finally managed to have both the estimation and covariance steps not fail 
with the following $ERROR code:
$ERROR
LOQ=0.1
IPRED = F
W1 = THETA(8)*IPRED
W2 = THETA(9)
IRES = DV - IPRED
IWRES = IRES/(W1 + W2)
DUM = (LOQ -IPRED)/(W1 + W2)
CUMD = PHI(DUM)
IF(BLQ.EQ.0) THEN
F_FLAG=0
Y= IPRED +W1*ERR(1) + W2*ERR(2)
ELSE
F_FLAG=1
Y=CUMD
MDVRES = 1
ENDIF

$THETA
(0.01, 0.38) ; [w1]
(0.01, 0.1) ; [w2]

$SIGMA
1 FIX ;[P] sigma(1,1)
 1 FIX ;[P] sigma(2,2)

However my results were accompanied by the following message:

MINIMIZATION SUCCESSFUL
   HOWEVER, PROBLEMS OCCURRED WITH THE MINIMIZATION.
   REGARD THE RESULTS OF THE ESTIMATION STEP CAREFULLY, AND ACCEPT THEM ONLY
   AFTER CHECKING THAT THE COVARIANCE STEP PRODUCES REASONABLE OUTPUT.


I think a part of the issue is the way I've been formatting my dataset, since I 
get different results depending on the way I've set it up, so I'd like to have 
your opinion on the best way to proceed in the following situations:
   >predose samples tha

Re: [NMusers] Problem with estimating sigma when using M3 method

2023-05-29 Thread Leonid Gibiansky
 1   ...
 
 100 200
 4
1 0  0.10  1
 0

Sorry if these all seem like obvious questions, but I've been struggling to get 
satisfying results over the last few days and I'd like to understand what I've 
been doing wrong.

Kind regards,

-Original Message-
From: Philip Harder Delff 
Sent: Friday, 26 May 2023 21:42
To: Leonid Gibiansky 
Cc: Hiba Sliem ; nmusers 
Subject: Re: [NMusers] Problem with estimating sigma when using M3 method


Hi Hiba,

I agree that often the issues should be found in the data rather than the 
model. I recommend checking the data with NMcheckData from the R package called 
NMdata. It scans for a long list of potential issues, some that will make 
Nonmem fail, some that won't. If it finds issues, they will be returned in a 
data.frame with reference to row numbers and ID's so you can easily identify 
the root cause. If you look at ?NMcheckData you may identify arguments you can 
specify to add to the list of checks the function can run.

Having said this, a data/model issue can also be that your data poorly supports 
estimation of parts of your model (practical identifiability).
NMcheckData won't help you identify such issues.

NMdata: https://philipdelff.github.io/NMdata/
NMcheckData manual:
https://philipdelff.github.io/NMdata/reference/NMcheckData.html

An example with a few arguments that activate additional checks:
res.checks <-
NMcheckData(mydata,covs="WEIGHTBL",cols.num="WEIGHT",col.usubjid="USUBJID")
Here, NMcheckData will (in addition to a bunch of other checks) see that WEIGHTBL 
exists, is numeric, non-NA, and unique within subjects, that WEIGHT exists and is 
numeric and non-NA, and that ID is unique against USUBJID and vice versa. (Obviously, 
Nonmem can't read USUBJID if it contains characters, but you could still keep 
it to the right in the dataset for reference). See the manual above for more 
options.

Best,
Philip

On 2023-05-26 11:06 AM, Leonid Gibiansky wrote:

Yes, SIGMA should be fixed to 1 (do not try anything else, it has to
be done correctly in the code first, and then we should worry about
how to make it work)

For combined error, expression is
W = SQRT(W1**2 + W2**2) (squares in both terms)

Do not worry about error 134, this is harmless, and you can fix it any
time after you get your model right. Add UNCONDITIONAL MATRIX=S to the
$COV step.

For PARAMETER ESTIMATE IS NEAR ITS BOUNDARY try to add
NOSIGMABOUNDTEST NOOMEGABOUNDTEST NOTHETABOUNDTEST to $est record

Most of the time, numerical difficulties come from the problems with
the data, so it makes sense to clean the data set first as much as
possible.

Leonid


On 5/26/2023 9:30 AM, Hiba Sliem wrote:

Hi

I already tried fixing the value of sigma to 1, the covariance step
isn't implemented when I do that.
If I try fixing it to 0.144 the minimization isn't successful.

I also tried a combined error model like this:
LOQ=0.1
IPRED = F
W1 = THETA(8)*IPRED
W2 = THETA(9)
W = SQRT(W1**1 + W2**2)
DEL = 0
IF(W.EQ.0) DEL = 1
IRES = DV - IPRED
IWRES = IRES/(W + DEL)
DUM = (LOQ -IPRED)/(W + DEL)
CUMD = PHI(DUM)
IF(BLQ.EQ.0) THEN
F_FLAG=0
Y= IPRED +W*ERR(1)
ELSE
F_FLAG=1
Y=CUMD
MDVRES = 1
ENDIF

In which case I get a PARAMETER ESTIMATE IS NEAR ITS BOUNDARY error
message. When trying to fix Sigma in the combined model, I get a
MINIMIZATION TERMINATED
  DUE TO ROUNDING ERRORS (ERROR=134) message.

My dataset has a lot of predose samples and washouts between
different periods, is it possible the issue comes from my dataset?

Regards

-Original Message-
From: Leonid Gibiansky 
Sent: Friday, 26 May 2023 14:51
To: Hiba Sliem ; nmusers@globomaxnm.com
Subject: Re: [NMusers] Problem with estimating sigma when using M3
method


you should fix

$SIGMA
1 FIX

as you are already estimating the SD using THETA(8).

Leonid

On 5/26/2023 4:57 AM, Hiba Sliem wrote:

Hello

I'm fairly new to NONMEM. I'm currently trying to model a phase 1
study with BLQ values. While the run was successful with no error
message, my residual error has an %RSE >70 and a confidence interval that
includes zero.

Here's my code:

$ERROR

LOQ=0.1

IPRED = F

SD = THETA(8)*IPRED

DEL = 0

IF(SD.EQ.0) DEL = 1

IRES = DV - IPRED

IWRES = IRES / (SD + DEL)

DUM = (LOQ -IPRED) / (SD + DEL)

CUMD = PHI(DUM) + DEL

IF(BLQ.EQ.0) THEN

F_FLAG=0

Y= IPRED +SD*ERR(1)

ELSE

F_FLAG=1

Y=CUMD

MDVRES = 1

ENDIF

$EST METHOD=1 INTERACTION LAPLACIAN PRINT=5 MAX= SIG=3  SLOW
NUMERICAL   MSFO=*.msf

$SIGMA

 0.38 ;[P] sigma(1,1) (estimated in a previous model)

Furthermore, when trying to fit this model to my phase 2 dataset,
covar

Re: [NMusers] Problem with estimating sigma when using M3 method

2023-05-26 Thread Leonid Gibiansky
Yes, SIGMA should be fixed to 1 (do not try anything else, it has to be 
done correctly in the code first, and then we should worry about how to 
make it work)


For combined error, expression is
W = SQRT(W1**2 + W2**2) (squares in both terms)

Do not worry about error 134, this is harmless, and you can fix it any 
time after you get your model right. Add UNCONDITIONAL MATRIX=S to the 
$COV step.


For PARAMETER ESTIMATE IS NEAR ITS BOUNDARY try to add
NOSIGMABOUNDTEST NOOMEGABOUNDTEST NOTHETABOUNDTEST
to $est record

Most of the time, numerical difficulties come from the problems with the 
data, so it makes sense to clean the data set first as much as possible.


Leonid


On 5/26/2023 9:30 AM, Hiba Sliem wrote:

Hi

I already tried fixing the value of sigma to 1, the covariance step isn't 
implemented when I do that.
If I try fixing it to 0.144 the minimization isn't successful.

I also tried a combined error model like this:
LOQ=0.1
IPRED = F
W1 = THETA(8)*IPRED
W2 = THETA(9)
W = SQRT(W1**1 + W2**2)
DEL = 0
IF(W.EQ.0) DEL = 1
IRES = DV - IPRED
IWRES = IRES/(W + DEL)
DUM = (LOQ -IPRED)/(W + DEL)
CUMD = PHI(DUM)
IF(BLQ.EQ.0) THEN
F_FLAG=0
Y= IPRED +W*ERR(1)
ELSE
F_FLAG=1
Y=CUMD
MDVRES = 1
ENDIF

In which case I get a PARAMETER ESTIMATE IS NEAR ITS BOUNDARY error message.
When trying to fix Sigma in the combined model, I get a MINIMIZATION TERMINATED
  DUE TO ROUNDING ERRORS (ERROR=134) message.

My dataset has a lot of predose samples and washouts between different periods, 
is it possible the issue comes from my dataset?

Regards

-Original Message-
From: Leonid Gibiansky 
Sent: Friday, 26 May 2023 14:51
To: Hiba Sliem ; nmusers@globomaxnm.com
Subject: Re: [NMusers] Problem with estimating sigma when using M3 method


you should fix

$SIGMA
1 FIX

as you are already estimating the SD using THETA(8).

Leonid

On 5/26/2023 4:57 AM, Hiba Sliem wrote:

Hello

I'm fairly new to NONMEM. I'm currently trying to model a phase 1
study with BLQ values. While the run was successful with no error message, my
residual error has an %RSE >70 and a confidence interval that includes
zero.

Here's my code:

$ERROR

LOQ=0.1

IPRED = F

SD = THETA(8)*IPRED

DEL = 0

IF(SD.EQ.0) DEL = 1

IRES = DV - IPRED

IWRES = IRES / (SD + DEL)

DUM = (LOQ -IPRED) / (SD + DEL)

CUMD = PHI(DUM) + DEL

IF(BLQ.EQ.0) THEN

F_FLAG=0

Y= IPRED +SD*ERR(1)

ELSE

F_FLAG=1

Y=CUMD

MDVRES = 1

ENDIF

   $EST METHOD=1 INTERACTION LAPLACIAN PRINT=5 MAX= SIG=3  SLOW
NUMERICAL   MSFO=*.msf

$SIGMA

0.38 ;[P] sigma(1,1) (estimated in a previous model)

Furthermore, when trying to fit this model to my phase 2 dataset,
covariance step fails when I implement it.

Any suggestions are welcome

Thank you





Re: [NMusers] Problem with estimating sigma when using M3 method

2023-05-26 Thread Leonid Gibiansky

you should fix

$SIGMA
1 FIX

as you are already estimating the SD using THETA(8).

Leonid

On 5/26/2023 4:57 AM, Hiba Sliem wrote:

Hello

I’m fairly new to NONMEM. I’m currently trying to model a phase 1 study 
with BLQ values. While the run was successful with no error message, my 
residual error has an %RSE >70 and a confidence interval that includes 
zero.


Here’s my code:

$ERROR

LOQ=0.1

IPRED = F

SD = THETA(8)*IPRED

DEL = 0

IF(SD.EQ.0) DEL = 1

IRES = DV - IPRED

IWRES = IRES / (SD + DEL)

DUM = (LOQ -IPRED) / (SD + DEL)

CUMD = PHI(DUM) + DEL

IF(BLQ.EQ.0) THEN

F_FLAG=0

Y= IPRED +SD*ERR(1)

ELSE

F_FLAG=1

Y=CUMD

MDVRES = 1

ENDIF

  $EST METHOD=1 INTERACTION LAPLACIAN PRINT=5 MAX= SIG=3  SLOW 
NUMERICAL   MSFO=*.msf


$SIGMA

   0.38 ;[P] sigma(1,1) (estimated in a previous model)

Furthermore, when trying to fit this model to my phase 2 dataset, 
covariance step fails when I implement it.


Any suggestions are welcome

Thank you





Re: [NMusers] Modeling for non-Normal distributions of individual parameters

2023-01-03 Thread Leonid Gibiansky

Hi Bob,

Thanks for the instructions; they are very helpful. From a practical 
standpoint, using gamma as you described is somewhat complicated, while 
using square-normal is trivial. Would it be a fair summary to say that:

- if the results of log-normal and square-normal are similar (or if log-normal 
is better), we can stay with log-normal;
- if square-normal is much better than log-normal, then we should use 
square-normal;
- the gamma version provides only minor or no improvement versus 
square-normal, and only when variances are large?


Thank you
Leonid


On 1/3/2023 11:49 AM, Bauer, Robert wrote:

Dear nmusers:

I am providing instructions and examples of how to model individual 
parameters with gamma distribution among the population in NONMEM.  A 
general method for modeling other distributions is also given.  This 
material is located at (start with the instruction file gamma_indpar.pdf):


https://nonmem.iconplc.com/nonmem/gamma_indpar 



Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R

ICON Early Phase

731 Arbor way, suite 100

Blue Bell, PA 19422

Office: (215) 616-6428

Mobile: (925) 286-0769

robert.ba...@iconplc.com 

www.iconplc.com 








Re: [NMusers] Off diagonals in $TABLE

2022-12-20 Thread Leonid Gibiansky

This should do it:

OM12 = OMEGA(1,2)
$TAB OM12

On 12/20/2022 1:55 PM, Paul Hutson wrote:
Is it possible to include the off-diagonal correlation of ETAs in the 
$TABLE output? If so, how is it done?  ETAR12 is not recognized.


I’d like to import into R for summary stats.

Thank you!
Paul

Paul R. Hutson, PharmD, BCOP

Professor (CHS)

UW School of Pharmacy

Director, UW Madison Transdisciplinary Center for Research in 
Psychoactive Substances


Associate Member, Paul P. Carbone Comprehensive Cancer Center

T: 608.263.2496

paul.hut...@wisc.edu





Re: [NMusers] Simulate with parameter uncertainty

2022-12-17 Thread Leonid Gibiansky
Is it only CL (then you can just use the SE and a normal distribution), or do 
you need predictions using the whole parameter matrix (then you need to use 
PRIORS; make sure to set TRUE=PRIOR on the simulation step)?

Leonid

Roughly (see the guide for the IVAR value):

$PRIOR TNPRI (PROBLEM 2)  IVAR=2 PLEV=0.999
$MSFI ../.../FILE.MSF ONLYREAD
$SUBROUTINES

$PK
...
$ERROR
...

$PROBLEM XXX, simulations
$INPUT
$DATA  REWIND IGNORE=C

$THETA
...

$OMEGA
..

$SIGMA
...

$SIMULATION (1334) (5778 UNIFORM) ONLYSIMULATION PARAFILE=ON RANMETHOD=P 
TRUE=PRIOR SUBPROBLEMS=3


$TAB FILE=...tab
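
For the simpler case mentioned above (uncertainty only in CL), a minimal R
sketch with made-up estimates (the typical CL, its SE, and the OMEGA are
hypothetical):

tvcl    <- 5.0    # hypothetical typical-value CL estimate
se.tvcl <- 0.4    # hypothetical SE of the typical CL
om.cl   <- 0.09   # hypothetical OMEGA (variance of ETA on CL)
nrep <- 200       # samples from the uncertainty distribution
nid  <- 50        # subjects per sample
set.seed(1334)
cl <- lapply(seq_len(nrep), function(i) {
  tv <- rnorm(1, tvcl, se.tvcl)          # typical CL drawn from its uncertainty
  tv * exp(rnorm(nid, 0, sqrt(om.cl)))   # individual CLs, log-normal around it
})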



On 12/17/2022 10:16 AM, Mark Sale wrote:

Hi,
  I'm pretty sure this is possible, and I think I even did it long ago, but I need 
to simulate a model with parameter uncertainty, i.e., sample the mean and variance 
from the prior distribution of the typical value of CL, then sample individual CLs 
from that mean/variance distribution. Can anyone help me with the code to do this?

Thanks
Mark


Mark Sale M.D.
Vice President
Integrated Drug Development
mark.s...@certara.com
Remote-Forestville CA
Office Hours 9 AM - 5 PM Eastern Time
+1 302-516-1684
www.certara.com












Re: [NMusers] Condition number

2022-11-29 Thread Leonid Gibiansky

from the manual:

Iteration -13 indicates that this line contains the condition 
number, lowest, highest Eigen values of the correlation matrix of the 
variances of the final parameters.
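
A small R illustration of that calculation, using a made-up 3x3 covariance
matrix of the estimates (convert it to the correlation matrix, take the
eigenvalues, and form the ratio):

covmat <- matrix(c(0.040, 0.010, 0.002,
                   0.010, 0.090, 0.015,
                   0.002, 0.015, 0.250), nrow = 3)  # hypothetical covariance of the estimates
cormat <- cov2cor(covmat)                           # correlation matrix of the estimates
ev     <- eigen(cormat)$values
c(lowest = min(ev), highest = max(ev), condition.number = max(ev) / min(ev))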




On 11/29/2022 7:59 PM, Ken Kowalski wrote:

Hi Matt,

I’m pretty sure Stu Beal told me many years ago that NONMEM calculates 
the eigenvalues from the correlation matrix.  Maybe Bob Bauer can chime 
in here?


Ken

*From:*Matthew Fidler [mailto:matthew.fid...@gmail.com]
*Sent:* Tuesday, November 29, 2022 7:56 PM
*To:* Ken Kowalski 
*Cc:* Kyun-Seop Bae ; nmusers@globomaxnm.com; 
Jeroen Elassaiss-Schaap (PD-value B.V.) 

*Subject:* Re: [NMusers] Condition number

Hi Ken,

I am unsure, since I don't have my NONMEM manual handy.

I based my understanding on reading about condition numbers in numerical 
analysis, which seemed to use the parameter estimates:


https://en.wikipedia.org/wiki/Condition_number 



If it uses the correlation matrix, it could be less sensitive.

Matt

On Tue, Nov 29, 2022 at 6:11 PM Ken Kowalski wrote:


Hi Matt,

Correct me if I’m wrong but I thought NONMEM calculates the
condition number based on the correlation matrix of the parameter
estimates so it is scaled based on the standard errors of the estimates.

Ken

*From:* Matthew Fidler [mailto:matthew.fid...@gmail.com]
*Sent:* Tuesday, November 29, 2022 7:04 PM
*To:* Ken Kowalski
*Cc:* Kyun-Seop Bae; nmusers@globomaxnm.com; Jeroen Elassaiss-Schaap (PD-value B.V.)
*Subject:* Re: [NMusers] Condition number

Hi Ken & Kyun-Seop,

I agree it should be taught, since it is prevalent in the industry,
and should be looked at as something to investigate further, but no
hard and fast rule should be applied to if the model is reasonable
and fit for purpose.  That should be done in conjunction with other
diagnostic plots.

One thing that has always bothered me about the condition number is
that it is calculated based on the final parameter estimates, but
not the scaled parameter estimates.  Truly the scaling is supposed
to help make the gradient on a comparable scale and fix many
numerical problems here.  Hence, if the scaling works as it is
supposed to,  small changes may not affect the colinearity as
strongly as the calculated condition number suggests.

This is mainly why I see it as a number to keep in mind instead of a
hard and fast rule.

Matt

On Tue, Nov 29, 2022 at 5:09 PM Ken Kowalski wrote:

Hi Kyun-Seop,

I would state things a little differently: rather than say
“devalue condition number and multi-collinearity”, we should
treat CN as a diagnostic, and rules such as CN>1000 should NOT be
used as hard and fast rules to reject a model.  I agree with
Jeroen that we should understand the implications of a high CN
and the impact multi-collinearity may have on the model
estimation, and that there are other diagnostics such as
correlations, variance inflation factors (VIF), standard errors,
CIs, etc. that can also help with our understanding of the
effects of multi-collinearity and its implications for model
development.

That being said, if you have a model with a high CN and the
model converges with realistic point estimates and reasonable
standard errors then it may still be reasonable to accept that
model.  However, in this setting I would probably still want to
re-run the model with different starting values and make sure it
converges to the same OFV and set of point estimates.

As the smallest eigenvalue goes to 0 and the CN goes to infinity
we end up with a singular Hessian matrix (R matrix) so we know
that at some point a high enough CN will result in convergence
and COV step failures.  Thus, you shouldn’t simply dismiss CN as
not having any diagnostic value, just don’t apply it in a rule
such as CN>1000 to blindly reject a model.  The CN>1000 rule
should only be used to call your attention to the potential for
an issue that warrants further investigation before accepting
the model or deciding how to alter the model to improve
stability in the estimation.

Best,

Ken

Kenneth G. Kowalski

Kowalski PMetrics Consulting, LLC

Email: kgkowalsk...@gmail.com 

Cell:    248-207-5082


Re: [EXTERNAL] [NMusers] Continuation of "What does "Second Error signal from LINV1PD: 129"

2022-09-16 Thread Leonid Gibiansky

Thank you, Bob!

The Ctrl-T command is not working in parallel mode (on my Windows 10 system), 
but it worked perfectly fine in single-processor mode.


Thank you
Leonid






On 9/16/2022 5:21 PM, Bauer, Robert wrote:

Leonid:
You can enter Ctrl-T to see the OFV output for each subject, and you can 
note which of the subjects is issuing the LINV1PD: 129 error. The error is 
issued first, then the subject number is shown.


Robert J. Bauer, Ph.D.
Senior Director
Pharmacometrics R
ICON Early Phase
731 Arbor way, suite 100
Blue Bell, PA 19422
Office: (215) 616-6428
Mobile: (925) 286-0769
robert.ba...@iconplc.com <mailto:robert.ba...@iconplc.com>
www.iconplc.com <http://www.iconplc.com>

-Original Message-
From: owner-nmus...@globomaxnm.com <mailto:owner-nmus...@globomaxnm.com> 
 On Behalf Of Leonid Gibiansky

Sent: Friday, September 16, 2022 12:57 PM
To: nmusers 
Subject: [EXTERNAL] [NMusers] Continuation of "What does "Second Error 
signal from LINV1PD: 129"


Hi Nick and Bob,

There was a thread in 2010 about LINV1PD: 129 error, see below.
Is it possible to get the ID(s) of the subject(s) with the problem or 
extract any other useful information to debug the issue? I am using 
ADVAN5 with the LAPLACIAN option (with PROTECT); are there any tricks that 
would help to avoid or debug the problem? The iterations move forward in 
spite of multiple error messages that continue to appear through the 
whole run, and the model provided the parameters and predictions, and 
even completed the covariance step, but it would be helpful to find out 
what causes the error.


Thank you
Leonid

===
Nick:
It means that a positive definiteness correction on the eta Hessian 
matrix of a particular individual at a particular iteration was not 
successful, but NONMEM goes ahead anyway. If it happened on an 
intermediate iteration, and the error signal was not continually 
repeated, then NONMEM managed to get through the numerical rough spot, 
and there is not likely a systematic issue to the problem.



Robert J. Bauer, Ph.D.
__
Sent: Friday, November 19, 2010 7:16 PM
To: nmusers
Subject: [NMusers] What does "Second Error signal from LINV1PD: 129"
mean?


Hi,
I'm using NM7.1.2
$EST MAX=9990 NOABORT NSIG=3 SIGL=9 PRINT=1 METHOD=CONDITIONAL LAPLACIAN 
SLOW $SUBR ADVAN6 TOL=6 Predictions are made using F_FLAG=0 and F_FLAG=1


I get this message I have never seen before. It happens with one 
particular model that is closely related to others that run Ok. The 
model with this error seems to run Ok and produces plausible parameters.

What does this message mean?

Second Error signal from LINV1PD: 129

Nick







[NMusers] Continuation of "What does "Second Error signal from LINV1PD: 129"

2022-09-16 Thread Leonid Gibiansky

Hi Nick and Bob,

There was a thread in 2010 about LINV1PD: 129 error, see below.
Is it possible to get the ID(s) of the subject(s) with the problem or 
extract any other useful information to debug the issue? I am using 
ADVAN5 with the LAPLACIAN option (with PROTECT); are there any tricks that 
would help to avoid or debug the problem? The iterations move forward in 
spite of multiple error messages that continue to appear through the 
whole run, and the model provided the parameters and predictions, and 
even completed the covariance step, but it would be helpful to find out 
what causes the error.


Thank you
Leonid

===
Nick:
It means that a positive definiteness correction on the eta Hessian
matrix of a particular individual at a particular iteration was not
successful, but NONMEM goes ahead anyway.  If it happened on an
intermediate iteration, and the error signal was not continually
repeated, then NONMEM managed to get through the numerical rough spot,
and there is not likely a systematic issue to the problem.


Robert J. Bauer, Ph.D.
__
Sent: Friday, November 19, 2010 7:16 PM
To: nmusers
Subject: [NMusers] What does "Second Error signal from LINV1PD: 129"
mean?


Hi,
I'm using NM7.1.2
$EST MAX=9990 NOABORT NSIG=3 SIGL=9 PRINT=1
METHOD=CONDITIONAL LAPLACIAN SLOW
$SUBR ADVAN6 TOL=6
Predictions are made using F_FLAG=0 and F_FLAG=1

I get this message I have never seen before. It happens with one
particular model that is closely related to others that run Ok. The
model with this error seems to run Ok and produces plausible parameters.
What does this message mean?

 Second Error signal from LINV1PD:  129

Nick



Re: [NMusers] flip-flop without absorption information?

2022-09-13 Thread Leonid Gibiansky
With flip-flop, we can always get 2 solutions (assuming a 1-cmpt model 
with absorption; the 2-cpt case is similar, but the expressions may differ):


ka1-CL-V1 and ka2-CL-V2 such that

ka1=CL/V2 and ka2=CL/V1

Note that CL is the same, so info on CL will not help to distinguish 
these cases.


One cannot just fix the volume, as it should have one of the values V1 
or V2, but one can select the "more mechanistic" value if other 
information (e.g., about similar compounds) is available, and push the 
solution to the right place by providing the bounds on the range of 
possible parameter estimates.


Note that with SC dose, we cannot estimate CL and V, we have apparent CL 
and apparent V (related to the underlying CLtrue and Vtrue as 
CL=CLtrue/F and V= Vtrue/F, where F is the absolute bioavailability of 
SC administration). When apparent CL and apparent V are compared with 
parameters for other compounds, F should be taken into account.


Thank you
Leonid




On 9/13/2022 10:54 AM, Bonate, Peter wrote:

In comment to Shan’s statement:

It's my understanding that this flip-flop phenomenon is fundamentally a 
mathematical problem -- that is, if we write down a PK model in its 
analytical form, it becomes rather easy to understand that swapping the 
values between ka and ke (CL/V) would lead to the same output.


This is not true.  The values do not swap out.  V will be different.

Consider a 1-compartment model with KA=0.1 per h, KEL=0.7 per h, and 
V1=125L.  Suppose a 250 mg dose is given.  This model has flip-flop 
kinetics.


Now assume we build a model with KA=0.7 per h, KEL=0.1 per h, and V2 
unknown, same dose. This model does not have flip-flop. Using the 
simulated data from model 1 as the observed data for model 2, we can fit 
model 2 and find the optimum value of V2.  In this case it is 875 L.  If 
you look at the profiles you will see that they are exactly the same.


So it’s not a matter of just changing the order of the exponents.
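
A quick R check of this worked example, using the analytic 1-compartment
oral-absorption solution (bioavailability assumed to be 1; the sampling grid
is illustrative):

dose <- 250
conc <- function(t, ka, kel, v) dose * ka / (v * (ka - kel)) * (exp(-kel * t) - exp(-ka * t))
t  <- seq(0.1, 48, by = 0.1)
c1 <- conc(t, ka = 0.1, kel = 0.7, v = 125)   # flip-flop parameterization
c2 <- conc(t, ka = 0.7, kel = 0.1, v = 875)   # conventional parameterization with refitted V
max(abs(c1 - c2))                             # essentially zero: the profiles are identical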

If you want to estimate the parameters of a flip-flop model you need a 
data without absorption – IV.  Or some other independent assessment of 
CL that does not depend on absorption.


*Peter Bonate, PhD*

Executive Director

Pharmacokinetics, Modeling, and Simulation (PKMS)

Clinical Pharmacology and Exploratory Development (CPED)

Astellas

1 Astellas Way

Northbrook, IL  60062

peter.bon...@astellas.com 

(224) 619-4901

Quote of the week:

“Dancing with the Stars” is not owned by Astellas.

*From:* owner-nmus...@globomaxnm.com  *On 
Behalf Of *Shan Pan

*Sent:* Tuesday, September 13, 2022 3:42 AM
*To:* Jakob Ribbing ; Niurys.CS 


*Cc:* nmusers 
*Subject:* Re: [NMusers] flip-flop without absorption information?

This is an interesting discussion. At the same time I can't get my head 
around the assumption of any covariate on a flip-flop phenomenon. In 
other words, even if there is no information on covariates 
this phenomenon could still exist.


It's my understanding that this flip-flop phenomenon is fundamentally a 
mathematical problem -- that is, if we write down a PK model in its 
analytical form, it becomes rather easy to understand that swapping the 
values between ka and ke (CL/V) would lead to the same output.


In the absence of data on drug absorption as in your case, I think the 
solution could lie in fixing volume of distribution based on any prior 
information, e.g. a reported value in the literature. Otherwise, try to 
fix it to a reasonable estimate and see what happens.


Hope it helps.

Kind regards,

Shan

On Tue, Sep 13, 2022 at 5:36 AM Jakob Ribbing 
mailto:jakob.ribb...@pharmetheus.com>> 
wrote:


Dear Niurys,

It would be down to distributional assumptions in that case.

For example if you have a very strong predictor (covariate) of
either elimination or absorption rate (but not both) - data could be
informative to discriminate between flip-flop or not.

Had your therapeutic been an IgG monoclonal antibody, albumin would have
been a predictor of the absolute CL that, with a larger number of
subjects, may allow discrimination (especially with a mix of both
healthy subjects and patients with a higher inflammation level and thereby
lower albumin -> higher CL).

On the other hand, for example body weight would not be helpful in
this regard.

Even if body weight would have an effect on CL and V, it would not
have a major impact on terminal elimination (and in addition one
could have a concern on body weight also affecting the absorption rate).

So you would need both the mechanistic knowledge on the covariate,
for your therapeutic peptide in the RA population, and it would need
to be a strong effect in sufficient number of subjects.

One such obvious covariate would be different routes of
administration, where nobody would question the mechanistic
knowledge that SC has slower absorption than IV :>)

In lieu of IV dosing this becomes a more 

Re: [NMusers] Problems when implementing M3 for BLQs

2022-08-17 Thread Leonid Gibiansky

Hi Roeland,

ERROR_TERM in Y and in DUM should differ: one should include EPS() while 
the other should use only the SD. F_FLAG should be set to zero outside the M3 
block. What is DRUG.EQ.1, is it the parent? There should be a separate M3 
part for the metabolite.


It would be easier to debug if you would provide the full code of the 
ERROR block


Thank you,
Leonid


On 8/16/2022 6:53 AM, Roeland Wasmann wrote:

Hi all,

I have a problem while trying to implement the m3 method in my model 
with a parent and metabolite. I could really use some help.


About 50% of the parent data is BLQ while the metabolite only has a 
couple of BLQs. When modeling them separately there is no issue but when 
I have them both in one model, I get the following error:


Recompiling certain components.
starting wait
ending wait
Exiting lpreddo
Compiling FSUBS
FSUBS.f90:1322:6:

   B56=PHI(DUM)

  1

Error: Unclassifiable statement at (1)
Building NONMEM Executable
gfortran: error: fsubs.o: No such file or directory
No nonmem execution.
It seems like Fortran compilation by NONMEM's nmfe script failed. Cannot
start NONMEM.
Go to the NM_run1 subdirectory and run psn.mod with NONMEM's nmfe script 
to diagnose the problem.


The NM_run1 folder does not provide any answers… The M3 part of the 
error block looks like this:


Y = IPRED + ERROR_TERM

DUM = (LLOQ-IPRED) / ERROR_TERM
CUMD  =  PHI(DUM)
IF (ICALL.NE.4.AND.BLQ.GT.0.AND.DRUG.EQ.1) THEN
F_FLAG = 1
Y = CUMD
MDVRES = 1
ENDIF

When I put the “DUM” part within the PHI (i.e., “CUMD = PHI(((LLOQ-IPRED) / ERROR_TERM))”), 
the error changes slightly:


Compiling FSUBS
FSUBS.f90:1324:6:

   B58=PHI(B56)

When I remove the “.AND.DRUG.EQ.1” from the IF, I get:

FSUBS.f90:1355:6:

   B61=PHI(DUM)

Please let me know what info I should provide to better diagnose this 
problem.


Thanks in advance for helping out!

Cheers Roeland

--

*Roeland Wasmann, PharmD, PhD*

Division of Clinical Pharmacology

Department of Medicine

University of Cape Town

K45 Old Main Building

Groote Schuur Hospital

Observatory, Cape Town

7925 South Africa

phone: +27 21 650 4861





[NMusers] Final count for unofficial survey: COVID at PAGE

2022-08-01 Thread Leonid Gibiansky

Here is the final update:

Total replied: 133
 33 positive, 100 negative/not tested, 24.8% positivity rate.

Among 59 who did not have prior infections:
19 positive 40 negative, 27.5% positivity rate.

Among 74 who did have prior infections:
14 positive 60 negative, 18.9% positivity rate.

Among 50 who did have prior infections in 2022:
2 positive 48 negative, 4% positivity rate.
 Both people with positive tests got infected in February 2022
(sorry, I made an error in my prior post about this group).

Among 24 who did have prior infections earlier than 2022:
12 positive 12 negative, 50% positivity rate.
 Not sure what it means; likely the effect of the small sample size.

Thank you to those who participated.

The document is/will be available for further input and more elaborate 
analysis of the results:


https://docs.google.com/spreadsheets/d/1faVr5vwt9KNyxdxJw9IXB6VZkeRFxg_QHad8-o1wfKw/edit?usp=sharing 



Leonid


On 7/24/2022 2:36 PM, Leonid Gibiansky wrote:

Hi All,
I planned to post a summary in a week, but I see that the stream of 
responses has dwindled, so we can have some summaries now. (BTW, if you know 
colleagues who do not use nmusers but attended PAGE and would be willing 
to participate in the survey, could you forward this email to them?)


Summary:

1. Total replied 123, 31 were positive for a positivity rate of 25.2%;

2. Among 57 subjects who had NO prior infection, 18 were positive for a 
total rate of 31.6%;


3. Among 66 subjects who had prior infection, 13 were positive for a 
total rate of  19.7%;


4. Among 44 subjects who had prior infection in 2022, 2 were positive 
for a total rate of 2.3%


So prior infection this year provides significant protection against 
transmission.


Apparently, Europe does not use paxlovid, as I was the only one (based 
in US) who had it. Also, nobody had rebound of infection without paxlovid.


Update on paxlovid in my case: I managed to pass COVID to 4 people, 
including myself, and all of us took paxlovid within 1/2 day of the 
positive test. Three of us who had Age >=60, got rebound infection (75% 
out of 4 people); for two, the second symptoms were worse than the first 
round. One person (younger guy < 30 years old) did not have a rebound. 
Assuming 2% rebound probability, we observed an event with the 
probability <0.001, looks like we were really lucky :)


Below is the link to the survey document

https://docs.google.com/spreadsheets/d/1faVr5vwt9KNyxdxJw9IXB6VZkeRFxg_QHad8-o1wfKw/edit?usp=sharing 



The link is for the google doc that everyone can edit. Those who attended 
PAGE but have not replied yet, could you put an X in the appropriate 
column(s):


Not tested;
Tested negative;
Tested positive;
Used paxlovid;
Had rebound.

Please do not put your name anywhere, this is completely anonymous. 
Everybody can see the results.


Please, do NOT reply to the list, unless the post is of general interest 
to the group. If needed, reply directly to me, let's not create extra spam.


Thank you
Leonid




[NMusers] Update for unofficial survey: COVID at PAGE

2022-07-24 Thread Leonid Gibiansky

Hi All,
I planned to put a summary in a week, but I see that the stream of 
responses has dwindled, so we can have some summaries now (BTW, if you know 
colleagues who do not use nmusers but attended PAGE and would be willing 
to participate in a survey, could you forward this email to them?).


Summary:

1. Total replied 123, 31 were positive for a positivity rate of 25.2%;

2. Among 57 subjects who had NO prior infection, 18 were positive for a 
total rate of 31.6%;


3. Among 66 subjects who had prior infection, 13 were positive for a 
total rate of  19.7%;


4. Among 44 subjects who had prior infection in 2022, 2 were positive 
for a total rate of 2.3%


So prior infection this year provides significant protection against 
transmission.


Apparently, Europe does not use paxlovid, as I was the only one (based 
in US) who had it. Also, nobody had rebound of infection without paxlovid.


Update on paxlovid in my case: I managed to pass COVID to 4 people, 
including myself, and all of us took paxlovid within 1/2 day of the 
positive test. Three of us who had Age >=60, got rebound infection (75% 
out of 4 people); for two, the second symptoms were worse than the first 
round. One person (younger guy < 30 years old) did not have a rebound. 
Assuming 2% rebound probability, we observed an event with the 
probability <0.001, looks like we were really lucky :)


Below is the link to the survey document

https://docs.google.com/spreadsheets/d/1faVr5vwt9KNyxdxJw9IXB6VZkeRFxg_QHad8-o1wfKw/edit?usp=sharing

The link is for the google doc that everyone can edit. Those who attended 
PAGE but have not replied yet, could you put an X in the appropriate 
column(s):


Not tested;
Tested negative;
Tested positive;
Used paxlovid;
Had rebound.

Please do not put your name anywhere, this is completely anonymous. 
Everybody can see the results.


Please, do NOT reply to the list, unless the post is of general interest 
to the group. If needed, reply directly to me, let's not create extra spam.


Thank you
Leonid



[NMusers] unofficial survey: COVID at PAGE

2022-07-17 Thread Leonid Gibiansky

Hi All,
I got back from PAGE with COVID, and I know several people who got the 
same. I suspect that there are many people who got it, but it would be 
interesting to put a number on it. Attached is the google doc that 
everyone can edit. Could you put an X in the appropriate column(s):


Not tested;
Tested negative
Tested positive.

Please do not put your name anywhere, this is completely anonymous.

To our colleagues from Pfizer: I took paxlovid, and had a virus rebound: 
tested positive and with worse symptoms approximately 2-3 days after 
stopping paxlovid. Out of 6 people whom I know who took paxlovid, 3 
(50%) had a rebound with the symptoms worse than the initial disease, so 
it should be much more common than the currently mentioned 1% rate. Just 
in case, I put


Used paxlovid
Had rebound

columns to the doc.
Here is the link:

https://docs.google.com/spreadsheets/d/1faVr5vwt9KNyxdxJw9IXB6VZkeRFxg_QHad8-o1wfKw/edit?usp=sharing

Everybody can see the results, but I will summarize them on August 1st 
(and 15th, if there will be any new entries in between).


Please, do NOT reply to the list. If needed, reply directly to me, let's 
not create extra spam.


Thank you
Leonid






Re: [NMusers] Time-varying input/flexibility to change input rate on the fly

2021-08-06 Thread Leonid Gibiansky

one can do it by hand: e.g., set F1=1 and then use

$DES
DADT(1) = -KA*A(1)
DADT(2) = FF1*A(1) + ...   ; FF1 = any function of time T, computed in $DES

will it do the trick?
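
For example, a minimal sketch along those lines (assuming a depot + central structure; FDEC and KDEC are hypothetical parameters describing an assumed exponential decline of the input fraction, and the non-absorbed part is simply lost):

$PK
 F1   = 1                     ; whole dose goes to the depot
 KA   = THETA(1)*EXP(ETA(1))
 CL   = THETA(2)*EXP(ETA(2))
 V    = THETA(3)*EXP(ETA(3))
 FDEC = THETA(4)              ; hypothetical extent of input loss
 KDEC = THETA(5)              ; hypothetical rate of input loss
 S2   = V
$DES
 FF1     = 1 - FDEC*(1 - EXP(-KDEC*T))   ; time-varying fraction reaching the central compartment
 DADT(1) = -KA*A(1)
 DADT(2) =  FF1*KA*A(1) - (CL/V)*A(2)

Because FF1 is evaluated inside $DES, where T changes continuously, the input modulation is applied during the integration rather than only at dose records.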

Leonid

On 8/6/2021 4:20 PM, Robin Michelet wrote:

Hi Bill,

Thank you for your quick answer. As far as I understand Nonmem's inner 
workings, bioavailability is only applied at the onset of dosing, and 
adding variability on it would not be able to capture a transient change 
in input. For example in the case of a patch, if it would detach partly 
during the dosing interval one would still need an input (i.e. 
infusion-style input in the depot) but it would just be lower than 
before. Changing F1 would in this case not do much right?


Kind regards,

Robin

Dr. ir. Robin Michelet
Senior scientist

Freie Universitaet Berlin
Institute of Pharmacy
Dept. of Clinical Pharmacy & Biochemistry
Kelchstr. 31
12169 Berlin
Germany
Phone:  + 49 30 838 50659
Fax:  + 49 30 838 4 50656
Email: robin.miche...@fu-berlin.de
www.clinical-pharmacy.eu
https://fair-flagellin.eu/

On 06-08-21 10:15 PM, Bill Denney wrote:

Hi Robin,

I don't think that I've seen an update.  That said, the need I had 
then was
for a very specific need for an unusual drug.  I've only seen this 
type of

issue once where it seemed to need time-dependent effects.  Generally,
effects similar-- but not identical-- to what I was experiencing at 
the time

are better-modeled with simpler systems.  For example, adsorption to
infusion sets can almost always be modeled as a decrease in 
bioavailability

and/or a lag time (it's not typically time-dependent behavior).

I would assume that loss of part of a tablet or detachment of a patch 
could

be simply modeled as random variability (or a fixed effect) on
bioavailability.  Random pump malfunction would depend on how it
malfunctioned, but I would be wary of trying to model random effects 
as this

more complex time-dependent bioavailability unless you had data on the
malfunction method-- in which case I would suggest putting it into the
dataset as a different dosing record.

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On 
Behalf

Of Robin Michelet
Sent: Friday, August 6, 2021 3:38 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Time-varying input/flexibility to change input rate on
the fly

Dear all,

I was wondering if any progress has been made on the topic raised 
originally

by Bill Denney in 2018:

https://www.mail-archive.com/nmusers@globomaxnm.com/msg06990.html

Are there any simpler ways in NM 7.5 to adapt input (e.g. infusion
rates) in $DES during the integration step without adapting the dataset
itself? I.e. to model the malfunctioning of an infusion pump (at random),
the loss of part of a tablet, or the detachment of a patch?

Thank you! I could not answer to the original topic which is why I just
linked to it.

--
Dr. ir. Robin Michelet
Senior scientist

Freie Universitaet Berlin
Institute of Pharmacy
Dept. of Clinical Pharmacy & Biochemistry Kelchstr. 31
12169 Berlin
Germany
Phone:  + 49 30 838 50659
Fax:  + 49 30 838 4 50656
Email: robin.miche...@fu-berlin.de
www.clinical-pharmacy.eu
https://fair-flagellin.eu/






Re: [NMusers] Help with ODEs for a time varying clearance

2021-07-22 Thread Leonid Gibiansky
PK block is executed only once per record so CL(TIME) is constant 
between records, while in the DES block T (time) changes continuously, 
thus implementing time dependence CL(T) exactly rather than 
approximately. The rest is fine.

Thanks
Leonid

On 7/22/2021 4:19 PM, Niurys.CS wrote:

Dear Leonid,
Thanks for your suggestion. I've never thought of that possibility.
However, I have two questions:

1-Why must I write those equations inside $DES?
2-Do you think the ODEs that I proposed are OK?

Thank you very much

MSc Niurys de Castro Suárez
Assistant Professor of Pharmacometrics
Assistant Research
Pharmacy Department
Institute of Pharmacy and Food,
University of Havana
Cuba

On 22/07/2021 16:04, "Leonid Gibiansky" <lgibian...@quantpharm.com> wrote:


Definition of CL should be moved inside the DES block; other than
that, looks fine:
; DES --
$DES
CL2_TIME = CL2*EXP(-KDES*T)
CL_TOTAL = CL2_TIME + CL1 ; total clearance
...

Thanks
Leonid


On 7/22/2021 3:30 PM, Niurys.CS wrote:

Dear nmusers,
I'm working on the pharmacokinetics of an anti-CD20 mAb; I
suspect the clearance of this mAb should be time dependent as
rituximab’s clearance does. I tried to model this behavior, but I’m
not sure if the ODEs are correct. Please can you help? I share 
part of the code.


$SUBROUTINE ADVAN13 TOL=9
$MODEL  COMP=(CENTRAL) COMP=(PERIPH1)
$PK
; STRUCTURAL PARAMETERS --
TVCL1   = THETA(2)  ; system-nonspecific clearance
TVV1   = THETA(3)
TVQ    = THETA(4)
TVV2   = THETA(5)
TVKDES = THETA(6)  ; rate constant of the specific clearance decay
TVCL2  = THETA(7)  ; time varying clearance at time zero
; MU_TRANSFORMATION --
MU_1   = LOG(TVCL1)
MU_2   = LOG(TVV1)
MU_3   = LOG(TVQ)
MU_4   = LOG(TVV2)
MU_5   = LOG(TVKDES)
MU_6   = LOG(TVCL2)
; INDIVIDUAL PARAMETERS --
CL1  = EXP(MU_1+ETA(1))
V1   = EXP(MU_2+ETA(2))
Q    = EXP(MU_3+ETA(3))
V2   = EXP(MU_4+ETA(4))
KDES = EXP(MU_5+ETA(5))
CL2   = EXP(MU_6+ETA(6))
S1  = V1
; INITIAL CONDITIONS --
A_0(1) = 0
A_0(2) = 0
CL2_TIME = CL2*EXP((-KDES)*(TIME))
CL_TOTAL = CL2_TIME + CL1 ; total clearance
; DES --
$DES
    CONC   = A(1)/V1
    DADT(1) =-(CL_TOTAL/V1)*A(1)-(Q/V1)*A(1)+(Q/V2)*A(2)
    DADT(2) = (Q/V1)*A(1)-(Q/V2)*A(2)
;$ERROR --
CONC1 = A(1)/V1
IPRED=-3
IF(CONC1.GT.0) IPRED=LOG(CONC1)
W = THETA(1)
Y =  IPRED+W*EPS(1)
IRES=DV-IPRED
IWRES=IRES/W

Thank you,
Niurys

MSc Niurys de Castro Suárez
Assistant Professor of Pharmacometrics
Assistant Research
Pharmacy Department
Institute of Pharmacy and Food,
University of Havana
Cuba





Re: [NMusers] Help with ODEs for a time varying clearance

2021-07-22 Thread Leonid Gibiansky
Definition of CL should be moved inside the DES block; other than that, 
looks fine:

; DES --
$DES
CL2_TIME = CL2*EXP(-KDES*T)
CL_TOTAL = CL2_TIME + CL1 ; total clearance
...

Thanks
Leonid


On 7/22/2021 3:30 PM, Niurys.CS wrote:

Dear nmusers,
I'm working on the pharmacokinetics of an anti-CD20 mAb; I suspect the 
clearance of this mAb should be time dependent as rituximab’s clearance 
does. I tried to model this behavior, but I’m not sure if the ODEs are 
correct. Please can you help? I share part of the code.


$SUBROUTINE ADVAN13 TOL=9
$MODEL  COMP=(CENTRAL) COMP=(PERIPH1)
$PK
; STRUCTURAL PARAMETERS --
TVCL1   = THETA(2)  ; system-nonspecific clearance
TVV1   = THETA(3)
TVQ    = THETA(4)
TVV2   = THETA(5)
TVKDES = THETA(6)  ; rate constant of the specific clearance decay
TVCL2  = THETA(7)  ; time varying clearance at time zero
; MU_TRANSFORMATION --
MU_1   = LOG(TVCL1)
MU_2   = LOG(TVV1)
MU_3   = LOG(TVQ)
MU_4   = LOG(TVV2)
MU_5   = LOG(TVKDES)
MU_6   = LOG(TVCL2)
; INDIVIDUAL PARAMETERS --
CL1  = EXP(MU_1+ETA(1))
V1   = EXP(MU_2+ETA(2))
Q    = EXP(MU_3+ETA(3))
V2   = EXP(MU_4+ETA(4))
KDES = EXP(MU_5+ETA(5))
CL2   = EXP(MU_6+ETA(6))
S1  = V1
; INITIAL CONDITIONS --
A_0(1) = 0
A_0(2) = 0
CL2_TIME = CL2*EXP((-KDES)*(TIME))
CL_TOTAL = CL2_TIME + CL1 ; total clearance
; DES --
$DES
   CONC   = A(1)/V1
   DADT(1) =-(CL_TOTAL/V1)*A(1)-(Q/V1)*A(1)+(Q/V2)*A(2)
   DADT(2) = (Q/V1)*A(1)-(Q/V2)*A(2)
;$ERROR --
CONC1 = A(1)/V1
IPRED=-3
IF(CONC1.GT.0) IPRED=LOG(CONC1)
W = THETA(1)
Y =  IPRED+W*EPS(1)
IRES=DV-IPRED
IWRES=IRES/W

Thank you,
Niurys

MSc Niurys de Castro Suárez
Assistant Professor of Pharmacometrics
Assistant Research
Pharmacy Department
Institute of Pharmacy and Food,
University of Havana
Cuba





Re: [NMusers] computation of exponential error model for RUV

2021-07-19 Thread Leonid Gibiansky

Yes, it does at the estimation step; I am not sure about the simulation step.
If you need a true exponential error, a log-transform of the DV and of the 
model predictions is needed.
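
A minimal sketch of the transform-both-sides coding (assuming the DV in the data set has already been log-transformed):

$ERROR
IPRED = -10
IF (F.GT.0) IPRED = LOG(F)
Y = IPRED + EPS(1)   ; additive on the log scale = exponential error on the original scale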

Leonid



On 7/19/2021 11:34 AM, Guidi Monia wrote:

Dear colleagues,

We would like to compare the NONMEM predictions with those obtained by a 
Bayesian TDM software for models describing the residual unexplained 
variability with exponential errors.


We need to know if NONMEM performs a first order Taylor expansion of the 
exponential error when data are fitted by the FOCE method:


Y=F*EXP(EPS(1)) -> Y= F*(1+EPS(1)).

Could someone help with this?

Thanks in advance

Monia

Monia Guidi, PhD

Pharmacometrician

Service of Clinical Pharmacology | University Hospital and University of 
Lausanne


Center of research and innovation in Clinical Pharmaceutical Sciences | 
University Hospital and University of Lausanne


BU17 01/193

CH-1011 Lausanne

email: monia.gu...@chuv.ch 

tel: +41 21 314 38 97

CHUV
centre hospitalier universitaire vaudois






Re: [NMusers] Assessment of elimination half life of mAb

2021-04-29 Thread Leonid Gibiansky
still, half-life of the linear part could be helpful in cases when 
non-linearity plays no significant role in elimination, so we tend to 
present it together with the washout time simulations.


Leonid



On 4/29/2021 12:35 PM, Justin Wilkins wrote:

Hi Bill, all,

I do much the same thing - when there's nonlinearity happening, I've found it to be 
effective to plot concentration-time curves by doses and regimens of interest and mark 
the times at which the (median?) clinically-defined threshold for "washout" has 
been reached in each case. Of course this starts getting unwieldy when there are lots of 
doses or regimens. A less attractive way would be to produce a lookup table.

Sounds like everyone's thinking along the same lines...

Justin


-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Bill Denney
Sent: Thursday, April 29, 2021 6:17 PM
To: Bonate, Peter ; Leonid Gibiansky 
; Niurys.CS 
Cc: nmusers@globomaxnm.com
Subject: RE: [NMusers] Assessment of elimination half life of mAb

Hi Pete,

I agree that it is hard to communicate.  I like the general idea of C90 you propose.  I tend 
to choose something in between your and Leonid's answer, when possible.  I target an answer 
of "when is the pharmacodynamic effect <5% of the maximum or therapeutic 
effect".  It does require more than just the PK, though.  And for the just PK answer, I 
agree with Leonid and you, targeting some smallish fraction of Cmax is often reasonable for 
similar communication.

What I find is that clinicians typically try to understand when the drug has washed 
out.  The answer that many have reasonably latched onto is that when 5 half-lives 
have passed, the drug is washed out.  That suggests that about 3% (2^-5) of the effect 
is generally agreed as being washed out.

To Niurys's question about a citation for this, I don't have one either.
It's just a rule-of-thumb that I have tended to use.

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Bonate, Peter
Sent: Thursday, April 29, 2021 12:01 PM
To: Leonid Gibiansky ; Niurys.CS 

Cc: nmusers@globomaxnm.com
Subject: RE: [NMusers] Assessment of elimination half life of mAb

I've never really been happy with this.  It's an unsatisfactory solution.
You have a nonlinear drug.  Let's assume you have an approved drug.  It's given 
at some fixed dose.  The clinician wants to know what is the drug's half-life 
so they can washout their patient and start them on some other therapy.  We go 
back to them and say, we can't give you a half-life because it's a nonlinear 
drug, but once the kinetics become linear the half-life is X hours.  That is a 
terrible answer.  Maybe we need to come up with a new term, call it C90, the 
time it takes for Cmax to decline by 90%.  That we can do.  We don't even need 
an analytical solution, we can eyeball it.  We could even get fancy and do it 
in a population model.  C90 - the time it takes for Cmax to decline 90% in 90% 
of patients.  Of course, for nonlinear drugs, C90 only holds for that dose. 
Change in dose results in a new C90.
Just a thought.

pete



Peter Bonate, PhD
Executive Director
Pharmacokinetics, Modeling, and Simulation (PKMS) Clinical Pharmacology and 
Exploratory Development (CPED) Astellas
1 Astellas Way, N3.158
Northbrook, IL  60062
peter.bon...@astellas.com
(224) 619-4901


It’s been a while since I’ve had something here, but here is a Dad joke.

Question:  Do you know why the math book was sad?
Answer:  Because it had so many problems


-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Leonid Gibiansky
Sent: Thursday, April 29, 2021 9:54 AM
To: Niurys.CS 
Cc: nmusers@globomaxnm.com
Subject: Re: [NMusers] Assessment of elimination half life of mAb

I am not aware of any papers specifically addressing the half-life issue, but 
there are tons of original papers and tutorials on TMDD; just search the web. 
Thanks, Leonid

On 4/29/2021 9:48 AM, Niurys.CS wrote:

Dear Leonid,

Many thanks for clearing up my doubt. Can you suggest me any paper to
go into this topic in any depth.
Best,
Niurys

On 28/04/2021 19:34, "Leonid Gibiansky" <lgibian...@quantpharm.com> wrote:

 There is no such thing as half-life of elimination for the nonlinear
 drug. But one can compute something like half-life:

 1. Half-life of the linear part (defined by CL, V1, V2, Q): this
 defines the  half-life at high doses/high concentrations when
 nonlinear elimination is saturated.

 2. Washout time: for the linear drug, 5 half-lives can be used to
 define washout time. During this time, concentrations drop
 approximately 2^5=32 times. So one can simulate the desired dosing
 (single dose or steady state), find the time from Cmax to Cmax/32
 and call it washout time (or time to Cmax/64 to be conservative)

 Thanks
 Leonid


 On 4/28/2021 5:17 PM, Niurys.CS wrote:

 Dear all
 I need s

Re: [NMusers] Assessment of elimination half life of mAb

2021-04-29 Thread Leonid Gibiansky
I am not aware of any papers specifically addressing the half-life 
issue, but there are tons of original papers and tutorials on TMDD; just 
search the web.

Thanks
Leonid

On 4/29/2021 9:48 AM, Niurys.CS wrote:

Dear Leonid,

Many thanks for clearing up my doubt. Can you suggest me any paper to go 
into this topic in any depth.

Best,
Niurys

On 28/04/2021 19:34, "Leonid Gibiansky" <lgibian...@quantpharm.com> wrote:


There is no such thing as half-life of elimination for the nonlinear
drug. But one can compute something like half-life:

1. Half-life of the linear part (defined by CL, V1, V2, Q): this
defines the  half-life at high doses/high concentrations when
nonlinear elimination is saturated.

2. Washout time: for the linear drug, 5 half-lives can be used to
define washout time. During this time, concentrations drop
approximately 2^5=32 times. So one can simulate the desired dosing
(single dose or steady state), find the time from Cmax to Cmax/32
and call it washout time (or time to Cmax/64 to be conservative)

Thanks
Leonid


On 4/28/2021 5:17 PM, Niurys.CS wrote:

Dear all
I need some help to assess the elimination half life of a
monoclonal antibody.
The model that describes the data is a QSS approximation of TMDD
with Rmax constant. The model includes two binding processes of
mAb to its target: in the central and peripheral compartments.
Is there any specific equation to calculate lambda z and the
elimination half-life for each of the TMDD approximations?
Thanks
Niurys





Re: [NMusers] Assessment of elimination half life of mAb

2021-04-28 Thread Leonid Gibiansky
There is no such thing as half-life of elimination for the nonlinear 
drug. But one can compute something like half-life:


1. Half-life of the linear part (defined by CL, V1, V2, Q): this defines 
the  half-life at high doses/high concentrations when nonlinear 
elimination is saturated.


2. Washout time: for the linear drug, 5 half-lives can be used to define 
washout time. During this time, concentrations drop approximately 2^5=32 
times. So one can simulate the desired dosing (single dose or steady 
state), find the time from Cmax to Cmax/32 and call it washout time (or 
time to Cmax/64 to be conservative)
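
For point 1, the terminal half-life of the linear part can be computed from the usual two-compartment micro-constants, e.g. added to $PK (a sketch; THALF is a hypothetical output variable):

K10   = CL/V1
K12   = Q/V1
K21   = Q/V2
BETA  = 0.5*(K10 + K12 + K21 - SQRT((K10 + K12 + K21)**2 - 4*K10*K21))
THALF = LOG(2)/BETA   ; half-life of the terminal (linear) phase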


Thanks
Leonid


On 4/28/2021 5:17 PM, Niurys.CS wrote:

Dear all
I need some help to assess the elimination half life of a monoclonal 
antibody.
The model that describes the data is a QSS approximation of TMDD with 
Rmax constant. The model includes two binding processes of mAb to its 
target: in the central and peripheral compartments.
Is there any specific equation to calculate lambda z and the elimination 
half-life for each of the TMDD approximations?

Thanks
Niurys





Re: [NMusers] Two way crossover

2021-02-12 Thread Leonid Gibiansky
it is hard to check without seeing the data. Could you post sample data 
for 1 subject?

Leonid

On 2/11/2021 11:51 PM, Andre Jackson wrote:
I have the following code for the zero-order followed by first-order 
delayed absorption of a drug with complex absorption.  This code is for 
a two-way crossover study. The simulation runs okay except that the 
OCC=2 data results show that OCC2 is not getting sufficient drug into 
compartment 1.


The code is below.   Has anyone experienced this during a simulation 
  for a drug with complex absorption and if so how was it resolved?


$INPUT ID TIME DV CMT AMT OCC SEQ TRT  EVID MDV RATE

$SUB ADVAN 5 TRANS1

$MODEL

COMP=(1);DOSE1

COMP=(2);DOSE2

COMP=(CENTRAL,DEFOBS);CENTRAL,DEFDOSE)

$PK

     ;IOV INTRAOCCASSION

     ;IIV INTERSUBJECT

 ;TRT=TREATMENT (TEST OR REFERENCE)

    ;

    ;2)PARAMETER KAT1 KAR1 FAST ZERO ORDER MG/H

    ; KAT2 KAR2 SLOW FIRST ORDER MG/H

    ;

     OCC1=0

     OCC2=0

 ;;

     ;3)SEQUENCE 1 TREATMENTS 1-2

     ;SEQ1 3,7,8,11,13,14,18,19,20,22,21,23

     ;SEQ2 1,2,4,5,6,9,10,12,15,16,17,24

     ;;

     ;   OCC1   OCC2

     ; SEQ1    1  2

 ; SEQ2    2  1

 ;;;

 IF (TRT.EQ.1.AND.SEQ.EQ.1)OCC1=1

     IF (TRT.EQ.2.AND.SEQ.EQ.1)OCC2=1

     IF (TRT.EQ.2.AND.SEQ.EQ.2)OCC1=1

     IF (TRT.EQ.1.AND.SEQ.EQ.2)OCC2=1

 IIV1=ETA(1)

     IIV2=ETA(2)

    IOV=ETA(3)*OCC1 +ETA(4)*OCC2

 KAT1=THETA(1)*EXP(IIV1+IOV)

 KAR1=THETA(2)*EXP(IIV1+IOV)

     KAT2=THETA(3)*EXP(IIV2+IOV)

     KAR2=THETA(4)*EXP(IIV2+IOV)

     K13=(KAT1*OCC1 + KAR1*OCC2)

 K23=(KAT2*OCC1+KAR2*OCC2)

     ;

     ;4)PARAMETER F ASSUMING THAT F DOES NOT CHANGE FOR PERIODS

     ;LIKE KA

     ;

     ;NO IOV ON LOGITT

 LOGITT=THETA(5)

     LOGITR=THETA(6)

     IIV5=ETA(5)

     TVF1T=1/(1+EXP(-LOGITT))

     TVF1R=1/(1+EXP(-LOGITR))

     F1=TVF1T *EXP(IIV5)*OCC1+ TVF1R *EXP(IIV5)*OCC2

     F2=1-F1

     ;

     ;7)PARAMETER DURATION FOR COMP1 F1K0*AMT

     ;

     TVDT=THETA(7)

     TVDR=THETA(8)

     IIV6=ETA(6)

 D1=TVDT*EXP(IIV6+IOV)*OCC1+TVDR*EXP(IIV6+IOV)*OCC2

     ;

     ;8)PARAMETER LAG ON COMP2

     ;

     TVLAGT=THETA(9)

     TVLAGR=THETA(10)

     IIV7=ETA(7)

     TLAG2=TVLAGT*EXP(IIV7+IOV)*OCC1+TVLAGR*EXP(IIV7+IOV)*OCC2

     ALAG2=(D1+TLAG2)    ;TOTAL LAG TIME ON COMP 2

     ;

     ;9)PARAMETER VOLUME

     ;

     TVVOL=THETA(11)

     IIV8=ETA(8)

     V3=TVVOL*EXP(IIV8+IOV)

     ;

     ;10)PARAMETER CLEARANCE

     ;

     TVCL=THETA(12)

     IIV9=ETA(9)

     CL=TVCL*EXP(IIV9+IOV)

     KE=CL/V3

 S3=V3/10

 K30=CL/V3    ;ELIMINATION FROM CENTRAL

PARAMETERS FROM PUBLICATION;;

;;KA;;

$THETA 1.11 FIX ;1)K01R FAST

$THETA 1.11 FIX  ;2)K01T FAST

$THETA 0.40 FIX  ;3)KAT2 SLOW

$THETA 0.40 FIX  ;4)KAR2 SLOW

F1;;;

$THETA -0.8 FIX    ;5)LOGIT F1T

$THETA -0.2 FIX    ;5)LOGIT F1R

;DURATION

$THETA 0.9 FIX  ;6)D1

$THETA 0.05 FIX  ;7)D1

LAG COMP 2;;

$THETA 2.89 FIX  ;8)LAG2 HR

$THETA 1.00 FIX  ;8)LAG2 HR

VOL;;L;;

$THETA 1827 FIX  ;9)V3 L

;;CL;;;L/HR

$THETA  564 FIX  ;10)CL L/HR

;IIV AND IOV;

$OMEGA BLOCK(2)

0.01   FIX ;1)IIV1 K01 SEQ1 3% CV

0.01 0.01 ;2)IIV2 K01 SEQ2

$OMEGA 0.01 FIX   ;3)IOV OCC1

$OMEGA 0.01 FIX  ;4)IOV OCC2

PARAMETERS;;;

$OMEGA 0.01 FIX    ;5)F1

$OMEGA 0.01 FIX ;6) DURATION COMP1

$OMEGA 0.01 FIX ;7) LAG 21

$OMEGA 0.01 FIX ;8) VOLUME

$OMEGA 0.010 FIX ;9) CLEAR

$ERROR

CP=A(3)/S3

Y=CP +CP*ERR(1) + ERR(2)

COUNT=IREP

IF (ICALL.EQ.4) THEN

DOWHILE (Y.LT.0.25)

    CALL SIMEPS (EPS)

Y=CP+CP*ERR(1) + ERR(2)

ENDDO

ENDIF

$SIGMA  0.01  

Re: [NMusers] Variability in Dosing Rate (and amount)

2020-12-15 Thread Leonid Gibiansky

not sure the code is correct for the 3-day case.
As written, it assumes that FDAY1=0.5, FDAY2=1/4, FDAY3=1/4 (on average)

Possible way to code this type of fractions is

FF2=THETA()*EXP(ETA())
FF3=THETA()*EXP(ETA())

F1= 1/(1+FF2+FF3)
F2= FF2/(1+FF2+FF3)
F3= FF3/(1+FF2+FF3)

Leonid




On 12/15/2020 11:31 AM, Bill Denney wrote:

Hi Paul,

Martin’s ideas are great ones.  My first thought on the “clever coding” 
would be to treat it like bioavailability.  You should be sure that you 
split it between days rather than estimate it completely separately 
between days.  I would think of doing it in general like:


; Fraction of chow consumed on the first day

FDAY1 = 1/(1+EXP(-ETA(1)))

; Fraction of chow consumed on the second and third days if there are 
only two days of dosing


IF (NDAYS.EQ.2) THEN

   FDAY2=1-FDAY1

   FDAY3=0

ENDIF

; Fraction of chow consumed on the third day

IF (NDAYS.EQ.3) THEN

   FDAY2=(1-FDAY1)*(1/(1+EXP(ETA(2))))

   FDAY3=1-FDAY1-FDAY2

ENDIF

IF (CHOWDAY.EQ.1) F1=FDAY1

IF (CHOWDAY.EQ.2) F1=FDAY2

IF (CHOWDAY.EQ.3) F1=FDAY3

What the code does is ensure that the total dose among days is not 
greater than the total dose measured.  (Note that the code was typed 
directly into the email—there could be a typo in it, but it gives the 
intent.)  It assumes that the dataset has columns setup as:


* AMT: the total dose as measured across the 2 or 3 days (not divided by 
the number of days)


* NDAYS: the number of days where AMT was measured (i.e. 2 if it was 
measured over 2 days and 3 if it was measured over three days)


* CHOWDAY: The day number in the set of days when AMT is measured (1, 2, 
or 3)


It requires that your ETAs are setup for inter-occasion variability (you 
can find many examples of that with a web search).  It also requires 
that you have a measurement or two of PK between each of these doses so 
that the ETA values are estimable.  If you do not have PK between two 
doses (e.g. after the dark period for Day 1), you may not be able to 
estimate ETA for that dose.


Thanks,

Bill

*From:*owner-nmus...@globomaxnm.com 
 > *On Behalf Of *Paul Hutson

*Sent:* Tuesday, December 15, 2020 10:50 AM
*To:* Martin Bergstrand >

*Cc:* nmusers@globomaxnm.com 
*Subject:* RE: [NMusers] Variability in Dosing Rate (and amount)

Thank you, Martin.  That is a great idea, yet I think you give me too 
much credit to expect “clever coding”.


I’ll report back.  Be well.

Paul

Paul Hutson, PharmD, BCOP

Professor

UWisc School of Pharmacy

T: 608.263.2496

F: 608.265.5421

*From:*Martin Bergstrand >

*Sent:* Tuesday, December 15, 2020 8:51 AM
*To:* Paul Hutson mailto:paul.hut...@wisc.edu>>
*Cc:* nmusers@globomaxnm.com 
*Subject:* Re: [NMusers] Variability in Dosing Rate (and amount)

Dear Paul,

I'm sorry for the late answer. Maybe you have already solved this issue 
by now?


The approach that I would suggest is to implement the ingestion of the 
dose as a zero-order infusion with an estimated duration and start.


 1. Set the dose time to the start of the 12 h dark period.
 2. Set the AMT data item to the total ingested drug amount.
 3. Set RATE data item to '-2' (=> estimation of duration (D) of
infusion into compartment, D1 for CMT=1)
 4. Assuming that the dose is entered into CMT=1 you can in the NONMEM
control file estimate ALAG1 and D1 governing the start and duration
of an assumed constant ingestion.
Note: you can consider different types of clever coding to limit the
total ingestion within the 12 h dark period if you want.
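
A minimal $PK sketch of steps 3 and 4 above (assuming the dose record goes into CMT=1 with RATE=-2; the THETA/ETA indices are placeholders):

$PK
ALAG1 = THETA(1)*EXP(ETA(1))   ; start of ingestion after the nominal dose time
D1    = THETA(2)*EXP(ETA(2))   ; duration of the zero-order ingestion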

This will of course be an approximation as the ingestion likely isn't 
constant. It should however be sufficiently flexible to fit your data 
without biasing assumptions of total dose/exposure.


Kind regards,

Martin Bergstrand, Ph.D.

Principal Consultant

Pharmetheus AB

martin.bergstr...@pharmetheus.com 

www.pharmetheus.com 



On Thu, Dec 10, 2020 at 5:32 AM Paul Hutson > wrote:


Dear Users, I hope that someone can suggest a paper or method for
addressing an issue with which I am grappling.

I am working on a mouse toxicokinetic study that has two basic
cohorts.  One received a bolus gavage dose of known dose and time. 
The other was dosed by drug-laden chow.  The chow and thus drug

ingested was measured, usually daily in the morning, but 

Re: [NMusers] OFV from different algorithms

2020-09-24 Thread Leonid Gibiansky
ADVANs in my experience have little effect on OF (except convergence: 
ADVAN13 may work in cases when ADVAN6 fails). So by "method" I mean FOCEI, IMP, 
IMPMAP, etc. In general, with rare exceptions that could be discussed 
separately, you would expect the same OF for a FOCEI fit with ADVAN6, 8, 
9, 13, 14, 15, and the same for an IMP fit with different ADVANs.


OF is comparable by the order of magnitude but not identical between 
FOCEI and IMP and IMPMAP.


SAEM and BAYES OF should not be used for model comparisons (even within 
one method, e.g., SAEM to SAEM).


Leonid



On 9/24/2020 2:34 PM, Mark Tepeck wrote:

Hi Leonid,

Thanks for the tips. When you talk about Methods, are you referring to
different ADVANs?  e.g.   FOCEI ADVAN6 vs. IMP ADVAN6.

Mark

On Wed, Sep 23, 2020 at 5:52 PM Leonid Gibiansky
 wrote:


Within one method, FOCEI or IMP models can be compared by OF, but not
between methods. FOCEI and IMP OF are similar by the order of magnitude,
but should not be used for comparison between methods.

SAEM OF should not be used for model comparison at all. Among two models
that differ by SAEM OF, the one with the higher OF may have better fit
(meaning, lower FOCEI or IMP OF than the other model with lower SAEM OF).

So if you would like to compare models obtained by different methods,
you may re-run both of them with fixed parameters using the same method
(not SAEM), and then compare obtained OF values.

Leonid



On 9/23/2020 8:09 PM, Mark Tepeck wrote:

Hi NMusers,

I believe below is a very common question, but I could not find a
clear answer in literature.

Sometimes, we want to find out which algorithm offers the better model
fitting for a given dataset.

Is it possible to use the objective function value (OFV) to compare
the model fitting computed by various algorithms (e.g.  FOCE, IMP, and
SAEM)  ? Put it into another way, the same input dataset with the same
fitted model/estimates will lead to similar OFVs among FOCE, IMP, and
SAEM?


Thanks,


Mark





Re: [NMusers] OFV from different algorithms

2020-09-23 Thread Leonid Gibiansky
Within one method, FOCEI or IMP models can be compared by OF, but not 
between methods. FOCEI and IMP OF are similar by the order of magnitude, 
but should not be used for comparison between methods.


SAEM OF should not be used for model comparison at all. Among two models 
that differ by SAEM OF, the one with the higher OF may have better fit 
(meaning, lower FOCEI or IMP OF than the other model with lower SAEM OF).


So if you would like to compare models obtained by different methods, 
you may re-run both of them with fixed parameters using the same method 
(not SAEM), and then compare obtained OF values.
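
For example, a sketch of such an evaluation-only re-run, with all parameters fixed at the final estimates of the model being compared (either option gives an OFV without changing the estimates):

$EST METHOD=COND INTER MAXEVAL=0                               ; FOCEI objective function only
; or
$EST METHOD=IMP INTER EONLY=1 NITER=5 ISAMPLE=3000 PRINT=1     ; IMP objective function only

The OFVs obtained this way are on the same scale and can be compared directly.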


Leonid



On 9/23/2020 8:09 PM, Mark Tepeck wrote:

Hi NMusers,

I believe below is a very common question, but I could not find a
clear answer in literature.

Sometimes, we want to find out which algorithm offers the better model
fitting for a given dataset.

Is it possible to use the objective function value (OFV) to compare
the model fitting computed by various algorithms (e.g.  FOCE, IMP, and
SAEM)  ? Put it into another way, the same input dataset with the same
fitted model/estimates will lead to similar OFVs among FOCE, IMP, and
SAEM?


Thanks,


Mark





Re: [NMusers] Random effect on ALAG between dose events

2020-08-12 Thread Leonid Gibiansky

No, there is no other solution except IOV.
One option to lessen the impact of the discrepancy is to use an inflated 
residual error in some interval post-dose

;TAD: time after dose
SD=THETA()
IF(TAD.LE.XX) SD=SD*THETA()

$ERROR
Y=TY*(1+SD*EPS(1))

$SIGMA
1 FIX

Then observations close to the dose (with uncertain dose time) will have 
less influence on PK parameters.

Regards,
Leonid


On 8/12/2020 4:51 AM, Tingjie Guo wrote:

Dear NMusers,

I'm modeling a PK data set with a discrepancy between the documented 
dosing time and the actual dosing time. According to our clinical 
practice, actual dosing time is always >= documented time. I added an 
ALAG with IIV to address this issue using the following formulation.


ALAG1 = THETA(5) * EXP(ETA(5))

This indeed improved the model fitting quite a lot. However, this 
parameterization does not reflect the reality as I expect the ETAs 
should vary between each dosing event rather than only between patients. 
So I expect a "inter dose event variability" would better make sense to 
this end. Since there are too many dosing events per patients, a 
IOV-like approach is doable but not preferred. And it may not accurately 
reflect "inter dose event variability" either. I was wondering if there 
is any good solution to this problem? Any comments are very much 
appreciated!


Warm regards,
Tingjie Guo





Re: [NMusers] Variability on infusion duration

2020-08-05 Thread Leonid Gibiansky

Maybe:
D1=DUR*EXP(ETA(1))
IF(D1.GT.DUR) D1=DUR   ; cap D1 at the documented infusion duration DUR
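
Another option sometimes used (a sketch only; it keeps D1 strictly between 0 and DUR and avoids the hard truncation) is to estimate the fraction of the documented duration on a logit scale:

LGT = THETA(1) + ETA(1)
FR  = EXP(LGT)/(1 + EXP(LGT))   ; fraction of the documented duration, bounded in (0,1)
D1  = FR*DUR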

On 8/5/2020 12:18 PM, Patricia Kleiner wrote:

Dear all,

I am developing a PK model for a drug administered as a long-term 
infusion of 48 hours using an elastomeric pump. End of infusion was 
documented, but sometimes the elastomeric pump was already empty at this 
time. Therefore variability of the concentration measurements observed 
at this time is quite high.
To address this issue, I tried to include variability on infusion duration 
by assigning the RATE data item in my dataset to -2 and modeling duration in 
the PK routine. Since the "true" infusion duration can only be shorter 
than the documented one, implementing IIV with a log-normal distribution 
(D1=DUR*EXP(ETA(1))) cannot describe the situation.


I tried the following expression, where DUR is the documented infusion 
duration:


D1=DUR-THETA(1)*EXP(ETA(1))

It works but does not really describe the situation either, since I 
expect the deviations from my infusion duration to be left skewed. I was 
wondering if there are any other possibilities to incorporate 
variability in a more suitable way? All suggestions will be highly 
appreciated!



Thank you very much in advance!
Patricia







Re: [NMusers] Negative concentration from simulation

2020-06-02 Thread Leonid Gibiansky

Hi Nick,
The question was not how to report measurements, but how to deal with 
the simulation from the model, that was likely developed on the data set 
where BQLs were either ignored or treated as BQLs (e.g., set to 0, set 
to BQL/2, treated with M3: in the best case, the exact method can be 
found in the paper).


Not sure that "honest" and "dishonest" belongs here any way, there are 
many ways to solve the problem, and it is not helpful to label as 
"dishonest" the way that does not coincide with the one that you prefer.


Best!
Hope it is safe and healthy on your side of the globe (where people 
still go around upside down :) )


Best,
Leonid






On 6/2/2020 3:46 PM, Nick Holford wrote:

Hi Nyein,

For drug concentrations the additive error model assumes that the background 
noise is random with mean zero when the drug concentration is truly zero. In 
the real world there is always background noise for measurements which means 
that real measurements can appear to be a negative concentration even though 
the true concentration is zero. Simulations that simulate negative 
concentrations are therefore more realistic than those that ignore reality and 
are reported as censored measurement values.

The honest thing to do is to report measurements as they are. The dishonest 
thing is to report real measurements as below some arbitrary limit of 
quantification. There are numerous papers which describe the bias arising from 
dishonest reporting of real measurements and work arounds if you have to deal 
with this kind of scientific fraud e.g.

Beal SL. Ways to fit a PK model with some data below the quantification limit. 
Journal of Pharmacokinetics & Pharmacodynamics. 2001;28(5):481-504.
Duval V, Karlsson MO. Impact of omission or replacement of data below the limit 
of quantification on parameter estimates in a two-compartment model. Pharm Res. 
2002;19(12):1835-40.
Ahn JE, Karlsson MO, Dunne A, Ludden TM. Likelihood based approaches to 
handling data below the quantification limit using NONMEM VI. J Pharmacokinet 
Pharmacodyn. 2008;35(4):401-21.
Byon W, Fletcher CV, Brundage RC. Impact of censoring data below an arbitrary 
quantification limit on structural model misspecification. J Pharmacokinet 
Pharmacodyn. 2008;35(1):101-16.
Senn S, Holford N, Hockey H. The ghosts of departed quantities: approaches to 
dealing with observations below the limit of quantitation. Stat Med. 
2012;31(30):4280-95.
Keizer RJ, Jansen RS, Rosing H, Thijssen B, Beijnen JH, Schellens JHM, et al. 
Incorporation of concentration data below the limit of quantification in population 
pharmacokinetic analyses. Pharmacology research & perspectives. 
2015;3(2):10.1002/prp2.131

Best wishes,
Nick





--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology, Bldg 503 Room 302A
University of Auckland,85 Park Rd,Private Bag 92019,Auckland,New Zealand
office:+64(9)923-6730 mobile:NZ+64(21)46 23 53 FR+33(6)62 32 46 72
email: n.holf...@auckland.ac.nz
http://holford.fmhs.auckland.ac.nz/
http://orcid.org/-0002-4031-2514
Read the question, answer the question, attempt all questions

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Bill Denney
Sent: Tuesday, 2 June 2020 8:30 PM
To: Nyein Hsu Maung ; nmusers@globomaxnm.com
Subject: RE: [NMusers] Negative concentration from simulation

Hi Nyein,

Negative concentrations can be expected from simulations if the model includes additive 
residual error.  I assume that you mean additive and proportional error when you say 
"combined error model".  If the error structure does not include additive 
error, then we'd need to know more.

How you will handle them in analysis depends on the goals of the analysis.
Usually, you will either simply set negative values to zero or set all values 
below the limit of quantification to zero.

Thanks,

Bill

-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Nyein Hsu Maung
Sent: Tuesday, June 2, 2020 2:13 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Negative concentration from simulation


Dear NONMEM users,
I tried to simulate a new dataset by using a previously published popPK model. 
Their model used a combined error model for residual variability, and 
after simulation I obtained two negative concentrations. I would like to 
know if there is any proper way to handle those negative concentrations, or if 
there is some coding to prevent obtaining negative concentrations. Thanks.

Best regards,
Nyein Hsu Maung





Re: [NMusers] Negative concentration from simulation

2020-06-02 Thread Leonid Gibiansky
you can treat it as any other value below quantification limit, and 
either comment it out (if you do this with other BQLs) or use it (same 
as other BQLs)
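
If the goal is instead to prevent negative simulated values altogether (as asked in the message quoted below), one approximate approach is to re-draw the residual at simulation time, e.g. (a sketch for a combined error model; note that this truncates the residual-error distribution):

IF (ICALL.EQ.4) THEN
   DOWHILE (Y.LT.0)
      CALL SIMEPS(EPS)
      Y = F + F*EPS(1) + EPS(2)
   ENDDO
ENDIF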

Leonid


On 6/2/2020 2:13 PM, Nyein Hsu Maung wrote:


Dear NONMEM users,
I tried to simulate a new dataset by using a previously published popPK model. 
Their model used a combined error model for residual variability, and 
after simulation I obtained two negative concentrations. I would like to 
know if there is any proper way to handle those negative concentrations, or if 
there is some coding to prevent obtaining negative concentrations. Thanks.

Best regards,
Nyein Hsu Maung







Re: [NMusers] Time-varying bioavailability and reproducibility in NONMEM analysis

2020-02-12 Thread Leonid Gibiansky
For NONMEM to work correctly, the R code below should be run to correct 
the data file.


code <- ds_sim_told$TIME > 0 & ds_sim_told$amt > 0
ds_sim_told$TOLD[code] <- ds_sim_told$TOLD[code] -24

Note that ALAG should be > 0 in this case (if you need zero ALAG, set it 
to 0.01).


Leonid


On 2/12/2020 7:14 AM, Le Louedec Felicien wrote:

Dear all,

Many thanks for your answers. Indeed, part of the issue comes from the 
LAG time. When I run a model without LAG time, NONMEM returns the same 
predictions between the 28 individuals (provided that TOL is high 
enough), and vs MRGSOLVE. So, obviously, when there is a lag time (eg 
1h), NONMEM needs a value of F at this time (T+1h), and does not use the 
F provided at the time of administration (T+0h).


And everything seems clear now : as NONMEM uses NOCB to deal with 
missing covariates, it reaches the value of F in the next observation, 
which differs according to the sampling design (t+6h in subjects with 
EVID =0 (same TOLD), t+24h (thus a different TOLD) for individuals who 
do not have EVID=0 anymore). However, I guess that mrgsolve uses the 
value of F1 at the time of administration (T+0h), and not at the time 
when absorption begins (T+1h) : that would explain why mrgsolve provides 
the same predictions in every subjects even with a lag time. Am I right ?


The question now is: is it possible to have a model in NONMEM, with lag 
time and time-decreasing bioavailability, that returns predictions 
independently of the sampling strategy ?


Thank you

Félicien

PS: please find below the R code that I used to generate the dataset.

; R code for dataset 

library(tidyverse)

adm <- tibble(ID = 1:28,

   evid = 1,

   amt = 400,

cmt = 1,

   DV = as.double(NA)) %>%

crossing(time = 24*c(0:28)) %>%

   arrange(ID, time) %>%

   select(ID, time, everything())

obs <- tibble(ID = 1:28,

   evid = 0,

   amt = NA,

   cmt = 2,

   DV = as.double(NA)) %>%

   crossing(time = (0:(28*4))*6) %>%

   filter(time <= ID*24) %>%

   arrange(ID, time) %>%

   select(ID, time, everything())

ds_sim_told <- bind_rows(adm, obs) %>%

   arrange(ID, time) %>%

   mutate(MDV = 1)%>%

   mutate(TOLD = ifelse(evid == 1, time, NA)) %>%

   fill(TOLD)

ds_sim_told #  for the data_set function in mrgsolve

ds_sim_told %>%

   rename(CID = ID, TIME = time, EVID = evid, AMT = amt, CMT = cmt) %>%

   write_csv("ds_sim_told.csv", na = ".") #data for NONMEM



From: Kyle Baron [mailto:ky...@metrumrg.com]
Sent: Tuesday, February 11, 2020 17:59
To: Saeheum Song 
Cc: Le Louedec Felicien ; 
nmusers@globomaxnm.com; Leonid Gibiansky 
Subject: Re: [NMusers] Time-varying bioavailability and reproducibility 
in NONMEM analysis


Can you please share the ds_sim_told.csv file (or at least 2 complete 
individuals from that file)?  It does seem like TOLD is the critical 
thing here, but I'd like to know what exactly is the discrepancy.  IMO 
we need the input data to figure that out.  mrgsolve does read 
covariates from the next record and we have some tests around that.  If 
ADDL isn't involved, then I'd expect it to behave like any other 
time-varying covariate.  Would like to look at ALAG too as Leonid suggested.


It does appear that there is no variability in either simulation with 
THETA9 and THETA10 fixed to zero.


I have cross-posted this issue on the mrgsolve Issue Tracker.  If 
nmusers doesn't allow attachments, you can post the input data file there:


https://github.com/metrumresearchgroup/mrgsolve/issues/634

On Tue, Feb 11, 2020 at 10:15 AM Saeheum Song <mailto:ss.pkpdmo...@gmail.com>> wrote:


Pls check what you have compared. You have used sigma 1 for nonmem
not for mrgsolve.

Even if you use sigma 1 for mrgsolve, the results will be
slightly different due to fitting vs. simulation via random numbers.

Hope it helps

On Tue, Feb 11, 2020, 10:43 AM Le Louedec Felicien
mailto:lelouedec.felic...@iuct-oncopole.fr>> wrote:

Please find below the code for NONMEM analysis and for mrgsolve
which is the package I use to perform simulations in R

Thank you very much

Félicien

;NONMEM CODE;;;

$PROBLEM TEST F DECREASE
$INPUT ID TIME EVID AMT CMT DV MDV TOLD
$DATA ds_sim_told.csv IGNORE=@
$SUBROUTINES ADVAN13 TOL=4
$MODEL COMP=(DEPOT) COMP=(CENTRAL) COMP=(PERIPH)

$PK
TVCL      = THETA(1)
TVVC      = THETA(2)
TVKA      = THETA(3)
TVALAG1   = THETA(4)
TVQ       = THETA(5)
TVVP      = THETA(6)
TVLAMBDA  = THETA(7)
TVMAXDECR = THETA(8)
ERRADD    = THETA(9)
ERRPROP   = THETA(10)

CL     = TVCL
VC     = TVVC
KA     = TVKA
ALAG1  = TVALAG1
Q      = TV

Re: [NMusers] Time-varying bioavailability and reproducibility in NONMEM analysis

2020-02-11 Thread Leonid Gibiansky
could you try models without ALAG? will they also differ? Not sure how 
MRGsolve treats time-dependent covariates for F1. Nonmem will use the 
next value of TOLD. In this case,  TOLD at dose records should be equal 
to the time of the previous dose (assuming that ALAG < inter-dose 
interval), is this how it was coded?


Leonid

On 2/11/2020 10:34 AM, Le Louedec Felicien wrote:

Please find below the code for NONMEM analysis and for mrgsolve which is the 
package I use to perform simulations in R

Thank you very much

Félicien

;NONMEM CODE;;;

$PROBLEM TEST F DECREASE
$INPUT ID TIME EVID AMT CMT DV MDV TOLD
$DATA ds_sim_told.csv IGNORE=@
$SUBROUTINES ADVAN13 TOL=4
$MODEL COMP=(DEPOT) COMP=(CENTRAL) COMP=(PERIPH)

$PK
TVCL  = THETA(1)
TVVC  = THETA(2)
TVKA  = THETA(3)
TVALAG1   = THETA(4)
TVQ   = THETA(5)
TVVP  = THETA(6)
TVLAMBDA  = THETA(7)
TVMAXDECR = THETA(8)
ERRADD= THETA(9)
ERRPROP   = THETA(10)

CL = TVCL
VC = TVVC
KA = TVKA
ALAG1  = TVALAG1
Q  = TVQ
VP = TVVP

LAMBDA = TVLAMBDA / 24
MAXDECR= TVMAXDECR
TVF= 1-MAXDECR+MAXDECR*EXP(-LAMBDA*TOLD)
F1 = TVF

K20 = CL/VC
K23 = Q/VC
K32 = Q/VP
S2  = VC

$DES
DADT(1) = - KA * A(1)
DADT(2) =   KA * A(1) - K20*A(2) - K23*A(2) + K32*A(3)
DADT(3) =   K23* A(2) - K32*A(3)

$ERROR
IPRED=F
W=SQRT(ERRADD**2+(ERRPROP*IPRED)**2)
Y=IPRED+W*EPS(1)
IRES=DV-IPRED
IWRES=IRES/(W+0.001)

$THETA
(0, 0.5)  FIX ; 1  CL
(0, 3)FIX ; 2  VC
(0, 0.1)  FIX ; 3  KA
(0, 1)FIX ; 4  ALAG
(0, 1)FIX ; 5  Q
(0, 25)   FIX ; 6  VP
(0, 0.15) FIX ; 7  LAMBDA
(0, 0.50) FIX ; 8  MAXDECR
(0)   FIX ; 9 ADD
(0)   FIX ; 10 PROP

$OMEGA 0 FIX
$SIGMA 1 FIX

$ESTIMATION METHOD=1 INTER NOABORT MAXEVAL=0 SIG=3 PRINT=5 POSTHOC FORMAT= 
s1PE16.8E3
$COV PRINT=E MATRIX=S
$TABLE ID TIME EVID AMT CMT DV MDV TOLD F1 PRED IPRED IWRES IRES ONEHEADER 
NOPRINT FILE = run301.TAB FORMAT= s1PE16.8E3

 end of NONMEM CODE ;

 MRGSOLVE CODE ;
$PROB test F decrease
   
$PARAM @annotated

TVCL : 0.5  : 1  Clearance (L.h-1)
TVVC : 3: 2  Volume (L)
TVKA : 0.1  : 3  Absorption rate constant (h-1)
TVALAG   : 1: 5  Lag time (h)
TVQ  : 1: 6  Intercompartmental Clearance (L.h-1)
TVVP : 25   : 7  Volume (L)
TVLAMBDA : 0.15 : 8  First-order decay constant (day-1)
TVMAXDECR: 0.50 : 9  Magnitude of decrease constant (%)

TOLD  : 0 : default TOLD

$CMT @annotated
DEPOT : Depot compartment
CENTRAL : Central compartment
PERIPHERAL : Peripheral compartment

$GLOBAL
double CL, VC, KA, ALAG, Q, VP, LAMBDA, MAXDECR, TVF, K20, K23, K32, F1 ;

$TABLE
double DV  = (CENTRAL / VC) ;

$MAIN
CL   = TVCL ;
VC   = TVVC ;
KA   = TVKA ;
ALAG = TVALAG   ;
Q= TVQ  ;
VP   = TVVP ;
LAMBDA   = TVLAMBDA / 24   ;
MAXDECR  = TVMAXDECR   ;

TVF = 1 - MAXDECR + MAXDECR * exp(-LAMBDA*TOLD) ;
F1 = TVF ;

K20 = CL / VC ;
K23 = Q / VC ;
K32 = Q / VP ;

F_DEPOT = F1 ;
ALAG_DEPOT = ALAG ;

$ODE
dxdt_DEPOT  = -KA * DEPOT ;
dxdt_CENTRAL= KA * DEPOT - K20 * CENTRAL - K23 * CENTRAL + K32 * PERIPHERAL 
;
dxdt_PERIPHERAL = K23 * CENTRAL - K32 * PERIPHERAL;

$CAPTURE @annotated
DV : Concentration central (mcg/L)
F_DEPOT : F

 end of MRGSOLVE CODE 







-Original Message-
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
Behalf Of Leonid Gibiansky
Sent: Tuesday, February 11, 2020 15:19
To: nmusers@globomaxnm.com
Subject: Re: [NMusers] Time-varying bioavailability and reproducibility in 
NONMEM analysis

could you show equations? Bioavailability is treated differently in Nonmem and 
R, so code should reflect it.
Thanks
Leonid


On 2/11/2020 3:52 AM, Le Louedec Felicien wrote:

Dear NONMEM users,

I'm struggling for a couple of weeks against contradictory results
between NONMEM and R analysis of the same data with the same model which
includes a time-varying bioavailability. Here is a simplified example of
my issue:

On the one hand, let's introduce a bicompartmental model with a depot
compartment, where bioavailability is decreasing over time given a
maximum in decrease (MAXDECR) and a first-order decay constant (LAMBDA).
Instead of the variable TIME, I use a covariate TOLD (Time Of Last Dose)
in order to be sure that the value of F1 computed by NONMEM will be
independent of the time used for computation:

---

$INPUT CID TIME EVID AMT CMT DV MDV TOLD

$MODEL COMP=(DEPOT) COMP=(CENTRAL) COMP=(PERIPH)

$PK

MAXDECR = THETA(1)

LAMBDA   = THETA(2) / 24  ; TIME is in hour, Lambda in day-1

F1   = 1 - MAXDECR + MAXDECR * EXP(-LAMBDA * TOLD)

$THETA

(0, 0.5, 1) FIX

(0, 0.15 ) FIX

---

On the other hand, we have a dataset of 28 IDs with:

-the same dosing regimen of 400 mg qd for 28 days (one line with EVID=1
per administration, no ADDL).

-different "sampling occasions" at 0h, 6h, 12 and 18h post-dose; at day
1 for ID1, at day 1&2 for ID2, at day 1&2&3 for ID3, and so on until
ID28 who has a complete PK exploration from day 1 to 28. All these lines
are filled w

Re: [NMusers] Time-varying bioavailability and reproducibility in NONMEM analysis

2020-02-11 Thread Leonid Gibiansky
could you show equations? Bioavailability is treated differently in 
Nonmem and R, so code should reflect it.

Thanks
Leonid


On 2/11/2020 3:52 AM, Le Louedec Felicien wrote:

Dear NONMEM users,

I’ve been struggling for a couple of weeks with contradictory results 
between NONMEM and R analyses of the same data with the same model, which 
includes a time-varying bioavailability. Here is a simplified example of 
my issue:


On the one hand, let’s introduce a bicompartmental model with a depot 
compartment, where bioavailability is decreasing over time given a 
maximum in decrease (MAXDECR) and a first-order decay constant (LAMBDA). 
Instead of the variable TIME, I use a covariate TOLD (Time Of Last Dose) 
in order to be sure that the value of F1 computed by NONMEM will be 
independent of the time used for computation:


---

$INPUT CID TIME EVID AMT CMT DV MDV TOLD

$MODEL COMP=(DEPOT) COMP=(CENTRAL) COMP=(PERIPH)

$PK

MAXDECR = THETA(1)

LAMBDA   = THETA(2) / 24  ; TIME is in hour, Lambda in day-1

F1   = 1 - MAXDECR + MAXDECR * EXP(-LAMBDA * TOLD)

$THETA

(0, 0.5, 1) FIX

(0, 0.15 ) FIX

---

On the other hand, we have a dataset of 28 IDs with:

-the same dosing regimen of 400 mg qd for 28 days (one line with EVID=1 
per administration, no ADDL).


-different “sampling occasions” at 0h, 6h, 12 and 18h post-dose; at day 
1 for ID1, at day 1&2 for ID2, at day 1&2&3 for ID3, and so on until 
ID28 who has a complete PK exploration from day 1 to 28. All these lines 
are filled with EVID=0, DV=., and MDV=1.


Then, I estimate these concentrations in a maximum a posteriori Bayesian 
manner (MAXEVAL = 0) with ADVAN13 (there is no inter-individual or 
residual variability).


My problem is that NONMEM found different concentrations in these 28 
individuals, even though they received the same dose. Besides, as 
expected, I found that all individuals had the same value for F1 (at a 
given time point).


Would any of you have an idea of why NONMEM does not return the same 
predictions ?


Thank you very much in advance

Kind regards

Félicien LE LOUEDEC, PharmD

PhD student

Centre de Recherches en Cancérologie de Toulouse (CRCT), Toulouse, FRANCE

Team 14: « Dose Individualization of Anticancer Drugs »

+335 31 15 55 69

lelouedec.felic...@iuct-oncopole.fr 







Re: [NMusers] Final Parameter is same as initial parameter - integrated PK-PD models with two PD parameters

2019-12-08 Thread Leonid Gibiansky
ERR(3) and ERR(4) are mixed up; they should be used in order (in the $ERROR 
block). Can that be the reason?
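
That is, one reading of this suggestion is to use the EPS indices in the same order as the $SIGMA records (a sketch only; the $SIGMA block would have to be re-ordered to match, with the additive concentration error moved to the second record):

CONC = CP*(1 + ERR(1)) + ERR(2)   ; proportional + additive error on Cp
EFF1 = B + ERR(3)                 ; highness
EFF2 = H + ERR(4)                 ; heart rate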

Leonid

On 12/8/2019 2:24 PM, Singla, Sumeet K wrote:

Hello everyone!

This is giving me a great deal of headache. If you can please help me with 
this, my subsequent models will start working too. Please take your 
extremely valuable time to help me, if you can.


I am trying to relate THC concentration in the central compartment to 
heart rate, and in the effect compartment to psychological highness. So, 
there are two PD effects, heart rate and highness. My model is 
successfully converging and giving me physiologically relevant 
parameters when I run those models separately i.e. simultaneous 
estimation of PK parameters and PD parameters related to heart rate OR 
simultaneous estimation of PK parameters and PD parameters related to 
highness. However, when I try to estimate all of them at once, my 
model is just not running. The errors are:


 1. My final parameters are same as initial parameters
 2. “OCCURS DURING SEARCH FOR ETA AT A NONZERO VALUE OF ETA  NUMERICAL
DIFFICULTIES WITH INTEGRATION ROUTINE. 

NO. OF REQUIRED SIGNIFICANT DIGITS IN SOLUTION VECTOR   TO DIFFERENTIAL 
EQUATIONS,   6, MAY BE TOO LARGE.”


I have tried ADVAN 6, 8 and 13 and have mostly used FOCE for estimation. 
A couple of things to be noticed in the NONMEM control stream below:


 1. I am fixing THETA and OMEGAS for PK parameters derived from poppk
model.
 2. There are two emax equations, one for heart (EMAX) and one for
effect compartment in brain (EMAXB).
 3. I am not estimating ETA around effect compartment rate constant and
EC50.
 4. I have 4 observations per subject for heart rate and highness.
 5. TYPE 1 (=1) relates to highness, TYPE2(=1) relates to heart rate and
TYPE 3 (=1) relates to Cp

So, you can see that overall, I am estimating very few parameters. Below 
is the dataset and control stream


ID     TIME   AMT   DV      CMT   MDV   TYPE1   TYPE2   TYPE3
1402   9:39   4     0       1     1     0       0       1
1402   9:39   .     0       3     0     1       0       0
1402   9:39   .     0       1     0     0       1       0
1402   9:50   .     271.8   1     0     0       0       1
1402   9:50   .     6       3     0     1       0       0
1402   9:50   .     34.4    1     0     0       1       0

$INPUT C ID TIME AMT DV CMT MDV TYPE1 TYPE2 TYPE3

$SUBROUTINE ADVAN6 TRANS1 TOL=6

$MODEL NCOMP = 3

COMP = (CENTRAL)

COMP = (PERIPH1)

COMP = (EFFTHC)

$PK

TVV1 = THETA(1) ;Central Volume of 
distribution in L


V1 = TVV1*EXP(ETA(1))

TVCL = THETA(2)

CL = TVCL*EXP(ETA(2))   ;Clearance

TVQ = THETA(3)

Q = TVQ*EXP(ETA(3)) ;Intercompartment Clearance

TVV2 = THETA(4)

V2 = TVV2*EXP(ETA(4))

KE01= THETA(5)

EC50H = THETA(6)

EMAXH = THETA(7)*EXP(ETA(5))

EC50B = THETA(8)

EMAXB = THETA(9)

HILL = THETA(10)

S1  = V1

A_0(1)=0

A_0(3)=0

$DES

C1 = A(1)/V1

C3 = A(3)   ;effect compartment for THC

DADT(1) = (Q/V2)*A(2) - (Q/V1)*A(1) - (CL/V1)*A(1)

DADT(2) = (Q/V1)*A(1) - (Q/V2)*A(2)

DADT(3) = KE01*C1  - KE01*A(3)

$ERROR

CP   = A(1)/V1

CE1  = A(3)

CONC = CP*(1 + ERR(1)) + ERR(3)

H    = EMAXH*(((CP**HILL))/((EC50H**HILL)+(CP**HILL)))

B    = EMAXB*(((CE1**HILL))/((EC50B**HILL)+(CE1**HILL)))

EFF1  = B + ERR(2)

EFF2  = H + ERR(4)

IF(TYPE1.EQ.1) IPRED = B

IF(TYPE2.EQ.1) IPRED = H

IF(TYPE3.EQ.1) IPRED = CP

Y    = (EFF1*TYPE1) + (EFF2*TYPE2) + (CONC*TYPE3)

$THETA

16.5 FIX    ; [V1]

255 FIX     ; [CL]

33.5 FIX    ; [Q]

29.7 FIX    ; [V2]

(0, 1, 10)  ; [KEO1]

(0.01,16.3)  ; EC50H

(0, 79)      ; EMAXH

(0.01,16.3)  ; EC50B

10 FIX    ; EMAXB

1 FIX ; HILL

$OMEGA

0.085 FIX ; [V1]

0.159 FIX ; [CL]

0.140 FIX ; [Q]

0.191 FIX     ; [V2]

(0.001, 0.1) ; EMAXH

$SIGMA

0.0672 ;ERR1

178 ;ERR2

100 ;ERR4

$SIGMA

0.4 FIX    ;[ERR3]

$COV MATRIX=R UNCONDITIONAL

$ESTIMATION METHOD=1 MAXEVAL=9 SIG=3 NOABORT PRINT=5 
MSFO=simultaneous.MSF


Regards,

*Sumeet K. Singla*

*Ph.D. Candidate*

*Division of Pharmaceutics and Translational Therapeutics*

*College of Pharmacy | University of Iowa*

*Iowa City, Iowa*

*sumeet-sin...@uiowa.edu *

*518.577.5881*





Re: [NMusers] AMD vs Intel

2019-11-19 Thread Leonid Gibiansky

Thanks to all who shared their experience.

Here is the brief summary of observations:
4 combinations of Intel Fortran or gfortran with Xeon or AMD processors 
(of approximately the same base frequency) provided similar speed but 
different results. Time comparison is not straightforward as the number 
of iterations required for convergence varied between these 4 versions 
(FOCEI, LAPLACIAN, and SAEM with ADVAN13 were used for all tests). 
Results are numerically different, but not meaningfully so: parameter 
estimates differ by less than the respective confidence intervals (a few 
percent for well-defined parameters, more for parameters with large RSEs). 
Thus, any of these 4 combinations can be used, but it is better not to mix 
them within one analysis. It also seems to be good practice to specify not 
only the OS and the compiler with its options, but also the processor (or at 
least the processor type) to ensure exact reproducibility of results.


Unlike reports from 10+ years ago, the (old, v.11) Intel compiler seems 
to provide similar speed on new processors from both Intel and AMD.


Thanks!
Leonid



On 11/19/2019 4:32 AM, Rikard Nordgren wrote:

Hi Leonid,

When upgrading from gfortran 4.4.7 to 5.1.1 we ran around 20 models with 
both compilers, also turning off -ffast-math. The runs were on the 
same hardware. The differences in the parameter estimates and OFV were 
in general small. One big difference we could see was that the success 
of the covariance step was seemingly random. It could succeed on one 
compiler version, but not the other and it could also start failing when 
the option was turned off. I have kept the runs, so let me know if you 
would be interested. I also started some experiments using machine 
dependent compiler flags, but as our cluster is heterogeneous I 
abandoned this testing.


I think that getting identical results could be possible, but that it 
would be quite a challenge. There are many components that affect the 
results. The compiler, the compiler flags, the libc implementation, the 
hardware and sometimes the operating system. To see, for example, where 
the standard libraries come into play, you can run nm on the 
nonmem executable (in Linux) to list all symbols compiled in. Some are 
functions from external libraries; for example, my exponential function is 
from libc: exp@@GLIBC_2.2.5 . Even the functions that read in numbers 
from text strings could introduce rounding errors since the text 
representation is decimal and the internal floating point number is binary.


Best regards,
Rikard Nordgren

--
Rikard Nordgren
Systems developer

Dept of Pharmaceutical Biosciences
Faculty of Pharmacy
Uppsala University
Box 591
75124 Uppsala

Phone: +46 18 4714308
www.farmbio.uu.se/research/researchgroups/pharmacometrics/




On 2019-11-18 23:54, Leonid Gibiansky wrote:

Hi Jeroen,

Thanks for your input, very interesting. As far as the goal is 
concerned, I am mostly interested to find options that would give 
identical results on two platform rather than in speed. So far no 
luck: 4 combinations of gfortran / Intel compilers on Xeon / AMD 
processors give 4 sets of results that are close but not identical.


Related question to the group: has anybody experimented with gfortran 
options (rather than using the defaults provided by the NONMEM distribution)? 
Any recommendations? Same goal: maximum reproducibility across 
different OSs, parallelization options, and processor types.


Thanks
Leonid




On 11/18/2019 5:28 PM, Jeroen Elassaiss-Schaap (PD-value B.V.) wrote:

Hi Leonid,

"A while" back we compared model development trajectories and results 
between two computational platforms, Itanium and Xeon, see 
https://www.page-meeting.org/?abstract=1188. The results roughly 
were: 1/3 equal, 1/3 rounding differences and 1/3 real different 
results. From discussions with the technical knowledgeable people I 
worked with at the time, I recall that there are three levels/sources 
for those differences:


1) computational (hardware) platform

2) compilers (+ optimization settings)

3) libraries (floating point handling does matter)

Assuming you would like to compare the speed of the platforms wrt 
NONMEM, my advice would be to test a large series of different 
models, from simple ADVAN1 or 2 to complex ODE, ranging from FO to 
LAPLACIAN INT NUMERICAL, while keeping compilers and libraries the 
same. Also small and large datasets, as in some instances you might 
be testing only the L1/L2/L3 cache strategies and Turbo settings. And 
with and without parallelization - as that might determine runtime 
bottlenecks in practice.


Just having a peek at Epyc - seems interesting (noticed results w 
gcc7.4 compilation). As long as you are able to hold the computation 
in cache, a big if for the 64-core, there might be an advantage.


All in all I am not sure that it is worth the trouble. For any given 
PK-PD model there is a lot you can tune to gain speed, but the 
optima

Re: [NMusers] AMD vs Intel

2019-11-18 Thread Leonid Gibiansky

Hi Jeroen,

Thanks for your input, very interesting. As far as the goal is 
concerned, I am mostly interested in finding options that would give 
identical results on the two platforms rather than in speed. So far no luck: 
4 combinations of gfortran / Intel compilers on Xeon / AMD processors 
give 4 sets of results that are close but not identical.


Related question to the group: has anybody experimented with gfortran 
options (rather than using the defaults provided by the NONMEM distribution)? Any 
recommendations? Same goal: maximum reproducibility across different 
OSs, parallelization options, and processor types.


Thanks
Leonid




On 11/18/2019 5:28 PM, Jeroen Elassaiss-Schaap (PD-value B.V.) wrote:

Hi Leonid,

"A while" back we compared model development trajectories and results 
between two computational platforms, Itanium and Xeon, see 
https://www.page-meeting.org/?abstract=1188. The results roughly were: 
1/3 equal, 1/3 rounding differences and 1/3 real different results. From 
discussions with the technical knowledgeable people I worked with at the 
time, I recall that there are three levels/sources for those differences:


1) computational (hardware) platform

2) compilers (+ optimization settings)

3) libraries (floating point handling does matter)

Assuming you would like to compare the speed of the platforms wrt 
NONMEM, my advice would be to test a large series of different models, 
from simple ADVAN1 or 2 to complex ODE, ranging from FO to LAPLACIAN INT 
NUMERICAL, while keeping compilers and libraries the same. Also small 
and large datasets, as in some instances you might be testing only the 
L1/L2/L3 cache strategies and Turbo settings. And with and without 
parallelization - as that might determine runtime bottlenecks in practice.


Just having a peek at Epyc - seems interesting (noticed results w gcc7.4 
compilation). As long as you are able to hold the computation in cache, 
a big if for the 64-core, there might be an advantage.


All in all I am not sure that it is worth the trouble. For any given 
PK-PD model there is a lot you can tune to gain speed, but the optimal 
settings might be very different for the next and overrule any platform 
differences.


Hope this helps,

Jeroen

http://pd-value.com
jer...@pd-value.com
@PD_value
+31 6 23118438
-- More value out of your data!

On 18/11/19 6:34 pm, Leonid Gibiansky wrote:

Thanks Bob and Peter!

The model is quite stable, but this is LAPLACIAN, so requires second 
derivatives. At iteration 0, gradients  differ by about 50 to 100% 
between Intel and AMD. This leads to differences in minimization path, 
and slightly different results. Not that different to change the 
recommended dose, but sufficiently different to notice (OF difference 
of 6 points; 50% more model evaluations to get to convergence).

Thanks
Leonid



On 11/18/2019 12:15 PM, Bonate, Peter wrote:
Leonid - when you say different.  What do you mean?  Fixed effect and 
random effects?  Different OFV?


We did a poster at AAPS a decade or so ago comparing results across 
different platforms using the same data and model.  We got different 
results on the standard errors (which related to matrix inversion and 
how those are done using software-hardware configurations).  And with 
overparameterized models we got different error messages - some 
platforms converged with no problem while some did not converge and 
gave R matrix singularity.


Did your problems go beyond this?

pete



Peter Bonate, PhD
Executive Director
Pharmacokinetics, Modeling, and Simulation
Astellas
1 Astellas Way, N3.158
Northbrook, IL  60062
peter.bon...@astellas.com
(224) 205-5855



Details are irrelevant in terms of decision making -  Joe Biden.






-Original Message-
From: owner-nmus...@globomaxnm.com  On 
Behalf Of Leonid Gibiansky

Sent: Monday, November 18, 2019 11:05 AM
To: nmusers 
Subject: [NMusers] AMD vs Intel

Dear All,

I am testing the new Epyc processors from AMD (comparing with Intel 
Xeon), and getting different results. Just wondering whether anybody 
faced the problem of differences between AMD and Intel processors and 
knows how to solve it. I am using Intel compiler but ready to switch 
to gfortran or anything else if this would help to get identical 
results.
There were reports of Intel slowing the AMD execution in the past, 
but in my tests, speed is comparable but the results differ.


Thanks
Leonid









Re: [NMusers] AMD vs Intel

2019-11-18 Thread Leonid Gibiansky

Thanks Bob and Peter!

The model is quite stable, but this is LAPLACIAN, so requires second 
derivatives. At iteration 0, gradients  differ by about 50 to 100% 
between Intel and AMD. This leads to differences in minimization path, 
and slightly different results. Not that different to change the 
recommended dose, but sufficiently different to notice (OF difference of 
6 points; 50% more model evaluations to get to convergence).

Thanks
Leonid



On 11/18/2019 12:15 PM, Bonate, Peter wrote:

Leonid - when you say different.  What do you mean?  Fixed effect and random 
effects?  Different OFV?

We did a poster at AAPS a decade or so ago comparing results across different 
platforms using the same data and model.  We got different results on the 
standard errors (which related to matrix inversion and how those are done using 
software-hardware configurations).  And with overparameterized models we got 
different error messages - some platforms converged with no problem while some 
did not converge and gave R matrix singularity.

Did your problems go beyond this?

pete



Peter Bonate, PhD
Executive Director
Pharmacokinetics, Modeling, and Simulation
Astellas
1 Astellas Way, N3.158
Northbrook, IL  60062
peter.bon...@astellas.com
(224) 205-5855



Details are irrelevant in terms of decision making -  Joe Biden.






-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Leonid Gibiansky
Sent: Monday, November 18, 2019 11:05 AM
To: nmusers 
Subject: [NMusers] AMD vs Intel

Dear All,

I am testing the new Epyc processors from AMD (comparing with Intel Xeon), and 
getting different results. Just wondering whether anybody faced the problem of 
differences between AMD and Intel processors and knows how to solve it. I am 
using Intel compiler but ready to switch to gfortran or anything else if this 
would help to get identical results.
There were reports of Intel slowing the AMD execution in the past, but in my 
tests, speed is comparable but the results differ.

Thanks
Leonid







[NMusers] AMD vs Intel

2019-11-18 Thread Leonid Gibiansky

Dear All,

I am testing the new Epyc processors from AMD (comparing with Intel 
Xeon), and getting different results. Just wondering whether anybody 
faced the problem of differences between AMD and Intel processors and 
knows how to solve it. I am using Intel compiler but ready to switch to 
gfortran or anything else if this would help to get identical results.
There were reports of Intel slowing the AMD execution in the past, but 
in my tests, speed is comparable but the results differ.


Thanks
Leonid





Re: [NMusers] cumulative AUC

2019-11-08 Thread Leonid Gibiansky
Actually, CMT=-4 turns compartment 4 off. It needs to be turned on again 
by a record with a positive CMT=4; see the "Plasma urine example" in the 
NONMEM help.
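
As a minimal sketch (record layout and column names assumed, not taken from 
your data set), the reset/turn-on pair before the next treatment period could 
look like this, with compartment 4 holding the cumulative AUC:

; ID  TIME  AMT  DV  CMT  EVID  MDV
   1   168    .   .   -4     2    1   ; turn compartment 4 off (resets A(4) to 0)
   1   168    .   .    4     2    1   ; turn it back on for the next period
   1   168  100   .    1     1    1   ; dose of the next treatment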

Leonid

On 11/8/2019 1:47 PM, Leonid Gibiansky wrote:
Negative CMT (in the data set) resets compartment amounts (CMT=-4 in 
this case)
EVID = 4 resets all system, including A(4)  (if washout time was long 
and all compartments can be set to 0)
Also, it is easy to do post-processing, subtracting Cycle 1 AUC from 
Cycle 1-2 AUC to get Cycle 2 AUC, etc.

Hope this helps
Leonid

On 11/8/2019 12:59 PM, Sam Liao wrote:

Dear nmusers,

I have a question how to reset cumulative AUC in a $DES model, where 
DADT(4) is set up as cumulative AUC of C1.


Because the study is a four-way crossover study, the cumulative value 
of A(4) needs to be reset to zero before each subsequent treatment.


However, NONMEM does not allow setting A(4) to zero in a statement in the 
control stream.


Is there any work around of this problem?

Best regards,

Sam Liao

On 11/7/2019 7:17 AM, Farrell, Colm wrote:


At ICON, it’s our people that set us apart.

As a global provider of drug development solutions, our work is 
serious business. But that doesn’t mean you can’t have fun while you 
do it. With our vision to be the partner of choice in drug 
development, we hire only the best and brightest in the industry. Are 
you one of them?


Due to the continuing demand to support our clients in drug 
development through the integration of modelling and simulation into 
clinical development programs, ICON is looking for an experienced 
pharmacometrician to join our global PKPDM team, with a key 
responsibility to develop and apply M strategies, study designs and 
analyses across all phases of drug development.  With members of the 
PKPDM based in UK and USA, we’re flexible on the location.


*The role*

·Prepare PKPDM strategies to support clinical development 
programs;


·Design, conduct, interpret and prepare appropriate study and 
regulatory summaries of PKPDM activities;


·Build and sustain great relationships with clients;

·Prepare and present scientific publications;

·Serve as scientific reviewer and staff mentor to colleagues.

*What you need*

·PhD (or equivalent experience) in pharmacokinetics/PKPD with proven 
industry or academic post PhD experience;


·Excellent understanding of the integration of PKPDM into clinical 
drug development


·Demonstrated expertise in hands-on population-based modelling e.g. 
NONMEM, Monolix and related software (R, Xpose, PsN);


·Proven mentorship capabilities.

*Why join us*

ICON enjoys a strong reputation for quality and is focused on staff 
development. We make it our mission to attract the most diverse and 
creative minds into the business and we continually strive to provide 
opportunities for our people to excel, grow and build a phenomenal 
career. We understand that our greatest asset is the skills and 
talents of our people and they are truly what set us apart. Other 
than working with a great team of switched on and ambitious people, 
we also offer a very competitive benefits package. This varies from 
country to country so a dedicated recruiter will discuss this with 
you at interview stage. We care about our people as they are the key 
to our success. We provide an open and friendly work environment 
where we empower people and provide them with opportunities to 
develop their long term career. ICON is an equal opportunity employer 
- M/F/D/V and committed to providing a workplace free of any 
discrimination or harassment.


To find out more and apply for the role, please visit: 
https://icon.wd3.myworkdayjobs.com/broadbean_external/job/UK-London-Marlow/Senior-PK-Scientist-II_061382-1 







--
Sam Liao





Re: [NMusers] cumulative AUC

2019-11-08 Thread Leonid Gibiansky
Negative CMT (in the data set) resets compartment amounts (CMT=-4 in 
this case)
EVID = 4 resets all system, including A(4)  (if washout time was long 
and all compartments can be set to 0)
Also, it is easy to do post-processing, subtracting Cycle 1 AUC from 
Cycle 1-2 AUC to get Cycle 2 AUC, etc.

Hope this helps
Leonid

On 11/8/2019 12:59 PM, Sam Liao wrote:

Dear nmusers,

I have a question how to reset cumulative AUC in a $DES model, where 
DADT(4) is set up as cumulative AUC of C1.


Because the study is a four-way crossover study, the cumulative value of 
A(4) needs to be reset to zero before each subsequent treatment.


However, NONMEM does not allow setting A(4) to zero in a statement in the 
control stream.


Is there any work around of this problem?

Best regards,

Sam Liao

On 11/7/2019 7:17 AM, Farrell, Colm wrote:


At ICON, it’s our people that set us apart.

As a global provider of drug development solutions, our work is 
serious business. But that doesn’t mean you can’t have fun while you 
do it. With our vision to be the partner of choice in drug 
development, we hire only the best and brightest in the industry. Are 
you one of them?


Due to the continuing demand to support our clients in drug 
development through the integration of modelling and simulation into 
clinical development programs, ICON is looking for an experienced 
pharmacometrician to join our global PKPDM team, with a key 
responsibility to develop and apply M strategies, study designs and 
analyses across all phases of drug development.  With members of the 
PKPDM based in UK and USA, we’re flexible on the location.


*The role*

·Prepare PKPDM strategies to support clinical development 
programs;


·Design, conduct, interpret and prepare appropriate study and 
regulatory summaries of PKPDM activities;


·Build and sustain great relationships with clients;

·Prepare and present scientific publications;

·Serve as scientific reviewer and staff mentor to colleagues.

*What you need*

·PhD (or equivalent experience) in pharmacokinetics/PKPD with proven 
industry or academic post PhD experience;


·Excellent understanding of the integration of PKPDM into clinical 
drug development


·Demonstrated expertise in hands-on population-based modelling e.g. 
NONMEM, Monolix and related software (R, Xpose, PsN);


·Proven mentorship capabilities.

*Why join us*

ICON enjoys a strong reputation for quality and is focused on staff 
development. We make it our mission to attract the most diverse and 
creative minds into the business and we continually strive to provide 
opportunities for our people to excel, grow and build a phenomenal 
career. We understand that our greatest asset is the skills and 
talents of our people and they are truly what set us apart. Other than 
working with a great team of switched on and ambitious people, we also 
offer a very competitive benefits package. This varies from country to 
country so a dedicated recruiter will discuss this with you at 
interview stage. We care about our people as they are the key to our 
success. We provide an open and friendly work environment where we 
empower people and provide them with opportunities to develop their 
long term career. ICON is an equal opportunity employer - M/F/D/V and 
committed to providing a workplace free of any discrimination or 
harassment.


To find out more and apply for the role, please visit: 
https://icon.wd3.myworkdayjobs.com/broadbean_external/job/UK-London-Marlow/Senior-PK-Scientist-II_061382-1






--
Sam Liao





Re: [NMusers] RE: Stepwise covariate modeling

2019-10-29 Thread Leonid Gibiansky
I think we are making this more difficult than needed, especially for 
people who have just started using NLME. It does not hurt to include 
a statistically significant covariate in the model even if the actual 
effect is small and does not manifest itself on the standard diagnostic 
plots.


It makes sense to check whether there is an error in the model code. 
Plots of random effects versus the covariates of interest should help to see 
whether the covariate model changed the individual random effects. If not 
(that is, if the random effects of the models with and without the covariate 
effect are numerically identical), then the coding is wrong and should be checked.
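
As a quick check of the coding itself, a minimal sketch of a typical covariate 
inclusion on clearance (THETA numbering and the body-weight covariate are 
assumptions, not your model); if ETA(1) from such a run comes back numerically 
identical to ETA(1) of the base model for every subject, the covariate is not 
actually entering the individual predictions:

  TVCL = THETA(1)*(WT/70)**THETA(2)   ; power model of body weight on CL
  CL   = TVCL*EXP(ETA(1))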


Thanks
Leonid



On 10/29/2019 11:00 AM, Luann Phillips wrote:

Hi,

If _all_ of the individual predictions are the same for the model with 
the covariate and without the covariate, then it sounds like the 
original model is at a local minimum instead of a global minimum.


Best regards,

Luann

*From:* owner-nmus...@globomaxnm.com  *On 
Behalf Of *Singla, Sumeet K

*Sent:* Tuesday, October 29, 2019 10:00 AM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Stepwise covariate modeling

Hi!

I am performing stepwise covariate modeling using PsN feature in Pirana. 
I am getting some covariates which are statistically reducing OFV 
significantly, however, when I include those covariates in the PK model, 
the results I am getting are exactly similar to what I am getting in my 
base model, i.e. there is no difference in individual predictions or pop 
predictions or any other diagnostic plots. So, does that mean I should 
move forward WITHOUT including those covariates as they don’t seem to be 
explaining inter-individual variability despite scm telling me that they 
are statistically significant?


Regards,

*Sumeet K. Singla*

*Ph.D. Candidate*

*Division of Pharmaceutics and Translational Therapeutics*

*College of Pharmacy | University of Iowa*

*Iowa City, Iowa*

*sumeet-sin...@uiowa.edu *

*518.577.5881*





Re: [NMusers] Algebraic equations and IOV

2019-10-24 Thread Leonid Gibiansky
We have to be careful with data structure. When Nonmem advances from 
time T1=0 to time T2=24 (whether with DES or exact solution), covariate 
values (OCC in this case) at time T2=24 are used. So for the data file 
presented, 0 to 24 hours will be treated as OCC=2


> will then use CL1=THETA*EXP(ETA1 + ETA3) from 0 to 24h (NOT ETA2)

Value of CL for time > 24 will depend on OCC value of the next record 
(not presented in the data file)


Leonid


On 10/23/2019 2:11 PM, Eleveld-Ufkes, DJ wrote:

Hi Ruben,


As I understand it the way you seem to intend it that CL takes value 
THETA(1)*EXP(ETA(2)) when IOV1=1 (thus IOV2=0) from 0 to 24h, and 
THETA(1)*EXP(ETA(3)) when IOV2=1 (thus IOV1=0) from 24h onwards. These 
are separate occasions and do not overlap, so ETA(2) has no meaning (no 
influence on any predictions or likelihood or anything) after 24 hours. 
The opposite meaning is for ETA(3). So in my view only A) makes sense 
and B) and C) dont. At least as far as I understand your code style.



When you say algebraic equations, do you mean the closed-form solutions 
for particular mammillary models? You only have to know the equations 
that can handle non-zero initial conditions, which you would need 
starting at 24h to do the prediction at 30h. I don't know what structures 
you need; if you have unusual structures then $DES is the only way, I 
think. For some common structures take a look at: Abuhelwa, A.Y., 
Foster, D.J. and Upton, R.N., 2015. ADVAN-style analytical solutions for 
common pharmacokinetic models. /Journal of pharmacological and 
toxicological methods/, /73/, pp.42-48. In the supplements is R code I 
believe. I have gotten some of the models to work in C just by 
cut-paste-compile and fix errors, so they should work in R as well.



I hope this helps.


Warm regards,


Douglas Eleveld



*From:* owner-nmus...@globomaxnm.com  on 
behalf of Ruben Faelens 

*Sent:* Wednesday, October 23, 2019 5:59:04 PM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Algebraic equations and IOV
Dear colleagues,

I am implementing my own simulation engine for non-linear mixed-effects 
models in R. To ensure that I can reproduce models estimated in NONMEM 
or Monolix, I was wondering how IOV is treated in those software.


Usually, IOV is implemented as follows (NONMEM code):
IOV1=0
IOV2=0
IF(OCC.EQ.1) IOV1=1
IF(OCC.EQ.2) IOV2=1
CL = THETA(1) * EXP( ETA(1) + IOV1*ETA(2) + IOV2*ETA(3) )

Assume a data-set with the following items:
TIME;OCC;EVID;AMT
0;1;1;50
24;2;1;50

We will then use CL1=THETA*EXP(ETA1 + ETA2) from 0 to 24h, and 
CL2=THETA*EXP(ETA1+ETA3) from 24h onwards.


When using differential equations, the implementation is clear. We 
integrate the ODE system using CL1 until time 24h. We then continue to 
integrate from 24h onwards, but using CL2.


My question is how this works when we use algebraic equations. Let's 
define CONC_tmt1(CL, X) to represent the concentration at time X due to 
treatment 1 at time 0. For tmt2 (which happens at t=24), we write 
CONC_tmt2(CL, X-24).
Suppose we need a prediction at times 5h and 30h. Without IOV, we would 
calculate this as follows:

CONC(5) = CONC_tmt1(CL, 5)
CONC(30) = CONC_tmt1(CL, 30) + CONC_tmt2(CL, 30-24)

There are multiple options to do this with IOV:
A) Approximate what an ODE implementation would do:
CONC(5) = CONC_tmt1(CL1, 5)
INIT_24 = CONC_tmt1(CL1, 24)
CONC(30) = CONC_virtualTreatment(  Dose=INIT_24, CL2, 30-24 ) + 
CONC_tmt2(CL2, 30-24)
We calculate the elimination of the remaining drug amounts in each 
compartment, and calculate the elimination of them into occasion 2.

Are these equations available somewhere?

B) We ignore overlap in dosing profiles. The full profile of tmt1 (even 
the part in occasion 2) is calculated using CL1.

CONC(5) = CONC_tmt1(CL1, 5)
CONC(30) = CONC_tmt1(CL1, 30) + CONC_tmt2(CL2, 30-24)

C) We can ignore continuity in concentrations. The contribution of tmt1 
in occasion 2 is calculated as if the full treatment occurred under CL2.

CONC(5) = CONC_tmt1(CL1, 5)
CONC(30) = CONC_tmt1(CL2, 30) + CONC_tmt2(CL2, 30-24)

*_Which technique does NONMEM and Monolix use for simulating PK 
concentration using algebraic equations with IOV?_*
This is important for numerical validation between my framework and 
NONMEM / Monolix.


Best regards,
Ruben Faelens


Re: [NMusers] Is there an easy way to get PRED from NONMEM when objective function instead of error model is specified?

2019-09-28 Thread Leonid Gibiansky
To get PRED, NONMEM runs the model with all ETAs fixed to zero. The COMACT 
variable is equal to 1 during this pass. So if IPRED is the concentration 
prediction, then the line:


  IF(COMACT.EQ.1) PREDI=IPRED

will give you concentration predictions when all ETAs are equal to zero 
(that is PRED). This is also convenient to get PRED for BQL observations 
(when M3 is used)


Leonid

On 9/28/2019 8:42 AM, Belo wrote:

Hello NONMEM Community,

Is there an easy way to get PRED from NONMEM when objective function 
instead of error model is specified?  It is possible to add another set 
of differential equations where parameters do not have between-subject 
variability, but it seems cumbersome.


Thanks,

Pavel





Re: [NMusers] NONMEM Unexpected result with the $LEVEL statement

2019-08-13 Thread Leonid Gibiansky

You need to use FNLETA=0 option.
Thanks
Leonid


On 8/13/2019 4:39 PM, Dumortier, Thomas wrote:

Hello

I would like to model inter-study variability on top of the 
inter-subject variability with NONMEM 7.4


A subject belongs to a study, therefore those two random effects are nested

Following the instruction, I code  as below, where ETA8 is at the 
subject level and ETA9 is at the study (STUD) level.


$INPUT  ID STUD TIME TIM2 TAD NT CMT EVID MDV BLOQ AMT DOSE RATE 
LIDV=DV LNDV WT0 TRT1S TRT2S PAS0


...

$PK

...

     TVF1 = THETA(8) + ETA(8) + ETA(9)

     F1  = (EXP(TVF1)/(1+EXP(TVF1)))

...

$LEVEL STUD=(9[8])

The fit with NONMEM 7.4.3 (Fortran compiler) seems correct, and the 
estimates for the variances (OMEGA) of ETA8 and ETA9 are provided.


But when looking at the ETA9 values (the empirical bayes estimates), I 
notice that


(1) Different subjects of the same study have different ETA9 values, 
while I would expect all subjects of a study to have the same ETA9 value

(2) The ETA9 value is proportional to the ETA8 value (i.e., plotting ETA8 
versus ETA9 for all subjects of all studies results in all points on 
the same diagonal line)


Am I coding incorrectly ?

Any help would be more than welcome

Many thanks

Thomas Dumortier

*Thomas Dumortier*

Director Pharmacometrics

Novartis Pharma AG

Postfach

CH-4002 Basel

SWITZERLAND

Phone   +41 61 3240946

Fax     +41 61 3241246

thomas.dumort...@novartis.com __





Re: [NMusers] $ERROR block for feedback mechanism

2019-08-01 Thread Leonid Gibiansky
Alternatively, if OMEGA block is placed before the $PK block, A() 
variables can be used in the PK block, so F1=function(A4) can be defined 
there. Results will be different, as this procedure will use A(4) at 
dose time to reduce the dose while the $DES block version will use 
current A(4) to reduce amount transferred to the second compartment.
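
A minimal sketch of that alternative (assuming the record ordering described 
above; parameter names are taken from the question where available, and E0 and 
BASE are assumed baseline-pH parameters):

$PK
  ; ... other PK parameters as before ...
  CE4 = A(4)                                        ; effect-compartment state at the dose record
  PH  = E0*(1 + EMAX*CE4/(EC50 + CE4))              ; predicted intragastric pH at dosing time
  F1  = 1 - EDMAX*(PH - BASE)/(ED50 + (PH - BASE))  ; relative bioavailability reduced at elevated pH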

Regards
Leonid

On 8/1/2019 3:19 AM, Sven Mensing wrote:

Dear Hyun,

my first idea is to add F to the differential equations.

DADT(1) = -KA*A(1) __

DADT(2) = -K23*A(2)-K20*A(2)+K32*A(3)+KA*A(1)* BioaPH


where only a fraction of the drug (BioaPH) is absorbed to your second 
compartment.



Kind regards


Sven Mensing


Am Do., 1. Aug. 2019 um 08:34 Uhr schrieb "이현아" >:


Hello, NMusers.


I have a question about a feedback mechanism in a PK/PD model.

Drug X is an acid reducing agent, and after multiple oral
administration, the systemic exposure to drug X decreased. Our
previous result suggested that the main cause of the reduced
exposure was the reduced solubility of drug X caused by elevated
intragastric pH after treatment with drug X. Base on this result, we
developed a PK/PD model. The PK/PD profile was best described using
a 2 compartment PK model with lagged first-order absorption model
and sigmoid Emax model linked with an effect compartment. To address
changes in intragastric pH over time affecting the relative
bioavailability (F1), we introduced a feedback path such that
increased intragastric pH decreases the F1 of drug X. 

I have tried to add feedback path in our NONMEM code, but I need
help writing code.

Here is the control stream that I have used:


$SUBROUTINE ADVAN13 TOL=6

$MODEL 

COMP=(DEPOT) 

COMP=(CENTRAL)

COMP=(PERIPH) 

COMP=(EFFECT)




$PK 

CL = THETA(1)*EXP(ETA(1))*(WT/70)**THETA(22)

V2 = THETA(2)*EXP(ETA(2))

Q = THETA(3)*EXP(ETA(3))

V3 = THETA(4)*EXP(ETA(4))

KA = THETA(5)*EXP(ETA(5))

ALAG1 = THETA(6)*EXP(ETA(6))




EMAX = THETA(17)*EXP(ETA(8))

EC50 = THETA(18)*EXP(ETA(9))

KE0 = THETA(19)*EXP(ETA(10))

EDMAX = THETA(20)*EXP(ETA(11)) ; maximal reduction of F1

ED50 = THETA(21)*EXP(ETA(12)) ; intragastric pH producing 50% of
maximal reduction of F1

$DES 

DCP = A(2)/V2

DCE = A(4)

DADT(1) = -KA*A(1)

DADT(2) = -K23*A(2)-K20*A(2)+K32*A(3)+KA*A(1)

DADT(3) = -K32*A(3)+K23*A(2)

DADT(4) = KE0*(DCP-DCE)

$ERROR 

CP = A(2)/V2

CE = A(4)

Q1 = 1 ; dummy indicator for compartment 2

IF (CMT .EQ. 4) Q1=0

PH = E0*(1+(EMAX*CE)/(EC50+CE)) ; Emax model for pH driven by effect
compartment concentration

PHPK = CP*(1-(EDMAX*(PH-7))/(ED50+(PH-7)))  ; Inhibitory effect
model for the feedback by pH for plasma concentration of YH4808, 7
is a maximum intagastric pH by drug X treatment.

F1=THETA(PH) <-I’d like to estimate F1 by changing intragastric pH
in my $ERROR block. 

My question is that how can I make NONMEM code to address changes in
intragastric pH affecting the F1 (feedback mechanism to describe a
phenomenon that PD (intragastric pH) affects PK (F1)) in my $ERROR
block?

Thanks in advance.




*Hyun A Lee*

Department of Clinical Pharmacology and Therapeutics,

Seoul National University College of Medicine and Hospital

101 Daehak-ro, Jongno-gu,

Seoul 03080, Korea

Tel: +82-31-888-9574, Fax: +82-31-888-9575

Mobile: +82-10-8629-5014

E-mail: lha2...@snu.ac.kr  ;
hyu...@gmail.com 






Re: [NMusers] ERROR NUMBER OF BASIC PK PARAMETERS EXCEEDS VALUE OF NPARAM IN $MODEL

2019-07-30 Thread Leonid Gibiansky
Not sure why we would need NPARAM there; is it necessary? I think one 
can skip it.


On the different topic, expression for IWRES is not correct:

IWRES=IRES/(IPRED * EPS(1) + EPS(2))

This is the version that should be used:

IWRES=IRES/SQRT(IPRED**2 * SIGMA(1,1) + SIGMA(2,2))

Also it is more traditional to define IRES=DV-IPRED (negative residual 
corresponds to over-predictions; observed=predicted + residual error).


Regards
Leonid


On 7/29/2019 2:49 PM, Paul Hutson wrote:
I’ve fixed it by increasing NPARAM until it worked (increasing to 7), 
but can anyone explain why I get this message for a 3 compartment model 
with additional compartments to output a time above a concentration 
threshold (#4) and AUC while above that concentration threshold (#5)?  
It appears that NONMEM wants additional parameters added to NPARAM when 
I added calculated value CC in the $DES block.  What is also odd to me 
is that the fault is called on the line defining DADT(2). Thanks in 
advance. Paul


$SUBROUTINES ADVAN6 TOL=3

$MODEL NCOMP=5 NPARAM=6

COMP=(CENTRAL DEFDOSE DEFOBS)

COMP=(TISU1)

COMP=(TISU2)

COMP=(TTIME)

COMP=(AUCC)

;--PK BLOCK

$PK

MIC=0.12 ; SEEKING TIME AND AUC ABOVE 0.12 MCG/ML

TVCL = POP_CL

TVV1=POP_V1

TVQ2=POP_Q2

TVV2=POP_V2

TVQ3=POP_Q3

TVV3=POP_V3

CL=TVCL * EXP(ETA(1))

V1=TVV1 * EXP(ETA(2))

Q2=TVQ2 * EXP(ETA(3))

V2=TVV2 * EXP(ETA(4))

Q3=TVQ3 * EXP(ETA(5))

V3=TVV3 * EXP(ETA(6))

K10=CL/V1

K12=Q2/V1

K21=Q2/V2

K13=Q3/V1

K31=Q3/V3

;DES BLOCK

$DES

CC=A(1)/V1

RT=0

IF(CC.GT.MIC) RT=1

DADT(1) = - A(1) * (K10 + K12 + K13) + A(2) * K21  + A(3)*K31

DADT(2) = A(1) * K12 - A(2) * K21

DADT(3) = A(1) * K13 - A(3) * K31

DADT(4)=RT ; TIME ABOVE THRESHOLD TAT

DADT(5)=RT*CC ; AUC ABOVE THRESHOLD AUCAT

;- ERROR MODEL --

$ERROR

IPRED = F

    IRES=IPRED-DV

    Y=IPRED * (1 + EPS(1)) + EPS(2)

    IWRES=IRES/(IPRED * EPS(1) + EPS(2))

TAT=A(4)

AUCAT=A(5)

TIS2 = A(2)/V2

TIS3 = A(3)/V3

Paul Hutson, PharmD, BCOP

Professor

UWisc School of Pharmacy

T: 608.263.2496

F: 608.265.5421





Re: [NMusers] Help with a PK code

2019-04-23 Thread Leonid Gibiansky
The code will work as written (if you add ENDIF after TVVM = THETA(6) and 
define all the parameters CL, V1, Q, V2, KINT, ...), but mechanistically 
it is not a good idea to have two models for two dose levels. You may 
want to try the QSS model with non-constant Rtot (the MM model is usually good 
when Rtot is low, while QSS is good when Rtot shows accumulation, so maybe 
this is why you see the MM model at low doses and QSS at high doses).
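
For reference, a minimal sketch of the $PK block with the ENDIF added and the 
individual parameters defined (structure only; ETAs, covariates, and the unused 
KDEG are omitted, so this is not a complete model):

$PK
  CL = THETA(1)
  V1 = THETA(2)
  Q  = THETA(3)
  V2 = THETA(4)

  KM = 0
  VM = 0
  IF(DOS.LT.200) THEN
    KM = THETA(5)
    VM = THETA(6)
  ENDIF

  KSS  = 0
  KINT = 0
  RMAX = 0
  IF(DOS.GT.100) THEN
    KSS  = THETA(7)
    KINT = THETA(8)
    RMAX = THETA(9)
  ENDIF

  K   = CL/V1
  K12 = Q/V1
  K21 = Q/V2
  S1  = V1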


Also, what is measured, is it free or total concentration? This part of 
the code was not shown, and it depends on the assay (for QSS part of the 
model).


Thanks
Leonid


On 4/23/2019 8:34 AM, Niurys.CS wrote:

Dear All,

I'm working on the population pharmacokinetics of a mAb, in this study
  4 dose levels (50, 100, 200 and 400 mg) were evaluated. I tested
different models, but none of them fit well; that's why I decided to
find for each dose level the best model. I found the two lower dose
levels fitted to Michaelis Menten + CL linear model and the two higher
dose levels fitted to QSS Rtot model.
I think if I use this code, I'll find the best model for my data, so I
appreaciate your  suggestions:

$PK

TVCL= THETA(1)
TVV1= THETA(2)
TVQ = THETA(3)
TVV2 = THETA (4)

TVKM = 0
TVVM = 0

IF(DOS.LT.200) THEN
TVKM = THETA (5)
TVVM = THETA (6)

TVKSS = 0
TVKINT = 0
TVKDEG = 0
TVRMAX = 0

IF(DOS.GT.100) THEN

TVKSS = THETA (7)
TVKKINT = THETA(8)
TVRMAX = THETA(9)

ENDIF


K   = CL/V1
K12 = Q/V1
K21 = Q/V2
S1 = V1

;--
$DES

CONC=0.5*(A(1)/V1-RMAX-KSS)+0.5*SQRT((A(1)/V1-RMAX-KSS)**2+4*KSS*A(1)/V1)

DADT(1) = 
-(K+K12)*CONC*V1+K21*A(2)-KINT*RMAX*CONC*V1/(KSS+CONC)-VM*CONC*V1/(KM+CONC)
DADT(2) =  K12*CONC*V1-K21*A(2)


Thank you,

Regards,
Niurys de Castro





Re: [NMusers] Simulation error

2019-04-22 Thread Leonid Gibiansky
Nonmem needs to allocate memory for all variables stored in the MSF 
file. For that, it counts THETAs and ETAs and SIGMAs in the control 
stream. If some of them are not listed (often we do not need all 
parameters of the model for a specific simulation), one needs to tell 
the program how much memory is needed

(e.g., $SIZES LTH= 7 LVR= 2)

One can always keep all THETAs, OMEGAs, and SIGMAs in the control stream, for 
example by adding a dummy variable (in the ERROR block)

TEMP=THETA(5)+ETA(3)+EPS(2)
that lists all unused parameters (or all parameters).

The second problem is related to the absence of one of the MSF files. 
The MSFI option actually reads two files, FILENAME.MSF and 
FILENAME_ETAS.MSF. Both should be in the same directory, the one specified in 
the control stream for the FILENAME.MSF file.
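
For example, a minimal simulation control stream along these lines (file names, 
data items, and the sizes are placeholders) could look like:

$SIZES LTH=7 LVR=2                    ; sized for the parameters stored in the MSF
$PROBLEM simulation restarted from an MSF file
$DATA simdata.csv IGNORE=@
$INPUT ID TIME AMT DV EVID MDV
; ... $SUBROUTINES / $PK / $DES / $ERROR as in the estimation run ...
$MSFI run1.msf                        ; run1_ETAS.msf must be in the same directory
$SIMULATION (12345) ONLYSIMULATION SUBPROBLEMS=100
$TABLE ID TIME DV NOPRINT ONEHEADER FILE=simout.tab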


Leonid




On 4/20/2019 11:11 AM, Ayyappa Chaturvedula wrote:

Dear All,
I am trying to run a simulation using $MSFI option instead of giving 
$THETA and $OMEGA.  One of the model gave the following error and a 
recommendation:

  F_LVR AND/OR F_LTH ARE TOO SMALL TO DEAL WITH INPUT FROM MSFI FILE.
  RECOMMEND INSERTING AS FIRST LINE OF CONTROL STREAM:
  $SIZES LTH= 7 LVR= 2.

I changed the control file with the recommended $SIZES LTH= 7 LVR= 2 as 
first line in the control file.  But, then I get the following error 
message:


  WARNING: EXTRA MSF FILE COULD NOT BE OPENED: msf2_ETAS
*** Error in `./nonmem': malloc(): memory corruption: 0x0400f470 ***

I appreciate any help resolving this issue.
Regards,
Ayyappa




Re: [NMusers] SAEM with MAXEVAL= 0 possible? or any work around?

2019-03-04 Thread Leonid Gibiansky

Hi Andrew,

SAEM equivalent of MAXEVAL=0 would be EONLY=1
Results should not depend on the method, so you should be safe just using 
FOCEI for model evaluation. Also, what was the problem with 
"non-convergence of the base model"; was it just error code 134? If the 
model is stable (the solution does not depend on initial conditions, RSEs 
are reasonable) then error code 134 is not a real problem, and you can 
continue to use FOCEI with that. Often, it helps to clean the data and 
remove obvious outliers (it makes the model more stable and more likely to 
converge).
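
For evaluating a data set against a published model, one commonly used pattern 
(a sketch only, with placeholder values; exact options may need adjusting for 
your NONMEM version) is to fix the published parameters and run an 
evaluation-only importance-sampling step, which plays the role of MAXEVAL=0:

$THETA 2.5 FIX 45 FIX 1.1 FIX     ; published typical values (placeholders)
$OMEGA 0.09 FIX 0.2 FIX 0.1 FIX   ; published IIV variances (placeholders)
$SIGMA 0.04 FIX                   ; published residual variance (placeholder)
$ESTIMATION METHOD=IMP INTERACTION EONLY=1 NITER=5 ISAMPLE=1000 PRINT=1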


Thanks
Leonid



On 3/4/2019 9:56 PM, Andrew Tse wrote:

Dear All,

I have a data file that I would like to fit into other people's model 
(using their THETA values to run my datafile) to see how good the 
predictions are.
I have switched to SAEM from FOCE due to non-convergence issue for my 
base model, therefore I would like to use SAEM method to evaluate model 
fitting with published models for a fair comparison. When I tried the 
same MAXEVAL=0 with SAEM, I have got an error message on MAXEVAL must be 
at least 1 with SAEM. There are very little discussion on this topic 
with SAEM. Can anyone please shed some light on any work around? If 
there is no work around for using published model's THETA values to run 
my data file with SAEM, how should I create a fair comparison to compare 
my SAEM base model with published models that are run on FOCE?


Thank you.

Kind regards,
Andrew Tse

Research Pharmacist




Re: [NMusers] Strange PRED prediction in SAEM with M3 BQL handling

2019-02-22 Thread Leonid Gibiansky

One can get PRED at BQLs using COMACT option, see manual:
"COMACT=1  is a pass with final thetas and zero-valued etas".

Things computed at this stage are based on THETAS with zero ETAs and 
thus provide PRED values. E.g.,

IPRED = ...
IF(COMACT==1) PRED1=IPRED

creates the PRED1 variable that is a population prediction even at BQL 
records.


Have not tried it but it looks like it can be used to create population 
predictions of other derived variables as well (like AUC, half-life, 
etc.) that need to be computed based on THETAs (with ETAs equal to zero).


Leonid


On 2/22/2019 5:11 AM, Smit, Cornelis (Klinische Farmacie) wrote:

Hi Andrew,

When your observation is BLQ and handled with F_FLAG=1, the value reported in 
the PRED column is the likelihood of the observation being < BLQ. So this value 
will be close to 1 when the model is fairly sure that the concentration should 
be BLQ. This might explain why the PREDs might be relatively high in your 
diagnostics here. I usually exclude the BLQ records from the standard GOF plots 
and check for model misspecification with a VPC showing BLQ data (as described in 
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2691472/ ). You can do this 
with the old xpose package. I don’t think there is any way to visualize 
the BLQ prediction in the ‘usual’ GOF but I’m very curious if someone 
else has some ideas regarding this.


Kind regards,

*Cornelis Smit*
Hospital Pharmacist / PhD candidate

Dept. of Clinical Pharmacy
St. Antonius Hospital

Dept. of Pharmacology,

Leiden Academic Centre for Drug Research,

Leiden University, Leiden, The Netherlands

//

/VRIJWARING: Dit e-mail bericht is uitsluitend bestemd voor de 
geadresseerde(n). Verstrekking aan en gebruik door anderen
is niet toegestaan. Als u niet de geadresseerde bent, stel dan de 
verzender hiervan op de hoogte en verwijder het bericht.

Aan de inhoud van dit bericht kunnen geen rechten worden ontleend./

*Van:*owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] 
*Namens *Andrew Tse

*Verzonden:* vrijdag 22 februari 2019 10:23
*Aan:* nmusers@globomaxnm.com
*Onderwerp:* [NMusers] Strange PRED prediction in SAEM with M3 BQL handling

Dear all,

I am running SAEM with M3 BQL handling method via PsN but having some 
strange PRED values in mytab table if someone can shed some light:


I have tried using FOCE (excluding BQL data) & SAEM (excluding BQL data) 
both have normal looking fitting with data in individual plots.


Once I have coded SAEM with M3 codes and include BQL data it showed very 
strange PRED vs time plots (eg. 100 times over prediction at BQL time 
point). IPRED had normal results.


Here are the control stream that I have used:

$PK

  TVCL=THETA(1)

  MU_1=LOG(TVCL)

  CL=EXP(MU_1+ETA(1))

  TVV2=THETA(2)

  MU_2=LOG(TVV2)

  V2=EXP(MU_2+ETA(2))

  TVQ=THETA(3)

  MU_3=LOG(TVQ)

  Q=EXP(MU_3+ETA(3))

  TVV3=THETA(4)

  MU_4=LOG(TVV3)

  V3=EXP(MU_4+ETA(4))

  K23=Q/V2                   ;Distribution rate constant

K32=Q/V3                   ;Distribution rate constant

KA=0

A_0(1)=0

A_0(2)=0

A_0(3)=0

$DES

DADT(1)= -KA*A(1)

DADT(2)= -CL*A(2)/V2-K23*A(2)+K32*A(3)

DADT(3)=             K23*A(2)-K32*A(3)

$ERROR

IPRED=A(2)/V2

W=SQRT(THETA(5)**2+((THETA(6)*IPRED)**2))

IF (LIMI.EQ.1) LIM= 0.05 ;BATCH 1

IF (LIMI.EQ.2) LIM= 0.01 ;BATCH 2

IF (LIMI.EQ.3) LIM= 0.025 ;BATCH 3

IF(BQL.EQ.0) THEN

F_FLAG=0

Y=IPRED+W*ERR(1)

ELSE

F_FLAG=1   ;BQL so Y is likelihood

Y=PHI((LIM-IPRED)/W)

ENDIF

IWRES=(DV-IPRED)/W

IRES=DV-IPRED

My question is that whether there is error in my M3 $ERROR model? or 
whether PRED values for BQL means something else other than prediction 
for BQL data?


Thanks a lot.

Kind regards,

Andrew Tse

Research Pharmacist





Re: [NMusers] OFV or Diagnostic Plot ?? Which one rules...

2019-02-13 Thread Leonid Gibiansky
Sumeet,
DV vs IPRED is only one plot, and the least helpful one. You may want to look at DV 
vs PRED, both on the original scale and on the log-log scale, CWRES vs time and vs PRED, 
distributions and correlations of random effects, etc., and only then decide 
which of the models is better. Based on the description, I would guess 
that the model with proportional error provides a better fit at very low 
concentrations, visible on the log-scale plots. So you may also factor this into 
the decision process. If maximum concentrations are more important, additive error 
may help, but if low concentrations are more important, you may want to use 
a combined or proportional error model.
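
If it helps, a minimal sketch of a combined (additive plus proportional) 
residual-error coding, with the two components estimated as THETAs (numbering 
assumed) and EPS(1) fixed to 1:

$ERROR
  IPRED = F
  W     = SQRT(THETA(3)**2 + (THETA(4)*IPRED)**2)   ; additive SD and proportional CV combined
  Y     = IPRED + W*EPS(1)
  IWRES = (DV - IPRED)/W

$SIGMA
  1 FIX   ; variability carried by THETA(3) and THETA(4)
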
Regards,
Leonid


> On Feb 13, 2019, at 7:28 PM, Singla, Sumeet K  wrote:
> 
> Hi Everyone,
>  
> I am fitting two compartment PK model to Marijuana (THC) concentrations. When 
> I apply proportional error (or proportional plus additive) residual model, I 
> get pretty good fits (except 15% of subjects) at all time points.
> However, when I apply only additive error residual model, I get perfect fits 
> in all subjects but objective functional value is increased by about 20 
> units. DV vs IPRED reveal all concentrations on line of unity.
> My question is: should I go with additive error model which gives me perfect 
> fit but higher OFV or should I go with proportional error model which gives 
> me lower OFV but not so good fit in couple of subjects?
>  
> Regards,
> Sumeet Singla
> Graduate Student
> Dpt. of Pharmaceutics & Translational Therapeutics
> College of Pharmacy- University of Iowa
>  


Re: [NMusers] Why should we avoid using micro rate constants?

2019-02-03 Thread Leonid Gibiansky
It could be just a coding error; could you show the control stream?
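
For reference, the two parameterizations describe the same structural model when 
coded consistently; a minimal sketch of the two $PK fragments for a two-compartment 
model (THETA/ETA numbering assumed) is below. Note that placing the ETAs on the rate 
constants is not statistically identical to placing them on CL and Q, so some 
difference in fit is expected, but a large difference usually points to a coding 
issue such as the scaling (S1) or poor initial estimates for the rate constants.

; CL/V parameterization (e.g., ADVAN3 TRANS4)
  CL = THETA(1)*EXP(ETA(1))
  V1 = THETA(2)*EXP(ETA(2))
  Q  = THETA(3)*EXP(ETA(3))
  V2 = THETA(4)*EXP(ETA(4))
  S1 = V1

; equivalent micro-constant parameterization (e.g., ADVAN3 TRANS1)
  K   = THETA(1)*EXP(ETA(1))   ; = CL/V1
  V1  = THETA(2)*EXP(ETA(2))
  K12 = THETA(3)*EXP(ETA(3))   ; = Q/V1
  K21 = THETA(4)*EXP(ETA(4))   ; = Q/V2
  S1  = V1
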
Thanks
Leonid

> On Feb 3, 2019, at 12:44 PM, Singla, Sumeet K  wrote:
> 
> Hello everyone!
>  
> I have a question. I was trying to build a 2-compartment PK model for 
> marijuana use in occasional and chronic smokers. Initially, I was providing 
> the rate constants K12 and K21 in the PK block and it resulted in poor 
> fitting. Then I changed to CL, V1, V2, Q and it resulted in a proper 
> fit. I was perplexed as to why I couldn't get a proper fit by providing 
> rate constants. I tried to look online but couldn't find any proper 
> explanation about when (or not) we should use micro constants in the PK block 
> to define our model in NONMEM. Does anyone have any useful insights into this?
>  
> Regards,
> Sumeet Singla
> Graduate Student
> Dpt. of Pharmaceutics & Translational Therapeutics
> College of Pharmacy- University of Iowa
>  


Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-21 Thread Leonid Gibiansky

Thanks!
Checked on nm 7.4.3, same result. I also output DV and LNDV for the 
"bad" run (in tab file, in addition to DV that was appended). It is 
interesting that DV was cut (as in the appended version) but LNDV was 
intact and printed with correct rounding. Also added digits to AMT and 
TIME, and those were rounded correctly. So may be this is specific to DV 
field (worrying about checking whether specific runs are affected: if 
AMT, TIME, and covariates are not affected, then it is sufficient to 
check that DV in output = DV in the data file (up to reasonable 
rounding) in the scripts for diagnostic plots to detect this bug).

Thanks
Leonid

On 11/21/2018 6:02 AM, Lindauer, Andreas (Barcelona) wrote:

Hi all,

I managed to create a dummy dataset and model code without proprietary 
information.
I've put the files on github for those of you who are interested in 
investigating further.

https://github.com/lindauer1980/NONMEM_DROP_ISSUE

Included in this repository are 4 model files,

1) without dropping
2) 3 variables dropped,
3) with the WIDE option and 3 dropped variables
4) 3 dropped variables and variables in input file rounded

#1, #3 and #4 give the same OFV, while #2 results in a 220 units higher OFV. If 
you run the models you will observe that in the DV column in the sdtab output 
of #2 the value rounded to the next integer which is clearly in contrast to the 
input dataset.

Again thank you very much for looking into this issue.

Best regards, Andreas.


-Original Message-
From: owner-nmus...@globomaxnm.com  On Behalf Of 
Lindauer, Andreas (Barcelona)
Sent: Mittwoch, 21. November 2018 08:40
To: nmusers@globomaxnm.com
Subject: RE: [NMusers] Potential bug in NM 7.3 and 7.4.2

@Franziska
It is not an scm issue. The scm routine just made me aware of this problem by 
including the DROP statements. I then manually tested it outside of scm with 
the same result.

@Leonid
With the 'bad' model indeed incorrect DV's (with decimals cut off) are output 
to the tab file.

@all
I have investigated further and think the problem is related to the length of 
the lines in the datafile, as Katya suspected.
Turns out that when preparing the dataset in R I did not round derived 
variables (eg. LNDV, some covariates) and as a result, some variables have up 
to 15 decimal places. May not be a problem from a computational point of view, 
but results in some lines in the datafile (when opening in a text editor) are 
as long as 160 characters. If I round my variables to say 3 decimals, the issue 
is gone.
I suspect, that there is a problem in the way NONMEM generates the FDATA file 
when there are data records exceeding a specific number of characters.

I always thought the limit was 300, but apparently it may be less.

Again, thanks to all for thinking through this.


-Original Message-----
From: Leonid Gibiansky 
Sent: Dienstag, 20. November 2018 20:13
To: Lindauer, Andreas (Barcelona) ; 
nmusers@globomaxnm.com
Cc: Ekaterina Gibiansky 
Subject: Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

Thanks!
One more question: in the "bad" model, if you look on output (tab) file, can you detect 
the differences, or it is different inside but output correct DV values to tab file? I think you 
describe it in the email below (that output is "bad") but I just wanted to be 100% sure. 
Then at least we can compare output with the true data (say, in R, read the tab file and csv file 
and compare) and detect the problem not looking on FDATA.
Thanks
Leonid

On 11/20/2018 1:53 PM, Lindauer, Andreas (Barcelona) wrote:

@Leonid
It is the very DV column that is damaged.
In the 'good' model, the one with less than 3 variables dropped or when using 
the WIDE option, DVs show up in sdtab as they are in the input file. While the 
'bad' model cuts off the decimals, e.g.
3.17, 3.19, 3.74 in the input data file (and the good sdtab) become
3.0, 3.0, 3.0 with the bad model

@Katya
Yes, originally I did have lines longer than 80 characters but not longer than 
300. I just did a quick test with keeping all lines <80 chars and the issue 
remains.

@Alejandro
No I don't have spaces in my variables. Neither in the name nor in the
record itself

@Luann
Yes I'm using a csv file. As far as I can see all my variables are numeric, and 
do not contain special characters. The datafile is correctly opened in Excel 
and R. But I will double check.

Thanks to all to help detecting the problem. I will try to make a reproducible 
example with dummy data that can be shared.

Regards, Andreas.

-Original Message-
From: Ekaterina Gibiansky 
Sent: Dienstag, 20. November 2018 16:29
To: Leonid Gibiansky ; Lindauer, Andreas
(Barcelona) ; nmusers@globomaxnm.com
Subject: Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

And one more question, do you have long lines - compared to 80 and to
300 characters that become shorter than these thresholds when you drop the 
third variable?

Regards,

Katya

On 11/20/2018 10:01 A

Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Leonid Gibiansky

Thanks!
One more question: in the "bad" model, if you look on output (tab) file, 
can you detect the differences, or it is different inside but output 
correct DV values to tab file? I think you describe it in the email 
below (that output is "bad") but I just wanted to be 100% sure. Then at 
least we can compare output with the true data (say, in R, read the tab 
file and csv file and compare) and detect the problem not looking on FDATA.

Thanks
Leonid

On 11/20/2018 1:53 PM, Lindauer, Andreas (Barcelona) wrote:

@Leonid
It is the very DV column that is damaged.
In the 'good' model, the one with less than 3 variables dropped or when using 
the WIDE option, DVs show up in sdtab as they are in the input file. While the 
'bad' model cuts off the decimals, e.g.
3.17, 3.19, 3.74 in the input data file (and the good sdtab) become 3.0, 3.0, 
3.0 with the bad model

@Katya
Yes, originally I did have lines longer than 80 characters but not longer than 
300. I just did a quick test with keeping all lines <80 chars and the issue 
remains.

@Alejandro
No I don't have spaces in my variables. Neither in the name nor in the record 
itself

@Luann
Yes I'm using a csv file. As far as I can see all my variables are numeric, and 
do not contain special characters. The datafile is correctly opened in Excel 
and R. But I will double check.

Thanks to all to help detecting the problem. I will try to make a reproducible 
example with dummy data that can be shared.

Regards, Andreas.

-Original Message-
From: Ekaterina Gibiansky 
Sent: Dienstag, 20. November 2018 16:29
To: Leonid Gibiansky ; Lindauer, Andreas (Barcelona) 
; nmusers@globomaxnm.com
Subject: Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

And one more question, do you have long lines - compared to 80 and to
300 characters that become shorter than these thresholds when you drop the 
third variable?

Regards,

Katya

On 11/20/2018 10:01 AM, Leonid Gibiansky wrote:

Never seen it.

This will not solve the problem, but just for diagnostics, have you
found out what is "damaged" in the created data files: is the number
of subjects (and number of data records) the same in both versions
(reported in the output file)? Among columns used in the base model
(ID, TIME, AMT, RATE, DV, EVID, MDV), which are different? (can be
checked if printed out to .tab file)? And which of the data file
versions is interpreted correctly by the nonmem code, with or without
WIDE option?

Thanks
Leonid


On 11/20/2018 6:45 AM, Lindauer, Andreas (Barcelona) wrote:

Dear all,

I would like to share with the group an issue that I encountered
using NONMEM and which appears to me to be an undesired behavior.
Since it is confidential matter I can't unfortunately share code or
data.

I have run a simple PK model with 39 data items in $INPUT. After a
successful run I started a covariate search using PsN. To my surprise
the OFVs when including covariates in the forward step turned out to
be all higher than the OFV of the base model. I mean higher by ~180
units.
I realized that PsN in the scm routine adds =DROP to some variables
in $INPUT that are not used in a given covariate test run.
I then ran the base model again with DROPPING some variables from
$INPUT. And indeed the run with 3 or more variables dropped (using
DROP or SKIP) resulted in a higher OFV (~180 units), otherwise being
the same model.
In the lst files of both models I noticed a difference in the line
saying "0FORMAT FOR DATA" and in fact when looking at the temporarily
created FDATA files, it is obvious that the format of the file from
the model with DROPped items is different.
In my concrete case the issue only happens when dropping 3 or more
variables. I get the same behavior with NM 7.3 and 7.4.2. Both on
Windows 10 and in a linux environment.
The problem is fixed by using the WIDE option in $DATA.
I'm not aware of any recommendation or advise to use the WIDE option
when using DROP statements in the dataset. But am happy to learn
about it in case I missed it.

Would be great to hear if anyone else had a similar problem in the past.

Best regards, Andreas.

Andreas Lindauer, PhD
Agriculture, Food and Life
Life Science Services - Exprimo
Senior Consultant

Information in this email and any attachments is confidential and
intended solely for the use of the individual(s) to whom it is
addressed or otherwise directed. Please note that any views or
opinions presented in this email are solely those of the author and
do not necessarily represent those of the Company. Finally, the
recipient should check this email and any attachments for the
presence of viruses. The Company accepts no liability for any damage
caused by any virus transmitted by this email. All SGS services are
rendered in accordance with the applicable SGS conditions of service
available on request and accessible at
http://www.sgs.com/en/Terms-and-Conditions.aspx






Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Leonid Gibiansky

Never seen it.

This will not solve the problem, but just for diagnostics, have you 
found out what is "damaged" in the created data files: is the number of 
subjects (and number of data records) the same in both versions 
(reported in the output file)? Among columns used in the base model (ID, 
TIME, AMT, RATE, DV, EVID, MDV), which are different? (can be checked if 
printed out to .tab file)? And which of the data file versions is 
interpreted correctly by the nonmem code, with or without WIDE option?


Thanks
Leonid


On 11/20/2018 6:45 AM, Lindauer, Andreas (Barcelona) wrote:

Dear all,

I would like to share with the group an issue that I encountered using NONMEM 
and which appears to me to be an undesired behavior. Since it is a confidential 
matter I unfortunately can't share code or data.

I have run a simple PK model with 39 data items in $INPUT. After a successful 
run I started a covariate search using PsN. To my surprise the OFVs when 
including covariates in the forward step turned out to be all higher than the 
OFV of the base model. I mean higher by ~180 units.
I realized that PsN in the scm routine adds =DROP to some variables in $INPUT 
that are not used in a given covariate test run.
I then ran the base model again with DROPPING some variables from $INPUT. And 
indeed the run with 3 or more variables dropped (using DROP or SKIP) resulted 
in a higher OFV (~180 units), otherwise being the same model.
In the lst files of both models I noticed a difference in the line saying "0FORMAT 
FOR DATA" and in fact when looking at the temporarily created FDATA files, it is 
obvious that the format of the file from the model with DROPped items is different.
In my concrete case the issue only happens when dropping 3 or more variables. I 
get the same behavior with NM 7.3 and 7.4.2. Both on Windows 10 and in a linux 
environment.
The problem is fixed by using the WIDE option in $DATA.
I'm not aware of any recommendation or advise to use the WIDE option when using 
DROP statements in the dataset. But am happy to learn about it in case I missed 
it.

Would be great to hear if anyone else had a similar problem in the past.

Best regards, Andreas.

Andreas Lindauer, PhD
Agriculture, Food and Life
Life Science Services - Exprimo
Senior Consultant

Information in this email and any attachments is confidential and intended 
solely for the use of the individual(s) to whom it is addressed or otherwise 
directed. Please note that any views or opinions presented in this email are 
solely those of the author and do not necessarily represent those of the 
Company. Finally, the recipient should check this email and any attachments for 
the presence of viruses. The Company accepts no liability for any damage caused 
by any virus transmitted by this email. All SGS services are rendered in 
accordance with the applicable SGS conditions of service available on request 
and accessible at http://www.sgs.com/en/Terms-and-Conditions.aspx





Re: [NMusers] $ERROR block TMDD

2018-11-16 Thread Leonid Gibiansky
You should not expect dramatic differences between the model with 
central binding and peripheral binding. So if the model failed 
(completely failed? or just not good enough?), this is for some other 
reason. You may want to start with a simple linear model, find its 
parameters, then add a Michaelis-Menten part, and only then move to the 
QSS model. At each stage, start from the parameters of the best model of 
the previous stage.


To answer your specific question, equations and error block are two 
independent parts of the model, so you just use


Y=A(1)/V1*(1+EPS(1)) ; measured: free drug in central compartment

(if proportional error model is used).

Note that KM is not defined but is used instead of KSS in some places; 
could that be the reason for the failure? TVKSS is used but not defined...
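
For illustration, a minimal sketch of that consistency fix (assuming THETA(5) is 
meant to be the QSS constant; everything else stays as in the posted code, and the 
proportional residual error is only a placeholder):

$PK
  TVKSS = THETA(5)        ; QSS constant (the posted code defines TVKM here)
  KSS   = TVKSS           ; add *EXP(ETA(5)) later if IIV is needed
$DES
  CONC = 0.5*(A(2)/V2-A(3)-KSS) + 0.5*SQRT((A(2)/V2-A(3)-KSS)**2 + 4*KSS*A(2)/V2)
$ERROR
  Y = A(1)/V1*(1+EPS(1))  ; measured free drug in the central compartment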

Regards,
Leonid


On 11/16/2018 12:17 PM, Amaranth Star wrote:

Hello NONMEM Users,

I’m a relatively new NONMEM user. I have a drug which showed TMDD
(QSS) elimination in a previous study. In that study the drug binds to
its target in the central compartment. I only have the time course of the
drug concentration.  I tried to test the same model as well as a
Michaelis-Menten (MM) approach in my study, but I failed.
Reading some papers (Wan-Su Park et al, 2017 Doi: 10./bcpt.12675; P
Dua et al, 2015 doi:10.1002/psp4.41), I found that the drug can
bind to its target only in the peripheral compartment or in both compartments
at the same time. I want to try these models but I have trouble
writing the $ERROR block. The code I have written for the model in
which the drug binds to its target in the peripheral compartment is given
below:

$INPUT  ID  TIME  AMT TINF RATE DV TAD MDV EVID
$DATA  ADPKD_100918.csv
$SUBROUTINE ADVAN13 TOL=9
$MODEL  COMP=(CENTRAL) COMP=(PERIPH1) COMP=(PERIPH2)
$PK
TVCL=THETA (1)  ;Linear elimination constant from the central Comp
TVV1=THETA(2); Volume of Central Comp
TVQ = THETA(3); Distributional clearance
TVV2 = THETA(4)   ; tissue distribution volumes
TVKM= THETA (5)   ; MM constant
TVVM= THETA(6); Vmax
TVKINT = THETA (7); Internalization constant
TVKSYN = THETA(8); Synthesis rate constant
TVKDEG = THETA(9); Degradation rate constant

CL = TVCL*EXP(ETA(1))
V1 = TVV1*EXP(ETA(2))
Q  = TVQ ;*EXP(ETA(3))
V2 = TVV2;*EXP(ETA(4))
KSS = TVKSS;*EXP(ETA(5))
KINT = TVKINT
KSYN = TVKSYN
KDEG = TVKDEG
K = CL/V1; elimination rate constant
K12 = Q/V1  ; central-tissue rate constant
K21 = Q/V2  ;tissue-central rate constant
S1 = V1
BASE = KSYN/KDEG  ; baseline for target
A_0(3)  = BASE

$DES
CONC=0.5*(A(2)/V2-A(3)-KM)+0.5*SQRT((A(2)/V2-A(3)-KM)**2+4*KSS*A(2)/V2)
DADT(1) = -(K+K12)*A(1)+K21*CONC*V2
DADT(2) = K12*A(1)- K21*CONC*V2 - KINT*A(3)*CONC*V2/(KSS+CONC)
DADT(3) = KSYN - KDEG*A(3) - (KINT-KDEG)*CONC*A(3)/(KSS+CONC)

; CONC = Concentration of free drug in peripheral comp (not measured)
; A1 = Free drug in Central Compart (not measured)
;A2 = Free Drug second compartment amount (not measured)
; A3 = Target (not measured)

Although I have written a differential equation for the total drug in
peripheral compartment, but I have only measured the free drug
concentration in central compartment. I’m not sure how can I write
that in the $ERROR block.

Any suggestion or help will be gratefully received

Regards,
Niurys de Castro Suárez





Re: [NMusers] NONMEM 2 Compartment Zero Order Absorption

2018-10-01 Thread Leonid Gibiansky

you need to use ADVAN3, or,
with ADVAN4, put the dose into CMT=2, define D2 (not D1), and assign a fixed value to KA.
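
For example, a minimal sketch of the ADVAN4 option (the parameterization is a placeholder; 
the dose records in the data set must have CMT=2 and RATE=-2 so that the duration D2 is modeled):

$SUBROUTINES ADVAN4 TRANS4
$PK
  D2 = THETA(1)*EXP(ETA(1))  ; duration of the zero-order input into the central compartment
  CL = THETA(2)*EXP(ETA(2))
  V2 = THETA(3)*EXP(ETA(3))
  Q  = THETA(4)*EXP(ETA(4))
  V3 = THETA(5)*EXP(ETA(5))
  KA = 1                     ; depot is unused (doses go to CMT=2), but KA must be defined
  S2 = V2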

Leonid

On 10/1/2018 11:12 AM, Bishoy Hanna wrote:

Hello all,


I am currently trying to model data for a compound that is orally administered. 
A two compartment model with FIRST order oral absorption is not describing the 
data well (in my opinion). Consequently, I am attempting to model the data 
using the same two compartment model except with ZERO order absorption. I am 
under the impression that I am able to use ADVAN 4 TRANS 4 as well as a D1 = 
theta (1). I have included a rate = - 2 for my dosing records in my data set.

My control stream is written as such:
$INPUT ID DOSE DAY TIME=TAD NTPD DV AMT COH ADDL II MDV RATE EVID CMT LDV
$SUBROUTINES ADVAN4 TRANS4
$PK
TVD1 = THETA(1)
D1 = TVD1*EXP(ETA(1))
TVCL=THETA(2)
CL=TVCL*EXP(ETA(2))
TVV2=THETA(3)
V2=TVV2*EXP(ETA(3))
TVQ = THETA(4)
Q=TVQ*EXP(ETA(4))
TVV3=THETA(5)
V3=TVV3*EXP(ETA(5))


However, I get the following error:

WARNINGS AND ERRORS (IF ANY) FOR PROBLEM1
(WARNING  2) NM-TRAN INFERS THAT THE DATA ARE POPULATION.
AN ERROR WAS FOUND IN THE CONTROL STATEMENTS.

THE CHARACTERS IN ERROR ARE:
KA
196  $PK: NO VALUE ASSIGNED TO A BASIC PK PARAMETER.
NMtran failed. There is no output for model 1. Contents of FMSG:

WARNINGS AND ERRORS (IF ANY) FOR PROBLEM1
(WARNING  2) NM-TRAN INFERS THAT THE DATA ARE POPULATION.
AN ERROR WAS FOUND IN THE CONTROL STATEMENTS.
THE CHARACTERS IN ERROR ARE:
KA
196  $PK: NO VALUE ASSIGNED TO A BASIC PK PARAMETER.



My question is whether I can utilize ADVAN 4 TRANS 4 to model zero order 
absorption for an oral administered drug (if so, how do I address the error 
listed above) OR if I have to utilize ADVAN 6?



Thanks in advance for taking time out of your busy day to help!

Regards,
Bishoy Hanna
Scientist
Celgene Corporation
179 Passaic Avenue, S11- 2645A, Summit, NJ, 07901
Phone: (908) 897 - 7733

*
THIS ELECTRONIC MAIL MESSAGE AND ANY ATTACHMENT IS
CONFIDENTIAL AND MAY CONTAIN LEGALLY PRIVILEGED
INFORMATION INTENDED ONLY FOR THE USE OF THE INDIVIDUAL
OR INDIVIDUALS NAMED ABOVE.
If the reader is not the intended recipient, or the
employee or agent responsible to deliver it to the
intended recipient, you are hereby notified that any
dissemination, distribution or copying of this
communication is strictly prohibited. If you have
received this communication in error, please reply to the
sender to notify us of the error and delete the original
message. Thank You.





Re: [NMusers] IMPMAP behavior question

2018-08-23 Thread Leonid Gibiansky
Yes, that is it; I've seen timeouts result in OF drops. Just increase the 
times that I mentioned; it should/could be OK after that.



On 8/23/2018 2:36 PM, Mark Sale wrote:

thanks Leonid,

I looked at that, and oscillation/sampling doesn't seem to be the issue: the mean 
was 20804.0136597062, SD = 2.19872089635881.

And the OBJ is very stable in the minimization part, up until the last 
iteration/covariance iteration.

It was run parallel, and I guess that your comment about not waiting for all 
the workers concerns me. There is a timeout event in the log file:

ITERATION   70
  STARTING SUBJECTS  1 TO8 ON MANAGER: OK
  STARTING SUBJECTS  9 TO   17 ON WORKER1: OK
  STARTING SUBJECTS 18 TO   32 ON WORKER2: OK
  STARTING SUBJECTS 33 TO   85 ON WORKER3: OK
  COLLECTING SUBJECTS1 TO8 ON MANAGER
  COLLECTING SUBJECTS   18 TO   32 ON WORKER2
  COLLECTING SUBJECTS9 TO   17 ON WORKER1
  COLLECTING SUBJECTS   33 TO   85 ON WORKER3
  ITERATION   70
  STARTING SUBJECTS  1 TO8 ON MANAGER: OK
  STARTING SUBJECTS  9 TO   17 ON WORKER1: OK
  STARTING SUBJECTS 18 TO   32 ON WORKER2: OK
  STARTING SUBJECTS 33 TO   85 ON WORKER3: OK
  COLLECTING SUBJECTS1 TO8 ON MANAGER
  TIMEOUT FROM WORKER1
  RESUBMITTING JOB TO LOCAL
  STARTING SUBJECTS  9 TO   17 ON MANAGER: OK
  COLLECTING SUBJECTS   18 TO   32 ON WORKER2

and no mention of collecting subjects 33 to 85 on worker 3, or subjects 9 to 17 
on worker 1.

so, that could be  the problem.
Bob - thoughts?





Mark Sale M.D.
Senior Vice President, Pharmacometrics
Nuventra Pharma Sciences, Inc.
2525 Meridian Parkway, Suite 200
Durham, NC 27713
Phone (919)-973-0383
ms...@nuventra.com

CONFIDENTIALITY NOTICE The information in this transmittal (including 
attachments, if any) may be privileged and confidential and is intended only 
for the recipient(s) listed above. Any review, use, disclosure, distribution or 
copying of this transmittal, in any form, is prohibited except by or on behalf 
of the intended recipient(s). If you have received this transmittal in error, 
please notify me immediately by reply email and destroy all copies of the 
transmittal.

From: Leonid Gibiansky 
Sent: Thursday, August 23, 2018 11:14:51 AM
To: Mark Sale; nmusers@globomaxnm.com
Subject: Re: [NMusers] IMPMAP behavior question

Mark,

IMPMAP procedure produces run.cnv file. There you can find mean and SD
of OF (over the last few iterations that were considered for convergence
stop). I use these numbers for covariate assessment as
iteration-to-iteration numbers oscillate and cannot be reliably compared.

Concerning the last-iteration OF drop, I cannot tell for sure, but I've
seen OF drops in some cases when the main manager does not wait for the
slaves to return the OF of their portion of the data. The pnm file has
parameters TIMEOUTI and TIMEOUT, and I would try to increase them and
see whether this fixes the problem.

Thanks
Leonid




On 8/23/2018 1:54 PM, Mark Sale wrote:

I have a model that seems to be behaving strangely, looking for interpretation 
help


in model building, the OBJ is usually ~20900. Until this model, where, on the 
covariance step (IMPMAP method) the OBJ drops 9000  points (20798 to 11837), 
monitoring from output file below



iteration   70 OBJ=   20798.6782833867 eff.=5530. Smpl.=   1. 
Fit.= 0.99524
   Convergence achieved
   iteration   70 OBJ=   11837.9045704476 eff.=5475. Smpl.=   
1. Fit.= 0.99522

Parameters don't change much (edited .ext file below).

50 1.35E+01 9.96E-01 4.42E-02 9.41E-01 3.05E+01 1.29E-01 20799.68932
60 1.35E+01 9.67E-01 4.45E-02 9.43E-01 3.05E+01 1.29E-01 20792.90665
70 1.35E+01 9.73E-01 4.44E-02 9.44E-01 3.05E+01 1.29E-01 20798.67828
70 1.35E+01 9.73E-01 4.44E-02 9.44E-01 3.05E+01 1.29E-01 11837.90457


Plots don't look particularly different than other model (and look pretty 
good), p values for ETAs are very reasonable, it converges, condition # is 
good. Only two issues:
RSE for 2 OMEGAs is a little large (0.5)
an interoccasion variability term (on V) is very large (~4, exponential). This 
is, I think, related to many subjects with data only at steady state.

Further, when I advance this model, add another covariate, or another IOV on 
CL, to address the issue with SS data, cannot identify Volume uniquely (using 
the final parameter from this model as the initial in the next model), I cannot 
reproduce these results - the OBJ goes back to ~20,800, with essentially the 
same parameter estimates. So I end up rejected all additional covariates in 
this model  (at least by LRT).


other details, running on Windows, 64 bit, Intel compiler, NONMEM version 7.3.


Can I believe this OBJ value? Should I base an additional hypotheses on the 
SEE, rather than the LRT?

But, basically, why is this happening

Re: [NMusers] IMPMAP behavior question

2018-08-23 Thread Leonid Gibiansky

Mark,

IMPMAP procedure produces run.cnv file. There you can find mean and SD 
of OF (over the last few iterations that were considered for convergence 
stop). I use these numbers for covariate assessment as 
iteration-to-iteration numbers oscillate and cannot be reliably compared.


Concerning the last-iteration OF drop, I cannot tell for sure, but I've 
seen OF drops in some cases when the main manager does not wait for the 
slaves to return the OF of their portion of the data. The pnm file has 
parameters TIMEOUTI and TIMEOUT, and I would try to increase them and 
see whether this fixes the problem.


Thanks
Leonid




On 8/23/2018 1:54 PM, Mark Sale wrote:

I have a model that seems to be behaving strangely, looking for interpretation 
help


in model building, the OBJ is usually ~20900. Until this model, where, on the 
covariance step (IMPMAP method) the OBJ drops 9000  points (20798 to 11837), 
monitoring from output file below



iteration   70 OBJ=   20798.6782833867 eff.=5530. Smpl.=   1. 
Fit.= 0.99524
  Convergence achieved
  iteration   70 OBJ=   11837.9045704476 eff.=5475. Smpl.=   1. 
Fit.= 0.99522

Parameters don't change much (edited .ext file below).

50 1.35E+01 9.96E-01 4.42E-02 9.41E-01 3.05E+01 1.29E-01 20799.68932
60 1.35E+01 9.67E-01 4.45E-02 9.43E-01 3.05E+01 1.29E-01 20792.90665
70 1.35E+01 9.73E-01 4.44E-02 9.44E-01 3.05E+01 1.29E-01 20798.67828
70 1.35E+01 9.73E-01 4.44E-02 9.44E-01 3.05E+01 1.29E-01 11837.90457


Plots don't look particularly different than other model (and look pretty 
good), p values for ETAs are very reasonable, it converges, condition # is 
good. Only two issues:
RSE for 2 OMEGAs is a little large (0.5)
an interoccasion variability term (on V) is very large (~4, exponential). This 
is, I think, related to many subjects with data only at steady state.

Further, when I advance this model, add another covariate, or another IOV on 
CL, to address the issue with SS data, cannot identify Volume uniquely (using 
the final parameter from this model as the initial in the next model), I cannot 
reproduce these results - the OBJ goes back to ~20,800, with essentially the 
same parameter estimates. So I end up rejected all additional covariates in 
this model  (at least by LRT).


other details, running on Windows, 64 bit, Intel compiler, NONMEM version 7.3.


Can I believe this OBJ value? Should I base an additional hypotheses on the 
SEE, rather than the LRT?

But, basically, why is this happening?

thanks



Mark Sale M.D.
Senior Vice President, Pharmacometrics
Nuventra Pharma Sciences, Inc.
2525 Meridian Parkway, Suite 200
Durham, NC 27713
Phone (919)-973-0383
ms...@nuventra.com

CONFIDENTIALITY NOTICE The information in this transmittal (including 
attachments, if any) may be privileged and confidential and is intended only 
for the recipient(s) listed above. Any review, use, disclosure, distribution or 
copying of this transmittal, in any form, is prohibited except by or on behalf 
of the intended recipient(s). If you have received this transmittal in error, 
please notify me immediately by reply email and destroy all copies of the 
transmittal.







[NMusers] East Coast Workshop: Modeling Biologics with Target-Mediated Disposition, September 14th 2018

2018-07-10 Thread Leonid Gibiansky



QuantPharm is collaborating with ICON to present a workshop on

“Modeling Biologics with Target-Mediated Disposition”

It will be presented on the East Coast, near Baltimore-Washington 
International Airport, at 6751 Columbia Gateway Drive, Columbia MD 21046


14th September 2018

The workshop is intended for PK scientists with or without prior 
population PK modeling experience. It will provide an overview of the PK 
of biologics, introduce target-mediated drug disposition (TMDD) 
concepts, and discuss various applications of TMDD to drug development 
of biologics. Applications of modeling will include population PK and 
PK-PD, immunogenicity, antibody-drug conjugates, pre-clinical to 
clinical translation, covariate analysis, drug-drug interactions, and 
other topics. Use of different Nonmem estimation methods and 
parallelization for TMDD models will be reviewed. NONMEM codes, inputs 
and outputs will be provided.


More details can be found at http://www.quantpharm.com/Workshop.html

or

https://www.iconplc.com/news-events/events/workshops/modeling-biologics/index.xml

You can register for this workshop and/or the ICON NONMEM / PDxPoP 
beginners/intermediate/advanced methods workshops (September 11-13, 
2018, same location) by sending email to Lisa R. Wilhelm-Lear


lisa.wilh...@iconplc.com
Phone:  301-944-6771
Fax:215-789-9549

Thanks!
Leonid


Below are the links for more information about Nonmem workshops:

Introductory/Intermediate NONMEM & PDxPoP Workshop (Days 1 & 2)

https://www.iconplc.com/news-events/events/workshops/introductory-intermediate-nonmem/index.xml

New and Advanced Features of NONMEM 7 Workshop (Day 3):

https://www.iconplc.com/news-events/events/workshops/new-and-advanced-features/index.xml



Re: [NMusers] Cmax/Tmax in the DES block

2018-05-05 Thread Leonid Gibiansky

Thanks!
Looking at the ADVAN6 code, I cannot figure out whether it has the same 
problem (with over-shooting the time interval). I guess not; could you 
confirm?

Thanks
Leonid

On 5/4/2018 2:57 PM, Bauer, Robert wrote:

Leonid and Kyle:

The code of ..\pr\ADVAN13.f90 (LSODA) is viewable and modifiable to the 
user, and it shows that ITASK=1 is always set.  While for the short term 
you can change this setting and recompile ADVAN13, for the long term,  I 
will look into adding an option at the control stream level, where the 
user may be able to set a different behavior (ITASK value) for a 
particular time interval.  I suspect it might be inefficient for ITASK 
to always be set to 4.


Robert J. Bauer, Ph.D.

Senior Director

Pharmacometrics R

ICON Early Phase

820 W. Diamond Avenue

Suite 100

Gaithersburg, MD 20878

Office: (215) 616-6428

Mobile: (925) 286-0769

robert.ba...@iconplc.com

www.iconplc.com

*From:*owner-nmus...@globomaxnm.com 
[mailto:owner-nmus...@globomaxnm.com] *On Behalf Of *Leonid Gibiansky

*Sent:* Friday, May 04, 2018 10:16 AM
*To:* Kyle Baron; Paolo Denti
*Cc:* Bob Leary; nmusers@globomaxnm.com
*Subject:* Re: [NMusers] Cmax/Tmax in the DES block

Interesting links.

So looks like one can provide tcrit=TIME (where TIME is the next EVENT
time) to LSODA, and this will fix the over-shoot problem (as in
ITASK=4). I wonder whether ADVAN13 and ADVAN6 use different settings for
tcrit (or the difference in the results is just the difference in the
amount of over-shooting)

Leonid



On 5/4/2018 12:58 PM, Kyle Baron wrote:
 > I agree with Paolo's explanation, but I suspect it isn't something
 > specific to NONMEM.  When LSODA integrates up to the time of the
 > end of the infusion, there is no guarantee that it will never go
 > past that time.  Rather, it's likely that it will "overshoot" that
 > time and interpolate back to the place where it was supposed to
 > stop (the end of the infusion). That might be why Leonid is seeing the
 > bias.
 >
 > I couldn't find the docs that specifically discuss this, but maybe
 > NONMEM uses ITASK 1 when it advances the system?
 >
 > 
https://github.com/metrumresearchgroup/mrgsolve/blob/master/src/opk_dlsoda_mrg.f#L378-L381

 >
 > You can also see the overshoot discussed in the deSolve docs:
 >
 > 
https://www.rdocumentation.org/packages/deSolve/versions/1.20/topics/lsoda

 >
 > (see the tcrit argument)
 >
 > Kyle
 >
 > On Fri, May 4, 2018 at 11:03 AM, Paolo Denti <paolo.de...@uct.ac.za> wrote:
 >
 > Dear all,
 > Very interesting, just adding my two cents, but not sure it's 100%
 > relevant.
 >
 > When I played with ADVAN13 before and asked NONMEM to print out all
 > the steps in a file, I could see that the time (T) was not always
 > going forward,  but sometimes NONMEM was taking some steps back in
 > time and then proceeding again.
 >
 > Not sure if this is because of how LSODA is implemented in NONMEM. I
 > remember - but I am happy to stand corrected - that some DES work in
 > such a way that they rework the size of the time steps dynamically
 > when they solve the ODEs and if the TOL (precision) criterion is not
 > met, they go back and retry with a small step size. So I was
 > thinking that maybe the difference in Cmax could be from one of
 > those "faux pas" when NONMEM has overshot the solution and then it
 > would take a step back?
 >
 > Just an idea on something to check. But I guess the NONMEM
 > developers may have a quick answer to this one (hint hint).
 >
 > Paolo
 >
 >
 > On 2018/05/04 17:32, Leonid Gibiansky wrote:
 >> The procedure described in the original post is working without extra
 >> points. It is working fine, just have a small bias, and the bias
 >> seems
 >> to be zero with ADVAN6. For all the practical purposes it can be used
 >> without extra points. I was just surprised that it is not exact in
 >> some
 >> cases, so extra check is warranted each time when it is used (may
 >> be we
 >> can switch to ADVAN6 rather than ADVAN13 when computing Cmax/Cmin
 >> in the
 >> DES block).
 >>
 >> Latest NONMEM versions have "finedata" Utility Program that can be
 >> used
 >> to add extra points to the dataset (nm741.pdf, page 237).
 >>
 >> Leonid
 >>
 >>
 >> On 5/4/2018 11:01 AM, Bob Leary wrote:
 >> > One of the problems with all of this is that the user must
 >> manually enter artificial time points (or at least in 2007 had to
 >> do this - I don't know if this has been fixed in
 >> > The latest NM versions) in the data set in order to evaluate the
 >> f

Re: [NMusers] Cmax/Tmax in the DES block

2018-05-04 Thread Leonid Gibiansky
I am not sure that this example is similar. There is no over-shoot in 
the solution in the Nonmem case. This is only visible/relevant when we hack 
inside the $DES block and look for the extremes over the integration 
interval, which happens to be larger than the interval of 
interest for the solution. In effect, we are catching the max/min over 
intermediate computations instead of over the final solution.

Leonid


On 5/4/2018 1:56 PM, Kevin Feng wrote:
This is an interesting overshoot problem. The infusion switching on/off creates a 
discontinuity for the solver, and the discontinuity produces the overshoot.


Please find a good report from MIT in the link: 
https://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/readings/supp_notes/MIT18_03S10_sup.pdf 



“Notice the “overshoot” near the discontinuities. “ at Page 77

We have nice solutions from Phoenix to handle the discontinuity issues.

Kevin

*From:*owner-nmus...@globomaxnm.com 
[mailto:owner-nmus...@globomaxnm.com] *On Behalf Of *Kyle Baron

*Sent:* Friday, May 4, 2018 1:21 PM
*To:* Leonid Gibiansky <lgibian...@quantpharm.com>
*Cc:* Paolo Denti <paolo.de...@uct.ac.za>; Bob Leary 
<bob.le...@certara.com>; nmusers@globomaxnm.com

*Subject:* Re: [NMusers] Cmax/Tmax in the DES block

Here is some code to reproduce / demonstrate the observation in R:

https://github.com/metrumresearchgroup/mrgsolve/issues/369

I think it captures the essence of what you were doing, Leonid.  The 
comparison against the analytical solution just shows that the ODEs are 
still giving the right answer (the overshoot isn't wrong; just a feature 
of the solver).


Kyle

On Fri, May 4, 2018 at 12:15 PM, Leonid Gibiansky 
<lgibian...@quantpharm.com <mailto:lgibian...@quantpharm.com>> wrote:


Interesting links.

So looks like one can provide  tcrit=TIME (where TIME is the next
EVENT time) to LSODA, and this will fix the over-shoot problem (as
in ITASK=4). I wonder whether ADVAN13 and ADVAN6 use different
settings for tcrit (or the difference in the results is just the
difference in the amount of over-shooting)

Leonid



On 5/4/2018 12:58 PM, Kyle Baron wrote:

I agree with Paolo's explanation, but I suspect it isn't something
specific to NONMEM.  When LSODA integrates up to the time of the
end of the infusion, there is no guarantee that it will never go
past that time.  Rather, it's likely that it will "overshoot" that
time and interpolate back to the place where it was supposed to
stop (the end of the infusion). That might be why Leonid is
seeing the
bias.

I couldn't find the docs that specifically discuss this, but maybe
NONMEM uses ITASK 1 when it advances the system?


https://github.com/metrumresearchgroup/mrgsolve/blob/master/src/opk_dlsoda_mrg.f#L378-L381

You can also see the overshoot discussed in the deSolve docs:


https://www.rdocumentation.org/packages/deSolve/versions/1.20/topics/lsoda

(see the tcrit argument)

Kyle

On Fri, May 4, 2018 at 11:03 AM, Paolo Denti
<paolo.de...@uct.ac.za <mailto:paolo.de...@uct.ac.za>
<mailto:paolo.de...@uct.ac.za <mailto:paolo.de...@uct.ac.za>>>
wrote:

     Dear all,
     Very interesting, just adding my two cents, but not sure
it's 100%
     relevant.

     When I played with ADVAN13 before and asked NONMEM to print
out all
     the steps in a file, I could see that the time (T) was not
always
     going forward,  but sometimes NONMEM was taking some steps
back in
     time and then proceeding again.

     Not sure if this is because of how LSODA is implemented in
NONMEM. I
     remember - but I am happy to stand corrected - that some
DES work in
     such a way that they rework the size of the time steps
dynamically
     when they solve the ODEs and if the TOL (precision)
criterion is not
     met, they go back and retry with a small step size. So I was
     thinking that maybe the difference in Cmax could be from one of
     those "faux pas" when NONMEM has overshot the solution and
then it
     would take a step back?

     Just an idea on something to check. But I guess the NONMEM
     developers may have a quick answer to this one (hint hint).

     Paolo


     On 2018/05/04 17:32, Leonid Gibiansky wrote:

     The procedure described in the original post is working
without extra
     points. It is working fine, just have a small bias, and
the bias
     seems
     to be zero with ADVAN6. For all the practical purposes
it can be used
     w

Re: [NMusers] Cmax/Tmax in the DES block

2018-05-04 Thread Leonid Gibiansky

Interesting links.

So looks like one can provide  tcrit=TIME (where TIME is the next EVENT 
time) to LSODA, and this will fix the over-shoot problem (as in 
ITASK=4). I wonder whether ADVAN13 and ADVAN6 use different settings for 
tcrit (or the difference in the results is just the difference in the 
amount of over-shooting)


Leonid



On 5/4/2018 12:58 PM, Kyle Baron wrote:

I agree with Paolo's explanation, but I suspect it isn't something
specific to NONMEM.  When LSODA integrates up to the time of the
end of the infusion, there is no guarantee that it will never go
past that time.  Rather, it's likely that it will "overshoot" that
time and interpolate back to the place where it was supposed to
stop (the end of the infusion). That might be why Leonid is seeing the
bias.

I couldn't find the docs that specifically discuss this, but maybe
NONMEM uses ITASK 1 when it advances the system?

https://github.com/metrumresearchgroup/mrgsolve/blob/master/src/opk_dlsoda_mrg.f#L378-L381

You can also see the overshoot discussed in the deSolve docs:

https://www.rdocumentation.org/packages/deSolve/versions/1.20/topics/lsoda

(see the tcrit argument)

Kyle

On Fri, May 4, 2018 at 11:03 AM, Paolo Denti <paolo.de...@uct.ac.za 
<mailto:paolo.de...@uct.ac.za>> wrote:


Dear all,
Very interesting, just adding my two cents, but not sure it's 100%
relevant.

When I played with ADVAN13 before and asked NONMEM to print out all
the steps in a file, I could see that the time (T) was not always
going forward,  but sometimes NONMEM was taking some steps back in
time and then proceeding again.

Not sure if this is because of how LSODA is implemented in NONMEM. I
remember - but I am happy to stand corrected - that some DES work in
such a way that they rework the size of the time steps dynamically
when they solve the ODEs and if the TOL (precision) criterion is not
met, they go back and retry with a small step size. So I was
thinking that maybe the difference in Cmax could be from one of
those "faux pas" when NONMEM has overshot the solution and then it
would take a step back?

Just an idea on something to check. But I guess the NONMEM
developers may have a quick answer to this one (hint hint).

Paolo


On 2018/05/04 17:32, Leonid Gibiansky wrote:

The procedure described in the original post is working without extra
points. It is working fine, just have a small bias, and the bias
seems
to be zero with ADVAN6. For all the practical purposes it can be used
without extra points. I was just surprised that it is not exact in
some
cases, so extra check is warranted each time when it is used (may
be we
can switch to ADVAN6 rather than ADVAN13 when computing Cmax/Cmin
in the
DES block).

Latest NONMEM versions have "finedata" Utility Program that can be
used
to add extra points to the dataset (nm741.pdf, page 237).

Leonid


On 5/4/2018 11:01 AM, Bob Leary wrote:
> One of the problems with all of this is that the user must
manually enter artificial time points (or at least in 2007 had to
do this - I don't know if this has been fixed in
> The latest NM versions) in the data set in order to evaluate the
fitted model over more grid points than are in the original data.
> To get a fine grid and good resolution on Cmax and Tmax
> You have to enter a lot of extra time points., which is a pain
in the neck. The various ODE routines are also remarkably
sensitive to how the grid is set up.
>
> Much better would be to have a grid generator within NMTRAN that
lets you just specify beginning and end points and number of
points in the grid.
> I would point out that Phoenix NLME PML has always had this
capability.
> Bob Leary
>
> -Original Message-
> From: owner-nmus...@globomaxnm.com
<mailto:owner-nmus...@globomaxnm.com>
<owner-nmus...@globomaxnm.com>
<mailto:owner-nmus...@globomaxnm.com> On Behalf Of Leonid Gibiansky
> Sent: Thursday, May 3, 2018 7:59 PM
> To: nmusers@globomaxnm.com <mailto:nmusers@globomaxnm.com>
> Subject: [NMusers] Cmax/Tmax in the DES block
>
> Interesting experience concerning computation of Cmax and Tmax
(and probably other stats) in the DES block. We used to use this way:
>
> http://cognigencorp.com/nonmem/current/2007-December/4125.html
<https://protect-za.mimecast.com/s/L8T-CAnX51ilA2ops83Si8>
>
> Specifically, reserved the place in the memory:
>
> $ABB COMRES=2
>
> Set these values to zero for each new subject:
> $PK
> IF(NEWIND.LE.1) THEN
> COM(1)=0
> COM(2)=0
> ENDIF
>
> and computed Cmax/TMAX as
> $DES
> IF(CONC.GT.CO

Re: [NMusers] Cmax/Tmax in the DES block

2018-05-04 Thread Leonid Gibiansky
The procedure described in the original post is working without extra 
points. It is working fine, just have a small bias, and the bias seems 
to be zero with ADVAN6. For all the practical purposes it can be used 
without extra points. I was just surprised that it is not exact in some 
cases, so extra check is warranted each time when it is used (may be we 
can switch to ADVAN6 rather than ADVAN13 when computing Cmax/Cmin in the 
DES block).


Latest NONMEM versions have "finedata" Utility Program that can be used 
to add extra points to the dataset (nm741.pdf, page 237).


Leonid


On 5/4/2018 11:01 AM, Bob Leary wrote:

One of the problems with all of this is that the user must manually enter 
artificial time points  (or at least in 2007 had to do this - I don't know if 
this has been fixed in
The latest NM versions)  in the data set in order to evaluate the fitted model 
over more grid points than are in the original data.
To get a fine grid and good resolution on Cmax and Tmax
You have to enter a lot of extra time points., which is a pain in the neck. The 
various ODE routines are also remarkably sensitive to how the grid is set up.

Much better would be to have a grid generator within NMTRAN that lets you just 
specify beginning and end points and number of points in the grid.
  I would point out that Phoenix NLME PML has always had this capability.
Bob Leary

-Original Message-
From: owner-nmus...@globomaxnm.com <owner-nmus...@globomaxnm.com> On Behalf Of 
Leonid Gibiansky
Sent: Thursday, May 3, 2018 7:59 PM
To: nmusers@globomaxnm.com
Subject: [NMusers] Cmax/Tmax in the DES block

Interesting experience concerning computation of Cmax and Tmax (and probably 
other stats) in the DES block. We used to use this way:

http://cognigencorp.com/nonmem/current/2007-December/4125.html

Specifically, reserved the place in the memory:

$ABB COMRES=2

Set these values to zero for each new subject:
$PK
 IF(NEWIND.LE.1) THEN
   COM(1)=0
   COM(2)=0
ENDIF

and computed Cmax/TMAX as
$DES
IF(CONC.GT.COM(1)) THEN
  COM(1)=CONC
  COM(2)=T
ENDIF

$ERROR
CMAX=COM(1)
TMAX=COM(2)

Recently I applied the same procedure to compute Cmax following 1 hr IV infusion. 
Unexpectedly, Tmax was estimated at times > 1 hr, and Cmax was higher than 1-hr 
concentration (true Cmax is at 1 hr).

After some experiments, the explanation was that Nonmem computes concentration-time 
course (with infusion ON) for longer than 1 hr, and resulting Cmax/Tmax are at the end of 
the "computation window" rather than at 1 hr.

Turns out that the results also depend on ADVAN routine. The largest deviation 
(still small, 1-3 percents) was for ADVAN8, ADVAN9, and ADVAN13. ADVAN15 was 
better but still off. ADVAN14 was almost perfect but still slightly (0.01%) 
off. ADVAN6 provided correct answer (up to the precision of the output). So, 
the discrepancy is small but if 1-2% difference is important, one has to be 
careful when using DES block computations.

Thanks
Leonid


NOTICE: The information contained in this electronic mail message is intended 
only for the personal and confidential
use of the designated recipient(s) named above. This message may be an 
attorney-client communication, may be protected
by the work product doctrine, and may be subject to a protective order. As 
such, this message is privileged and
confidential. If the reader of this message is not the intended recipient or an 
agent responsible for delivering it to
the intended recipient, you are hereby notified that you have received this 
message in error and that any review,
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this
communication in error, please notify us immediately by telephone and e-mail 
and destroy any and all copies of this
message in your possession (whether hard copies or electronically stored 
copies). Thank you.






[NMusers] Cmax/Tmax in the DES block

2018-05-03 Thread Leonid Gibiansky
Interesting experience concerning computation of Cmax and Tmax (and 
probably other stats) in the DES block. We used to use this way:


http://cognigencorp.com/nonmem/current/2007-December/4125.html

Specifically, reserved the place in the memory:

$ABB COMRES=2

Set these values to zero for each new subject:
$PK
   IF(NEWIND.LE.1) THEN
 COM(1)=0
 COM(2)=0
  ENDIF

and computed Cmax/TMAX as
$DES
  IF(CONC.GT.COM(1)) THEN
COM(1)=CONC
COM(2)=T
  ENDIF

$ERROR
CMAX=COM(1)
TMAX=COM(2)

Recently I applied the same procedure to compute Cmax following 1 hr IV 
infusion. Unexpectedly, Tmax was estimated at times > 1 hr, and Cmax was 
higher than 1-hr concentration (true Cmax is at 1 hr).


After some experiments, the explanation was that Nonmem computes 
concentration-time course (with infusion ON) for longer than 1 hr, and 
resulting Cmax/Tmax are at the end of the "computation window" rather 
than at 1 hr.


Turns out that the results also depend on ADVAN routine. The largest 
deviation (still small, 1-3 percents) was for ADVAN8, ADVAN9, and 
ADVAN13. ADVAN15 was better but still off. ADVAN14 was almost perfect 
but still slightly (0.01%) off. ADVAN6 provided correct answer (up to 
the precision of the output). So, the discrepancy is small but if 1-2% 
difference is important, one has to be careful when using DES block 
computations.
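
A possible workaround (a sketch only; the observed compartment and scale are assumed) is 
to update the running maximum in $ERROR rather than in $DES, so that the comparison uses 
only the accepted solution at the data-record times; a fine grid of extra EVID=2 records 
is then needed for time resolution:

$ABB COMRES=2
$PK
  IF(NEWIND.LE.1) THEN
    COM(1)=0
    COM(2)=0
  ENDIF
$ERROR
  CONC = A(1)/V1            ; assumed observed compartment and scale
  IF(CONC.GT.COM(1)) THEN
    COM(1)=CONC
    COM(2)=TIME
  ENDIF
  CMAX=COM(1)
  TMAX=COM(2)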


Thanks
Leonid




Re: [NMusers] ETAs & SIGMA in external validation

2018-04-06 Thread Leonid Gibiansky

It would be better to use

$EST METHOD=1 INTERACTION MAXEVAL=0

(at least if the original model was fit with INTERACTION option and 
residual error model is not additive).


One option is to use Para = THETA * EXP(ETA)
You would be changing the model, but the model is not too good anyway 
if you need to restrict Para > 0 artificially.


SIGMA should be taken from the model.
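
For example, a minimal evaluation-only control stream along these lines (the structural 
model, the proportional error, and all numerical values are placeholders; the published 
final estimates go into $THETA/$OMEGA/$SIGMA):

$SUBROUTINES ADVAN1 TRANS2
$PK
  CL = THETA(1)*EXP(ETA(1))   ; exponential ETA keeps CL positive
  V  = THETA(2)*EXP(ETA(2))
  S1 = V
$ERROR
  IPRED = F
  Y     = F*(1+EPS(1))
$THETA  5 FIX  50 FIX         ; published typical values (placeholders)
$OMEGA  0.09  0.04            ; published IIV variances (placeholders)
$SIGMA  0.04                  ; published residual variance, taken from the model
$ESTIMATION METHOD=1 INTERACTION MAXEVAL=0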

Leonid


On 4/6/2018 12:32 PM, Tingjie Guo wrote:

Dear NMusers,

I have two questions regarding the statistical model when performing 
external validation. I have a dataset and would like to validate a 
published model with POSTHOC method i.e. $EST METHOD=0 POSTHOC MAXEVAL=0.


1. The model added etas in a proportional way, i.e. Para = THETA * (1+ETA), 
and this made the posthoc estimation fail due to negative individual 
parameter estimates in some subjects. I constrained it to be positive by 
adding an ABS function, i.e. Para = THETA * ABS(1+ETA), and the estimation 
then runs successfully. I was wondering if there is a better workaround?


2. OMEGA value influences individual ETAs in POSTHOC estimation. Should 
we assign $SIGMA with model value or lab (where external data was 
determined) assay error value? If we use model value, it's 
understandable that $SIGMA contains unexplained variability and thus it 
is a part of the model. However, I may also understand it as that model 
value contains the unexplained variability for original data (in which 
the model was created) but not for external data. I'm a little confused 
about it. Can someone help me out?


I would appreciate any response! Many thanks in advance!

Your sincerely,

Tingjie Guo





Re: [NMusers] Time-Varying Bioavailability on Zero-Order Infusion

2018-03-13 Thread Leonid Gibiansky

Hi Bill,

I think the proposed original solution is the only one if you would like 
to implement it exactly. Maybe it can be approximated somehow? What is 
the real reason for this question? What is the biology behind the 
time-variant IV bioavailability? Or what is the model mis-fit that you 
are trying to fix?


Leonid




On 3/13/2018 9:16 PM, Sebastien Bihorel wrote:

Hi,

I would suggest the following solution which should also work if you 
want to apply some covariate effect on bioavailability:
* On the dataset side, set your RATE variable to -1 and store the actual 
infusion rates into another variable, eg IVRATE

* On the model side:
$PK
...

; assuming the IV infusion are made in compartment 1
F1 = 
R1 = F1*IVRATE

Voila, NONMEM should take care of the dosing in the background as usual.
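
As an illustration, a minimal sketch of this approach (the exponential time course of F1 
and the THETA indices are hypothetical; F1 and R1 are evaluated at the dose record, and 
the infusion duration AMT/IVRATE is preserved because both the amount and the rate are 
scaled by F1):

$PK
  ; data: RATE=-1 on the dose records, true pump rate stored in IVRATE
  F1 = 1 - THETA(5)*EXP(-THETA(6)*TIME)   ; hypothetical time-varying bioavailability
  R1 = F1*IVRATE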

Sebastien


*From: *"Bill Denney" 
*To: *"NMUsers" 
*Sent: *Tuesday, March 13, 2018 8:58:41 PM
*Subject: *[NMusers] Time-Varying Bioavailability on Zero-Order Infusion

Hi NONMEMers,

Is there a good way to assign a time-varying bioavailability on a 
zero-order rate of infusion in NONMEM?  The best I’ve been able to come 
up with is something like the below.  It seems like something that 
should be easier than what I’m doing below (I adjusted it from the real 
example as I was typing it into the email—I could have introduced a bug 
in the process).  And importantly, -9998 is well before any time in my 
database.


(dosing into CMT=1 with an IV infusion)

$MODEL

COMP=(CENTRAL DEFDOSE DEFOBS)    ; central

COMP=(P1)    ; peripheral 1

COMP=(P2)    ; peripheral 2

$PK

   ; Normal stuff and ...

   ; Record the dosing time

   IF (NEWIND.LT.2) THEN

     TDOSE = -

     DOSEEND = -9998

     DOSE = -999

     DOSERATE = 0

   ENDIF

   IF ((EVID.EQ.1 .OR. EVID.EQ.4) .AND. RATE.GT.0) THEN

     TDOSE = TIME

     DOSEEND = TIME + AMT/RATE

     DOSERATE=RATE

     MTDIFF=1

   ENDIF

   MTIME(1)=TDOSE

   MTIME(2)=DOSEEND

   F1 = 0 ; Bioavailability is zero so that the $DES block has full 
control over the rate.


   RATEADJTAU=THETA(10)

   RATEADJMAX=THETA(11)

$DES

   ; Manually control the infusion

   RATEIN = 0

   IF (MTIME(1).LE.T .AND. T.LE.MTIME(2)) THEN

     RATEADJCALC = RATEADJMAX * EXP(-(T - MTIME(1)) * RATEADJTAU)

     RATEIN = DOSERATE - RATEADJCALC

   ENDIF

   DADT(1) = RATEIN - K10*A(1) - K12*A(1) + K21*A(2) - K13*A(1) + K31*A(3)

   DADT(2) = K12*A(1) - K21*A(2)

   DADT(3) =   K13*A(1) - K31*A(3)

Thanks,

Bill






Re: [NMusers] unidentified cause of "F OR DERIVATIVE RETURNED BY PRED IS INFINITE (INF) OR NOT A NUMBER (NAN)"

2018-02-02 Thread Leonid Gibiansky
I think it is overkill to have 10 occasions, and this could be a 
problem. Try starting with 2 occasions (e.g., first 5 doses and the 
rest). If this works, it means that it is just a numerical problem.
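
For example, a sketch of the two-occasion test in the same coding style as the model below 
(splitting at dose 5 is arbitrary; the remaining occasion ETAs and their $OMEGA records 
would be removed):

      OCCA=0
      IF (DOSE_N.LE.5) OCCA=1
      OCCB=1-OCCA
      ETA_CL_OCC=OMG_CL_OCC*(OCCA*ETA(5)+OCCB*ETA(6))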

Also, just to check, try

Y=LOG(F+0.0001)+EPS_EXP

to remove the possibility of F=0.
Thanks
Leonid

On 2/2/2018 4:17 AM, HUI, Ka Ho wrote:

Dear nmusers,

I have trouble figuring out the cause of the error I am encountering:

0INDIVIDUAL NO.   1   ID= 2.00E+00   (WITHIN-INDIVIDUAL) 
DATA REC NO.   2


...

OCCURS DURING SEARCH FOR ETA AT INITIAL VALUE, ETA=0

F OR DERIVATIVE RETURNED BY PRED IS INFINITE (INF) OR NOT A NUMBER (NAN).

In my own experience, this error is usually due to some mistake in the 
coding, such as a div/0 error occurring at the dosing record. However, 
this time, after I eliminated a few other possible causes, it 
appears that the error occurs whenever I remove the occasional 
variability from the model (by fixing its variability to ZERO), and this 
makes no sense to me.


I have attached a minimal reproducible example at the bottom. Any help 
is appreciated and thank you for your attention.


Best Regards,

Matthew

$SIZES      DIMTMP=1000

$PROBLEM    MODEL

$INPUT  C ID DOSE_N AMT=DOSE_UM RATE=RATE_UM TIME CP_UM DV=CP_LN_UM 
MDV EVID


$DATA   DATAFILE_TEST.csv IGNORE=@

$SUBROUTINE ADVAN3 TRANS4

$PK

;SETTING LLOQ

     IF (NEWIND.EQ.0) THEN

     LN_LLOQ=-2.

     ENDIF

;INTEROCCASION VARIABILITY

     OCC1=0

     IF (DOSE_N.EQ.1) OCC1=1

     OCC2=0

     IF (DOSE_N.EQ.2) OCC2=1

     OCC3=0

     IF (DOSE_N.EQ.3) OCC3=1

     OCC4=0

     IF (DOSE_N.EQ.4) OCC4=1

     OCC5=0

     IF (DOSE_N.EQ.5) OCC5=1

     OCC6=0

     IF (DOSE_N.EQ.6) OCC6=1

     OCC7=0

     IF (DOSE_N.EQ.7) OCC7=1

     OCC8=0

     IF (DOSE_N.EQ.8) OCC8=1

     OCC9=0

     IF (DOSE_N.EQ.9) OCC9=1

     OCC10=0

     IF (DOSE_N.EQ.10) OCC10=1

     OCC11=0

     IF (DOSE_N.EQ.11) OCC11=1

     OCC12=0

     IF (DOSE_N.EQ.12) OCC12=1

;THETA DEFINITION

     TVCL=THETA(1)

     TVV1=THETA(2)

     TVQ=THETA(3)

     TVV2=THETA(4)

     OMG_CL=THETA(5)

     OMG_V1=THETA(6)

     OMG_Q=THETA(7)

     OMG_V2=THETA(8)

     SGM_EXP=THETA(9)

     OMG_CL_OCC=THETA(10)

;ETA DEFINITION

     ETA_CL=OMG_CL*ETA(1)

     ETA_V1=OMG_V1*ETA(2)

     ETA_Q=OMG_Q*ETA(3)

     ETA_V2=OMG_V2*ETA(4)

 
ETA_CL_OCC=OMG_CL_OCC*(OCC1*ETA(5)+OCC2*ETA(6)+OCC3*ETA(7)+OCC4*ETA(8)+OCC5*ETA(9)+OCC6*ETA(10)+OCC7*ETA(11)+OCC8*ETA(12)+OCC9*ETA(13)+OCC10*ETA(14)+OCC11*ETA(15)+OCC12*ETA(16))


;EFFECT DEFINITION

;CLEARANCE

     POP_CL=TVCL

     CL=POP_CL*EXP(ETA_CL+ETA_CL_OCC)

;CENTRAL VOLUME

     POP_V1=TVV1

     V1=POP_V1*EXP(ETA_V1)

;INTERCOMPARTMENTAL CLEARANCE

     POP_Q=TVQ

     Q=POP_Q*EXP(ETA_Q)

;PERIPHERAL VOLUME

     POP_V2=TVV2

     V2=POP_V2*EXP(ETA_V2)

;REQUIRED CONSTANT

     S1=V1

$THETA

(0,9,100) ;TVCL

(0,18,100) ;TVV1

(0,0.75,100) ;TVQ

(0,7,100) ;TVV2

(0,0.3,100) ;OMG_CL

(0,0.3,100) ;OMG_V1

(0,0.3,100) ;OMG_Q

(0,0.3,100) ;OMG_V2

(0,0.3,100) ;SGM_EXP

(0,0.3,100) ;OMG_CL_OCC ;Change to 0 FIX to remove occasional 
variability


$ERROR

EPS_EXP=SGM_EXP*EPS(1)

Y=LOG(F)+EPS_EXP

$OMEGA BLOCK(1) 1  FIX  ;ETA_CL

$OMEGA BLOCK(1) 1  FIX  ;ETA_V1

$OMEGA BLOCK(1) 1  FIX  ;ETA_Q

$OMEGA BLOCK(1) 1  FIX  ;ETA_V2

$OMEGA BLOCK(1) 1  FIX  ;ETA_CL_OCC1    ;Change to 0 FIX to remove 
occasional variability


$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC2

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC3

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC4

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC5

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC6

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC7

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC8

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC9

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC10

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC11

$OMEGA BLOCK(1) SAME  ;ETA_CL_OCC12

$SIGMA

1  FIX  ;EPS_EXP

$ESTIMATION MAXEVAL= PRINT=1 METHOD=1 INTERACTION NUMERICAL SLOW 
LAPLACIAN NOABORT


$COVARIANCE PRINT=E

C,ID,DOSE_N,AMT=DOSE_UM,RATE=RATE_UM,TIME,CP_UM,DV=CP_LN_UM,MDV,EVID

.,2,1,32743.59651,5457.27,0,0,0,1,1

.,2,1,0,0,6,730,6.593044534,0,0

.,2,1,0,0,24,140,4.941642423,0,0

.,2,1,0,0,48,20,2.995732274,0,0

.,2,1,0,0,72,5.1,1.62924054,0,0

.,2,1,0,0,96,1.7,0.530628251,0,0

.,2,1,0,0,120,0.97,-0.030459207,0,0

.,2,1,0,0,144,0.005,-3,0,0

.,2,2,27286.33043,4547.72,0,0,0,1,4

.,2,2,0,0,6,920,6.82437367,0,0

.,2,2,0,0,24,46,3.828641396,0,0

.,2,2,0,0,48,0.93,-0.072570693,0,0

.,2,2,0,0,72,0.23,-1.46967597,0,0

c,2,2,0,0,96,14,2.63905733,0,0

.,2,2,0,0,120,0.06,-2.813410717,0,0

.,2,3,27286.33043,4547.72,0,0,0,1,4

.,2,3,0,0,6,910,6.8134446,0,0

.,2,3,0,0,24,16,2.772588722,0,0

.,2,3,0,0,48,0.28,-1.272965676,0,0

.,2,3,0,0,72,0.09,-2.407945609,0,0

.,2,3,0,0,96,0,0,1,2

.,2,4,27286.33043,4547.72,0,0,0,1,4

.,2,4,0,0,6,820,6.70930434,0,0

.,2,4,0,0,24,9.7,2.272125886,0,0

.,2,4,0,0,48,1.7,0.530628251,0,0

.,2,4,0,0,72,0.14,-1.966112856,0,0

.,2,4,0,0,96,0.005,-3,0,0


Re: [NMusers] CMT Remapping

2018-01-23 Thread Leonid Gibiansky

Hi Bill,

I do not think this is possible, but you are making things more 
complicated than needed. You can slice the data set as needed using 
IGNORE/ACCEPT options. For analytical ADVAN models, you often do not 
need CMT item, and if needed, you can introduce extra columns CMT1, 
CMT2, CMT3, and CMT4, for each of the 4 drugs, and CMTALL for the 
combined model. So the only thing that would be needed is to re-code 
parameter names and theta-eta-eps indices when you combine 4 models in 
one. This step would be needed anyway.
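
For example, a sketch of the drug-1 control stream (the column names, including a DRUG flag 
used for slicing, are hypothetical): the CMT1 column gets the reserved label CMT through a 
synonym, and the records of the other drugs are excluded:

$INPUT ID TIME AMT RATE DV EVID MDV DRUG CMT1=CMT CMT2 CMT3 CMT4 CMTALL
$DATA  alldrugs.csv IGNORE=@ IGNORE=(DRUG.NEN.1)
$SUBROUTINES ADVAN4 TRANS4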


Best
Leonid




On 1/23/2018 2:26 PM, Bill Denney wrote:

Hi,

In a follow-up to the previous email, I’m still working on the same 
model where I need to track 4 drugs simultaneously.


For this, I’m working from a single data file, and my initial modeling 
is drug 1 (first control stream), drug 2 (second control stream), drug 3 
(third control stream), drug 4 (fourth control stream).  As I progress 
through modeling, I intend to combine to a single control stream with 
all 4 drugs.


I’d like to be able to speed up my modeling by taking advantage of the 
algebraic ADVAN/TRANS combinations while feasible, but I would also like 
to keep working within the same data file.  Is there any method to remap 
compartment numbers in a model file to an ADVAN-expected compartment?  
I’d like to do something like the following (which doesn’t work):


$SUBROUTINES ADVAN4 TRANS4 CMTMAP=6,11,12,13

Where CMTMAP would remap compartment number 6=1, 11=2, 12=3, and 13=4 
for the expected ADVAN numbering.


Thanks,

Bill



William S. Denney, PhD

Chief Scientist, Human Predictions LLC
+1-617-899-8123
wden...@humanpredictions.com 

This e-mail communication is confidential and is intended only for the 
individual(s) or entity named above and others who have been 
specifically authorized to receive it. If you are not the intended 
recipient, please do not read, copy, use or disclose the contents of 
this communication to others. Please notify the sender that you have 
received this e-mail in error by replying to the e-mail or by calling 
+1-617-899-8123. Please then delete the e-mail and any copies of it. 
Thank you.






Re: [NMusers] Numerical integration and change in clearance over time

2017-12-19 Thread Leonid Gibiansky
One option is to run all analyses in analytical ADVANs and then re-run 
the final model in differential equations. If the results are close, it 
could be used as a justification of the final model. If not, 
differential equations solution should be preferred, I guess. In our 
experience, analytical solution is usually pretty close to the 
differential equations but this may depend on the data (time step of 
records). One option to increase precision is to place extra EVID=2 
records, so that the time step is smaller. Another option is to use 
MTIME for that, similar to:


https://www.page-meeting.org/default.asp?abstract=1361
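
As a minimal illustration of the MTIME mechanism (a sketch with hypothetical parameters, not 
the code of the referenced abstract): clearance changes at a model event time, and a gradual 
change can be approximated by several such steps while keeping an analytical ADVAN:

$SUBROUTINES ADVAN1 TRANS2
$PK
  MTIME(1) = THETA(4)                  ; time at which clearance changes
  CL = THETA(1)*EXP(ETA(1))
  IF (MPAST(1).EQ.1) CL = CL*THETA(5)  ; fractional change after MTIME(1)
  V  = THETA(2)*EXP(ETA(2))
  S1 = V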

Leonid

On 12/19/2017 6:15 PM, Pavel Belo wrote:

Hello NONMEM Users,

When clearance changes over time, the classic analytical solution for 
the 2-compartment linear model is not correct.  Nevertheless, if the 
change is only ~30%, one can say it is still acceptable.  The time required 
to run an analytical and a numerical model is drastically different, 
which can be important for decision making.  Has anyone had 
experience submitting an analysis that used the analytical solution, with clearance 
changing over time, to an agency or a journal?  Can it be acceptable?


Thanks,

Paul





[NMusers] West Coast Workshop: Modeling Biologics with Target-Mediated Disposition

2017-12-19 Thread Leonid Gibiansky

QuantPharm is collaborating with ICON to present a workshop on

“Modeling Biologics with Target-Mediated Disposition”

It will be presented on the West Coast, at San Francisco Airport 
DoubleTree by Hilton, 835 Airport Boulevard, Burlingame, CA on


16th February 2018

The workshop is intended for PK scientists with or without prior 
population PK modeling experience. It will provide an overview of the PK 
of biologics, introduce target-mediated drug disposition (TMDD) 
concepts, and discuss various applications of TMDD to drug development 
of biologics. Applications of modeling will include population PK and 
PK-PD, immunogenicity, antibody-drug conjugates, pre-clinical to 
clinical translation, covariate analysis, drug-drug interactions, and 
other topics. Use of different Nonmem estimation methods and 
parallelization for TMDD models will be reviewed. NONMEM codes, inputs 
and outputs will be provided.


More details can be found at http://www.quantpharm.com/Workshop.html

or

http://www.iconplc.com/news-events/events/workshops/modeling-biologics-with-t/

You can register for this workshop and/or the ICON NONMEM / PDxPoP 
beginners/intermediate/advanced methods workshops (February 13-15, 2018, 
same location) by sending email to Lisa R. Wilhelm-Lear


lisa.wilh...@iconplc.com
Phone:  301-944-6771
Fax:215-789-9549

Thanks!
Leonid


Below are the links for more information about Nonmem workshops:

Introductory/Intermediate NONMEM & PDxPoP Workshop (Days 1 & 2)

http://www.iconplc.com/news-events/events/workshops/introductoryintermediate--1/

New and Advanced Features of NONMEM 7 Workshop (Day 3):

http://www.iconplc.com/news-events/events/workshops/new-and-advanced-features-9/



Re: [NMusers] Use of ACCEPT in $DATA

2017-08-24 Thread Leonid Gibiansky

from the manual:

"When the IGNORE option is used to filter records from the input file, 
the .EQ., =, .NE., and /= symbols perform literal string comparisons. To 
provide a numerical equality comparison, use .EQN. for numerical equals, 
and .NEN. for numerical not equals."


Maybe there is a space there or something; try .EQN.
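
For the times in the example below, a sketch of this (assuming ACCEPT takes the same 
numerical comparison operators as IGNORE):

ACCEPT=(TIME.EQN.0, TIME.EQN.1, TIME.EQN.2, TIME.EQN.4, TIME.EQN.6, TIME.EQN.24)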

Leonid


On 8/24/2017 6:16 PM, Dennis Fisher wrote:

NONMEM 7.4.1

Colleagues

I am trying to use the ACCEPT option in $DATA in order to select a 
subset of records (to evaluate the impact of the # of samples/subject on 
confidence intervals).


I used the following code:
ACCEPT=(TIME=0, TIME=1, TIME=2, TIME=4, TIME=6, TIME=24)

NMTRAN then creates a dataset but — to my surprise — TIME=6 is not in 
the dataset (all the others are).


I am copying the first few rows of the input dataset so that you can see 
what is being provided to NMTRAN:


ID,AGE,MONTHS,SEX,WT,AMT,RATE,_TIME_,EVID,MDV,REPLICATE,IPRED,CWRES,DV,PRED,RES,WRES
1101,12,144,1,30.054,210.38,841.51,0,1,1,1,0,0,0,0,0,0
1101,12,144,1,30.054,0,0,1,0,0,1,187.42,0,179.28,199.26,-19.979,0
1101,12,144,1,30.054,0,0,2,0,0,1,180.92,0,187.92,194.09,-6.1659,0
1101,12,144,1,30.054,0,0,4,0,0,1,169.84,0,177.66,184.37,-6.712,0
1101,12,144,1,30.054,0,0,_6_,0,0,1,160.61,0,153.43,175.39,-21.96,0

The underlined / boldfaced value (6) in the final row is the problem.

I assume that NMTRAN is reading that value as something other than 6.0 
(e.g., 6.01) and thereby omitting it.


I have reviewed NMHELP to see if there is some other way to accomplish 
this.  Ideally, there would be something like:

TIME.GT.5.9.AND.TIME.LT.6.1
but that does not appear to be supported.

The alternative is to modify the dataset to include many possible 
MDV/EVID columns.  However, it would be more elegant to do this in the 
control stream.
Or, if there is some way to find out the exact value that NMTRAN sees, I 
could specify that value.


Any help would be appreciated.

Dennis

Dennis Fisher MD
P < (The "P Less Than" Company)
Phone / Fax: 1-866-PLessThan (1-866-753-7784)
www.PLessThan.com 








[NMusers] Intel AVX and AVX2 set of instructions for compiling Nonmem

2017-07-30 Thread Leonid Gibiansky

Dear All,

Could you share if you have any experience with Intel AVX (Advanced 
Vector Extensions) and AVX2 set of instructions for Intel compiler on 
recent processors: does it work with Nonmem, does it allow to speed up 
the fitting, and by how much. Compiler versions/options and processor 
versions would help.


Thanks
Leonid




Re: [NMusers] RE: question about random seed for simulation

2017-03-09 Thread Leonid Gibiansky
One simple option is to stack simulation data files together, with 
EVID=4 for the first (or the only) dose of the second file.
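
For illustration, a sketch of the stacked data set (the column layout and values are 
hypothetical, and the annotations are not part of the file). Because both designs sit 
under the same ID within one subject record, the same simulated ETAs apply to both parts:

ID,TIME,AMT,RATE,DV,EVID
1,0,100,0,.,1          <- multiple-dose design records ...
1,168,0,0,.,0          <- last sample of the multiple-dose design
1,0,100,0,.,4          <- first dose of the appended single-dose design
1,2,0,0,.,0

The EVID=4 record resets the compartments and allows TIME to restart from 0 for that subject.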

Leonid



On 3/9/2017 6:11 PM, Faelens, Ruben (Belgium) wrote:

Hi Penny,



Nonmem indeed calculates each subject one after the other. The random
values will therefore change. Maybe you can set the random seed every
time you simulate t=0, based on the subject ID?

This may also depend on your data file; have you tried ordering on time
(so the first 50 rows are all t=0 for subject 1 to 50) ?



This largely depends on the simulation software and its design:

As an example: Simulo samples all subjects together at simulation start,
after which it runs the trial design; so the same subjects are sampled
independent of subsequent trial design.



I do not know about other tools (TS.2, simulx, mrgsolve), maybe the
authors of these tools can specify?



Kind regards,

Ruben Faelens



*From:*owner-nmus...@globomaxnm.com
[mailto:owner-nmus...@globomaxnm.com] *On Behalf Of *Zhu, Penny
*Sent:* donderdag 9 maart 2017 19:19
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] question about random seed for simulation



Dear All

I have finished a multiple dose simulation for 600 subjects and want to
perform a single dose simulation (different sampling time) on the same
subjects (same ETA as the first simulation).  I used the same seed for
the simulation step, it turned out the first subject was the same and
the rest of the subjects are not and I am not sure whether this was due
to the fact that the two simulation has different number TIME records.
If so, I wonder what is the proper way to set the simulation seed so
that the ETAs for the second simulation will be identical to the first
one.



I know that I could output the individual parameter estimate from the
first simulation and import them into the second one.  But I was
thinking if the random seed can be synchronized between the two
simulation, it could be an easier solution.



Your help is very much appreciated!



Thank you very much and best regards!

* *

*Penny (Peijuan) Zhu, Ph.D.*

Associate Director Clinical Pharmacology



Cell: 862-926-9079



PD Bio-Pharma CDMA

Sandoz

1N025, 100 College Road West

Princeton, NJ 08540











Re: FW: [NMusers] Parameter uncertainty

2017-02-15 Thread Leonid Gibiansky

One of the tools available for simulations is the Metrum R package

metrumrg

install.packages("metrumrg", repos="http://R-Forge.R-project.org")

Example of applications can be found here:

http://www.page-meeting.org/page/page2006/P2006III_11.pdf

Since the time it was written (2005-2006), NONMEM's simulation options
have been enhanced, so now you can simulate from the model-estimated
uncertainty directly in NONMEM. The R package could still be useful if you
do it from the bootstrap results.
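A minimal sketch of the direct NONMEM route (record values are placeholders; the priors would be the final estimates and the squared standard errors from the estimation run, and the exact prior-record layout depends on the NONMEM version):

$PRIOR NWPRI
$THETAP  2.5 FIX      ; prior mean = final THETA(1) estimate
$THETAPV 0.04 FIX     ; prior variance = SE(THETA(1))**2
$OMEGAP  0.09 FIX     ; prior for OMEGA(1,1)
$OMEGAPD 20 FIX       ; degrees of freedom of the inverse-Wishart OMEGA prior
$SIMULATION (12345) TRUE=PRIOR ONLYSIMULATION NSUB=500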


Leonid



*From:*owner-nmus...@globomaxnm.com
[mailto:owner-nmus...@globomaxnm.com] *On Behalf Of *Fanny Gallais
*Sent:* Wednesday, February 15, 2017 5:55 AM
*To:* nmusers@globomaxnm.com
*Subject:* [NMusers] Parameter uncertainty

Dear NM users,



I would like to perform a simulation (on R) incorporating parameter
uncertainty. For now I'm working on a simple PK model. Parameters were
estimated with NONMEM. I'm trying to figure out what is the best way to
assess parameter uncertainty. I've read about using the standard errors
reported by NONMEM and assume a normal distribution. The main problem is
this can lead to negative values. Another approach would be a more
computational non-parametric method like bootstrap. Do you know other
methods to assess parameter uncertainty?





Best regards



F. Gallais















Re: [NMusers] Objective function

2017-01-16 Thread Leonid Gibiansky
No, you have to run some other method (e.g., FOCEI or IMP in evaluation
mode) to get an OF suitable for the tests.
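For example (a sketch only; sample sizes and iteration counts would be tuned to the problem):

$EST METHOD=SAEM INTERACTION NBURN=3000 NITER=1000 PRINT=100
$EST METHOD=IMP  INTERACTION EONLY=1 ISAMPLE=3000 NITER=10 PRINT=1  ; evaluation-only OF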


On 1/16/2017 4:24 PM, sbihorel wrote:

Thanks a lot Robert, Leonid, and Nick for your responses.

One follow-up question based on Leonid's quotes (~ the objective
functions that are displayed ***during*** SAEM and BAYES analyses are
not valid for assessing minimization or for hypothesis testing): can a
user rely on the ***final*** value of the objective function for these
methods to assess minimization or hypothesis testing (emphasis on the
terms between ***)?


On 1/16/2017 1:33 PM, Nick Holford wrote:

Hi,

Note also that the NONMEM objective function is not based on the full
log-likelihood. It is missing a constant factor which means it is not
simple to compare to -2LL using the full log-likelihood reported by
other software (e.g. SAS).

The next release of NONMEM is expected to include both the current
objective function value and the value based on the full
log-likelihood e.g. using METHOD=CONDITIONAL

 TOTAL DATA POINTS NORMALLY DISTRIBUTED (N):  574
 N*LOG(2PI) CONSTANT TO OBJECTIVE FUNCTION: 1054.9414361189642
 OBJECTIVE FUNCTION VALUE WITHOUT CONSTANT: 5801.3214723745577
 OBJECTIVE FUNCTION VALUE WITH CONSTANT: 6856.2629084935215
 REPORTED OBJECTIVE FUNCTION DOES NOT CONTAIN CONSTANT

Best wishes,

Nick

On 17-Jan-17 07:18, Leonid Gibiansky wrote:

Note however that (from Nonmem 7.3 guide):

-
The objective function SAEMOBJ that is displayed during SAEM analysis
is not valid for assessing minimization or for hypothesis testing. It
is highly stochastic, and does not represent a marginal likelihood
that is integrated over all possible eta, but rather, is the
likelihood for a given set of etas.
---
Full Markov Chain Monte Carlo (MCMC) Bayesian Analysis Method:

A maximum likelihood objective function is also not obtained,...

As mentioned earlier, the objective function (MCMCOBJ) that is
displayed during BAYES analysis is not valid for assessing
minimization or for hypothesis testing in the usual manner. It does
not represent a likelihood that is integrated over all possible eta
(marginal density), but the likelihood at a given set of etas.
-

while of the other methods (ITS, IMP, IMPMAP) the OF value can be
used similar to FO, FOCEI, LAPLACE.

Leonid


On 1/16/2017 12:12 PM, Bauer, Robert wrote:

Sebastien:
All methods in NONMEM are -2LL based.

Robert J. Bauer, Ph.D.
Pharmacometrics R
ICON Early Phase
820 W. Diamond Avenue
Suite 100
Gaithersburg, MD 20878
Office: (215) 616-6428
Mobile: (925) 286-0769
robert.ba...@iconplc.com <mailto:robert.ba...@iconplc.com>
www.iconplc.com <http://www.iconplc.com>

-Original Message-
From: owner-nmus...@globomaxnm.com
<mailto:owner-nmus...@globomaxnm.com>
[mailto:owner-nmus...@globomaxnm.com] On Behalf Of sbihorel
Sent: Monday, January 16, 2017 3:13 AM
To: nmusers@globomaxnm.com <mailto:nmusers@globomaxnm.com>
Subject: [NMusers] Objective function

Hi,

This might appear like a very naive question but I could not find the
information in the NONMEM use guide: what is the value of the objective
function for ITS, IMP, SAEM, and BAYES estimation methods in
relation to
the log likelihood? Is it the standard minus 2 times the log likelihood
like for FO, FOCE, FOCEI, or LAPLACE methods?

Thank you

Sebastien










Re: [NMusers] Objective function

2017-01-16 Thread Leonid Gibiansky

Note however that (from Nonmem 7.3 guide):

-
The objective function SAEMOBJ that is displayed during SAEM analysis is 
not valid for assessing minimization or for hypothesis testing. It is 
highly stochastic, and does not represent a marginal likelihood that is 
integrated over all possible eta, but rather, is the likelihood for a 
given set of etas.

---
Full Markov Chain Monte Carlo (MCMC) Bayesian Analysis Method:

A maximum likelihood objective function is also not obtained,...

As mentioned earlier, the objective function (MCMCOBJ) that is displayed 
during BAYES analysis is not valid for assessing minimization or for 
hypothesis testing in the usual manner. It does not represent a 
likelihood that is integrated over all possible eta (marginal density), 
but the likelihood at a given set of etas.

-

while of the other methods (ITS, IMP, IMPMAP) the OF value can be used 
similar to FO, FOCEI, LAPLACE.


Leonid


On 1/16/2017 12:12 PM, Bauer, Robert wrote:

Sebastien:
All methods in NONMEM are -2LL based.

Robert J. Bauer, Ph.D.
Pharmacometrics R
ICON Early Phase
820 W. Diamond Avenue
Suite 100
Gaithersburg, MD 20878
Office: (215) 616-6428
Mobile: (925) 286-0769
robert.ba...@iconplc.com 
www.iconplc.com 

-Original Message-
From: owner-nmus...@globomaxnm.com 
[mailto:owner-nmus...@globomaxnm.com] On Behalf Of sbihorel
Sent: Monday, January 16, 2017 3:13 AM
To: nmusers@globomaxnm.com 
Subject: [NMusers] Objective function

Hi,

This might appear like a very naive question but I could not find the
information in the NONMEM use guide: what is the value of the objective
function for ITS, IMP, SAEM, and BAYES estimation methods in relation to
the log likelihood? Is it the standard minus 2 times the log likelihood
like for FO, FOCE, FOCEI, or LAPLACE methods?

Thank you

Sebastien





Re: [NMusers] A problem with numerical integration difficulties for a complex model

2017-01-05 Thread Leonid Gibiansky
I think it would help if you provide the code and the reference and the 
sample data set: some published papers contain typos, so it is hard to 
give a general advice based on limited information

Thanks
Leonid


On 1/5/2017 10:44 AM, Zhu, Penny wrote:

Dear Ankekatrin, Jeroen and Sam

Many thanks for your kind suggestions.



I tried ADVAN9, ADVAN13 with different TOL numbers.  They both returned
errors as well.



For ADVAN13

ERROR IN LSODI1: CODE 205

ERROR IN LSODI1: CODE 205





For ADVAN9

ERROR IN LSODA -1

ERROR IN LSODA -5



I also tried adding more simulation timepoints.  I don’t think it made a
difference.



Does anybody have other suggestion?  Should I try Monolix given that
NONMEM has numerical difficulties



Thank you very much and best regards!

* *

*Penny (Peijuan) Zhu, Ph.D.*

Associate Director Clinical Pharmacology



Cell: 862-926-9079



PD Bio-Pharma CDMA

Sandoz

1N025, 100 College Road West

Princeton, NJ 08540







*From:*ankekatrin.v...@uni-saarland.de
[mailto:ankekatrin.v...@uni-saarland.de]
*Sent:* Wednesday, January 04, 2017 3:33 PM
*To:* Zhu, Penny 
*Subject:* Re: [NMusers] A problem with numerical integration
difficulties for a complex model



Hey Penny,
From my limited experience with TMDD modeling, ADVAN13 or ADVAN9 works better.
Good luck
Katrin

From: Sam Liao
Sent: Wednesday, 4 January, 21:28
Subject: Re: [NMusers] A problem with numerical integration difficulties
for a complex model
To: Zhu, Penny, nmusers@globomaxnm.com 

Hi Penny,
You could try other ODE solver, such as ADVAN13 to see if you can get
around this problem.
But, If the data are too sparse, the time gap between two data points
might be too long to solve the ODE, you may want to consider adding some
dummy data points with EVID=2.  Good luck with your TMDD modeling.

Best regards,
Sam Liao, PhD.
Pharmax Research

On 1/4/2017 8:53 AM, Zhu, Penny wrote:



Dear all

Sorry to bother you.



I have encountered a problem with a complex model, involving a full TMDD
PK model linked to a complicated multi-compartment PD model. I was only
doing a simulation based on a published paper, so I would prefer not to
alter or simplify the model.



The model kept having numerical difficulties no matter how I modify the
TOL value (I have changed it from 0 to 6).  And I am using ADVAN 8.



Does someone have any idea what else I should try to do to resolve it?
Here is the error message.  The nonmem code and data are quite long and
I could send you the code/data as files if requested.  Many many thanks
in advance for your kind help.



 NUMERICAL DIFFICULTIES WITH INTEGRATION ROUTINE.
 NO. OF REQUIRED SIGNIFICANT DIGITS IN SOLUTION VECTOR
 TO DIFFERENTIAL EQUATIONS,   6, MAY BE TOO LARGE.
 MESSAGE ISSUED FROM SIMULATION STEP





Best





Thank you very much and best regards!



*Penny (Peijuan) Zhu, Ph.D.*



PD Bio-Pharma CDMA

Sandoz

1N025, 100 College Road West

Princeton, NJ 08540







-- Sam Liao Pharmax Research Inc. 14 Upland, Irvine, CA 92602 Phone:
201-9882043 efax: 720-2946783 www.pharmaxresearch.com




Re: [NMusers] residual variability

2016-09-29 Thread Leonid Gibiansky
What I meant was that after you remove the random effect on the lag time 
(and stabilize the model) you may introduce inter-individual variability 
on delay by using transit compartment with random effect or zero-order 
absorption with random effect on duration of infusion (followed by the 
first-order).
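A minimal sketch of the zero-order option (THETA/ETA numbering is illustrative, and the dose records would need RATE=-2 in the data set so that D1 is taken from $PK):

$SUBROUTINE ADVAN2 TRANS2
$PK
 KA = THETA(1)*EXP(ETA(1))
 CL = THETA(2)*EXP(ETA(2))
 V  = THETA(3)*EXP(ETA(3))
 D1 = THETA(4)*EXP(ETA(4))   ; duration of the zero-order input into the depot
 S2 = V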

Leonid


On 9/29/2016 12:53 AM, Sultan,Abdullah S wrote:

Hi Dr. Gibiansky


Thanks, removing the random effect on the lag time help stabilize the model.


I used a transit compartment and sequential and it did not help, I still
get very large parameter estimates.


I am using Monolix for the modeling


Thanks,

Abdullah



*From:* Leonid Gibiansky <lgibian...@quantpharm.com>
*Sent:* Tuesday, September 27, 2016 5:49:00 PM
*To:* Sultan,Abdullah S; nmusers@globomaxnm.com
*Subject:* Re: [NMusers] residual variability

Abdullah,
Do you have random effect on the lag time? Models with random effects on
the lag time are very difficult to work with, try to remove the lag and
use the transit compartment(s) to describe the delay. Make sure you have
INTERACTION option on the estimation step, use METHOD=1. Sometimes
models with sequential 0-order and 1-st order absorption describe delay
better (with estimated D1 of infusion to the depot compartment).
Leonid





On 9/27/2016 1:12 PM, Sultan,Abdullah S wrote:

Hi everyone,


I have a rich data set for a drug administered orally. The drug has slow
absorption (Tmax 4 hours) and rapid elimination (2 hours half life). A
tlag model was sufficient to describe the data but I ran
into difficulties with the error model.


If I use a proportional or combined error model, the model is unstable
and I get unrealistic estimates (very large Vd, Cl and residual
variability) . It is only stable if:

1) I use a constant error model

2) Use a combined error model and fix the a part


When I use a constant error model, the diagnostic plots clearly show the
error is not constant


Not sure what the cause for this is, I tried several things to fix it
like changing initial estimates or structural model (transit
compartment, zero order,), deleting outliers or low concentrations
near the BLQ but the problem still persists.


Any suggestions


Thanks,

Abdullah Sultan



Re: [NMusers] residual variability

2016-09-27 Thread Leonid Gibiansky

Abdullah,
Do you have random effect on the lag time? Models with random effects on 
the lag time are very difficult to work with, try to remove the lag and 
use the transit compartment(s) to describe the delay. Make sure you have 
INTERACTION option on the estimation step, use METHOD=1. Sometimes 
models with sequential 0-order and 1-st order absorption describe delay 
better (with estimated D1 of infusion to the depot compartment).

Leonid





On 9/27/2016 1:12 PM, Sultan,Abdullah S wrote:

Hi everyone,


I have a rich data set for a drug administered orally. The drug has slow
absorption (Tmax 4 hours) and rapid elimination (2 hours half life). A
tlag model was sufficient to describe the data but I ran
into difficulties with the error model.


If I use a proportional or combined error model, the model is unstable
and I get unrealistic estimates (very large Vd, Cl and residual
variability) . It is only stable if:

1) I use a constant error model

2) Use a combined error model and fix the a part


When I use a constant error model, the diagnostic plots clearly show the
error is not constant


Not sure what the cause for this is, I tried several things to fix it
like changing initial estimates or structural model (transit
compartment, zero order,), deleting outliers or low concentrations
near the BLQ but the problem still persists.


Any suggestions


Thanks,

Abdullah Sultan



Re: AW: [NMusers] IMP and parallelisation

2016-09-20 Thread Leonid Gibiansky

Hi Dirk,
What do you mean "does not solve the issue"? Were the results identical 
with different number of nodes or not?

Thanks
Leonid

On 9/20/2016 9:47 AM, Dirk Garmann wrote:

Thank you Leonid,
We have tried RANMETHOD=P, which is an interesting possibility.
Unfortunately this does not solve the issue. We will further evaluate if the 
information from all nodes is used for the population update.
Any further hints are highly welcome


Best
Dirk

-Original Message-
From: Leonid Gibiansky [mailto:lgibian...@quantpharm.com]
Sent: Monday, 19 September 2016 22:26
To: Dirk Garmann; nmusers@globomaxnm.com
Subject: Re: [NMusers] IMP and parallelisation

It is a good idea to use RANMETHOD=P at estimation step; then the
results should be identical independently of the number of nodes and
computer load.
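For example (a sketch, not the original control stream):

$EST METHOD=IMP INTERACTION ISAMPLE=1000 NITER=200 RANMETHOD=P PRINT=1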

Concerning specific behavior .. looks strange. I would try to start from
the initial values of the model with the lowest OF and see what happens.

Thanks
Leonid


On 9/19/2016 1:29 PM, Dirk Garmann wrote:

Dear nmusers.

During a popPK analysis using the M3 method and IMP we observed an
unexpected behavior and would be interested if anyone else observed the
same and can provide guidance/explanations.



The IMP method produces "strange" results in cases requiring parallelization.

We observed a general (and strong) trend that with an increasing number
of nodes the OBF increases (!), which in my opinion is unexpected if the
number of samples in MC is sufficiently large.



The initial settings have been as follows:

Parse Type 1



$EST METHOD=IMP INTERACTION LAPLACIAN EONLY=0 ISAMPLE=300 NITER=1000
CTYPE=3 NOABORT GRD=SN(1,2) NOTHETABOUNDTEST PRINT=1

$EST METHOD=IMP INTERACTION NOABORT GRD=SN(1,2) EONLY=1 ISAMPLE=3000
NITER=30 PRINT=1



With 1 node the OBF decreased to ~ -1400

Using 16 nodes the OBF stabilized at ~ 1000

In both cases the OBF does not fluctuate much after 100 interations
(monitoring of EM step) and seems to be stable (no clear hint for a
local minima).

Interestingly the estimated residual error is higher using 1 node. With
16 nodes the variability seems to be shifted to the ETAS.



This behavior might be a concern for a covariate analysis using IMP

Our first assumption was that we need to increase iSAMPLE in the EM
step, since a different seed might be used for each node. However even
increasing ISAMPLE to 3000 in the first step did not change the results
much.

My guess is that it points in the direction of how population values are
updated, but I am not an expert in the implementation of IMP in NONMEM



We would be highly interested in any guidance and explanation.



Many thanks in advance



Dirk



Freundliche Grüße / Best regards,



Dirk Garmann

Head Quantitative Pharmacology





Bayer Pharma Aktiengesellschaft

BPH-DD-CS-CP-QP, Quantitative Pharmacology

Building 0431, 322

51368 Leverkusen, Germany



Tel:+49 202 365577

Fax:

Mobile: +49 175 3109407

E-mail:   _dirk.garmann@bayer.com_

Web:  _http://www.bayer.com_



Board of Management: Dieter Weinand, Chairman | Christoph Bertram

Chairman of the Supervisory Board: Hartmut Klusik

Registered office: Berlin | Charlottenburg Local Court, HRB 283 B











Re: [NMusers] Simultaneous pk model of 2 drugs

2016-09-02 Thread Leonid Gibiansky
I think there is no difference between simultaneous administration or 
not as the dose is directed to the appropriate compartment using CMT 
variable. In any case two dosing records (one for each drug) are needed, 
and the relative times of these records are not important. So it is 
exactly like two models (one for each drug) written in one control 
stream (with different compartments used for each drug). CMT is used to 
direct the dose. DVID or CMT can be used to identify observations for 
each drug. Interactions can be studied using correlations of random 
effects, or using joint parameters, or directly specifying how 
concentration of one drug influences the parameters of the other drug.
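A hypothetical data layout (compartment numbers and values are placeholders only):

ID TIME AMT CMT DVID EVID DV
1  0    100 1   0    1    .     ; oral dose of drug A into its depot (CMT=1)
1  0    50  3   0    1    .     ; SC dose of drug B into its depot (CMT=3)
1  2    .   2   1    0    12.3  ; drug A plasma concentration (central, CMT=2)
1  2    .   4   2    0    4.5   ; drug B plasma concentration (central, CMT=4)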

Regards,
Leonid

On 9/2/2016 3:01 PM, William Denney wrote:

Hi Chris,

I think that the most straight-forward way to handle this is to have two
sets of compartments and write the $DES block manually (or writing the
algebraic equations if it's a one- or two-compartment model).

It wouldn't be straight-forward to model if the subjects receive the
drugs at the same time.  If the drugs are received at separate times
(like different periods of a study or even different studies), then the
DVID flag idea would work, too.

There are only five EVID values as far as I know, and there's not a
subtle way to use them for two doses, I don't think:

• 0= observation
• 1= dose
• 2= other (I usually use it to reset the compartment)
• 3= reset the subject
• 4= reset and dose at the same time

Thanks,

Bill

On Sep 2, 2016, at 1:22 PM, Penland, Chris wrote:


Greetings NMusers,



Does nonmem have the capacity, unbeknownst to me, for modeling two
simultaneous drugs?



I would like some suggestions about how to define the dataset and
model for a subcutaneous drug and oral drug being administered on
different schedules. I would use DVID = 1 and 2 for the two plasma pk
observations.  I figure this soft of thing had to be dealt with in the
past when trying to model dynamic DDIs (vs, just taking one of the
drugs as a covariate on the other’s parameters).



One approach is to specify the compartments for each to be dosed into
then have those feed the central, but I’m curious to see if there is
something more subtle in the nonmem syntax. Is there something about
EVID, that I don’t know that would help (beyond EVID=1 for dosing)



What if you had two oral drugs? Would you treat the two dosing
compartments as separate and possibly link them together at the
parameter/covariance level?



Thanks,

Chris





Chris Penland, PhD

ECD / Quantitative Clinical Pharmacology

Waltham, MA USA








Re: [NMusers] Additive plus proportional error model for log-transform data

2016-06-02 Thread Leonid Gibiansky
I also like this version:

  W = SDL-(SDL-SDH)*TY/(SD50+TY)
  Y=LTY+W*EPS(1)

Here SDL is the standard deviation (in logs) at low concentrations, SDH is
the standard deviation at high concentrations, TY is the individual
prediction, LTY is LOG(TY). SIGMA should be fixed at 1
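A minimal $ERROR sketch of this version (THETA numbering and the LOG(0) guard are illustrative, not from the original post):

$ERROR
 TY   = F
 IF (TY.LT.1E-10) TY = 1E-10   ; guard against LOG(0)
 LTY  = LOG(TY)
 SDL  = THETA(16)              ; SD (log scale) at low concentrations
 SDH  = THETA(17)              ; SD (log scale) at high concentrations
 SD50 = THETA(18)              ; prediction at which SD is midway between SDL and SDH
 W    = SDL - (SDL-SDH)*TY/(SD50+TY)
 IPRED = LTY
 IWRES = (DV-IPRED)/W
 Y    = LTY + W*EPS(1)

$SIGMA
 1 FIX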

Leonid


On Wed, Jun 1, 2016 at 10:27 PM, Abu Helwa, Ahmad Yousef Mohammad -
abuay010 <ahmad.abuhe...@mymail.unisa.edu.au> wrote:

> Dear NMusers,
>
>
>
> I am developing a PK model using log-transformed single-dose oral data. My
> question relates to using combined error model for log-transform data.
>
>
>
> I have read few previous discussions on NMusers regarding this, which were
> really helpful, and I came across two suggested formulas (below) that I
> tested in my PK models.  Both formulas had similar model fits in terms of
> OFV (OFV using Formula 2 was one unit less than OFV using Formula1) with
> slightly changed PK parameter estimates. My issue with these formulas is
> that the model simulates very extreme concentrations (e.g. upon generating
> VPCs) at the early time points (when drug concentrations are low) and at
> later time points when the concentrations are troughs. These simulated
> extreme concentrations are not representative of the model but a result of
> the residual error model structure.
>
>
>
> My questions:
>
> 1.   Is there a way to solve this problem for the indicated formulas?
>
> 2.   Are the two formulas below equally valid?
>
> 3.   Is there an alternative formula that I can use which does not
> have this numerical problem?
>
> 4.   Any reference paper that discusses this subject?
>
>
>
> Here are the two formulas:
>
> 1.   Formula 1: suggested by Mats Karlsson with fixing SIGMA to 1:
>
> W=SQRT(THETA(16)**2+THETA(17)**2/EXP(IPRE)**2)
>
>
>
> 2.   Formula 2: suggested by Leonid Gibiansky with fixing SIGMA to 1:
>
> W = SQRT(THETA(16)+ (THETA(17)/EXP(IPRE))**2  )
>
>
>
> The way I apply it in my model is this:
>
>
>
> FLAG=0 ;TO AVOID ANY CALCULATIONS OF LOG (0)
>
> IF (F.EQ.0) FLAG=1
>
> IPRE=LOG(F+FLAG)
>
>
>
> W=SQRT(THETA(16)**2+THETA(17)**2/EXP(IPRE)**2) ;FORMULA 1
>
>
>
> IRES=DV-IPRE
>
> IWRES=IRES/W
>
> Y=(1-FLAG)*IPRE + W*EPS(1)
>
>
>
> $SIGMA
>
> 1. FIX
>
>
>
> Best regards,
>
>
>
> Ahmad Abuhelwa
>
> School of Pharmacy and Medical Sciences
>
> University of South Australia- City East Campus
>
> Adelaide, South Australia
>
> Australia
>
>
>



-- 
--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566


Re: [NMusers] Rounding (error=134) or obj. func. is infinite (error=136)

2016-04-22 Thread Leonid Gibiansky

In the $DES block I see this expression

 DSIZE = LOG(BSL*(EXP(-DEC*TIME) + EXP(GRO*TIME) - 1)+1)

 I would use T rather than TIME

You may also try tol=12 nsig=4 sigl=12
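A sketch of these suggestions as control-stream fragments (the rest of the model unchanged; NSIG and SIGL go on the existing $EST record). Inside $DES, T is the continuously varying integration time, whereas TIME is taken from the data record and does not advance during integration:

$SUBROUTINE ADVAN13 TOL=12
$DES
 DSIZE = LOG(BSL*(EXP(-DEC*T) + EXP(GRO*T) - 1) + 1)   ; T instead of TIME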

Leonid


> On Apr 22, 2016, at 1:37 PM, Mai, Tu [MED] wrote:
> 
> Dear NMUsers,
>  
> Our model keeps having the rounding error (error=134). I tried to fix it 
> using the following methods or their combinations, however, sometimes I then 
> got the message error=136 instead of error=134. But minimization still 
> unsuccessful. Can you please help pointing out how I can solve this problem? 
> I really appreciate it!
>  
> Thank you
>  
>  
>  
> 1) Use the estimates for THETA that are ~10% deviated from the estimates 
> provided after scm run.
> 2) Set TOL=9, NSIG=2, SIGL=6
>  
> I attached here the code:
>  
> $PROBLEMIRM code
>  
> $INPUT  ID TIME EX DV DVID ECOG AGE HT WT RACE SITE DIAG=DROP HEM
>  
> ALK ALB LAC TES STUDY DRUG PRIORDOC
>  
> $DATA   survival.csv
>  
> $SUBROUTINE ADVAN13 TOL=9
>  
> $MODEL  NCOMP=1 COMP(HAZ)
>  
> $PK
> ;;; DECECOG-DEFINITION START
> IF(ECOG.EQ.1) DECECOG = 1  ; Most common
> IF(ECOG.EQ.0) DECECOG = ( 1 + THETA(27))
> IF(ECOG.EQ.2) DECECOG = ( 1 + THETA(28))
> ;;; DECECOG-DEFINITION END
> ;;; GROPRIORDOC-DEFINITION START
> IF(PRIORDOC.EQ.0) GROPRIORDOC = 1  ; Most common
> IF(PRIORDOC.EQ.1) GROPRIORDOC = ( 1 + THETA(26))
> ;;; GROPRIORDOC-DEFINITION END
> ;;; GROECOG-DEFINITION START
> IF(ECOG.EQ.1) GROECOG = 1  ; Most common
> IF(ECOG.EQ.0) GROECOG = ( 1 + THETA(24))
> IF(ECOG.EQ.2) GROECOG = ( 1 + THETA(25))
> ;;; GROECOG-DEFINITION END
> ;;; GROALK-DEFINITION START
>  
> IF(ALK.EQ.-99) THEN
>  
>GROALK = 1
>  
> ELSE
>  
>GROALK = ( 1 + THETA(23)*(ALK - 94.00))
>  
> ENDIF
>  
> ;;; GROALK-DEFINITION END
> ;;; GRO-RELATION START
>  
> GROCOV=GROALK*GROECOG*GROPRIORDOC
>  
> ;;; GRO-RELATION END
>  
> ;;; DECDRUG-DEFINITION START
>  
> IF(DRUG.EQ.3) DECDRUG = 1  ; Most common
>  
> IF(DRUG.EQ.2) DECDRUG = ( 1 + THETA(21))
>  
> IF(DRUG.EQ.1) DECDRUG = ( 1 + THETA(22))
>  
> ;;; DECDRUG-DEFINITION END
>  
> ;;; DECAGE-DEFINITION START
>  
> DECAGE = ( 1 + THETA(20)*(AGE - 64.00))
>  
> ;;; DECAGE-DEFINITION END
> ;;; DEC-RELATION START
>  
> DECCOV=DECAGE*DECDRUG*DECECOG
>  
> ;;; DEC-RELATION END
> ;;; BSLHEM-DEFINITION START
>  
> IF(HEM.LE.124.00) BSLHEM = ( 1 + THETA(18)*(HEM - 124.00))
>  
> IF(HEM.GT.124.00) BSLHEM = ( 1 + THETA(19)*(HEM - 124.00))
>  
> IF(HEM.EQ.-99)   BSLHEM = 1
>  
> ;;; BSLHEM-DEFINITION END
> ;;; BSLECOG-DEFINITION START
>  
> IF(ECOG.EQ.1) BSLECOG = 1  ; Most common
>  
> IF(ECOG.EQ.0) BSLECOG = ( 1 + THETA(16))
>  
> IF(ECOG.EQ.2) BSLECOG = ( 1 + THETA(17))
>  
> ;;; BSLECOG-DEFINITION END
> ;;; BSLALK-DEFINITION START
>  
> IF(ALK.EQ.-99) THEN
>  
>BSLALK = 1
>  
> ELSE
>  
>BSLALK = ( 1 + THETA(15)*(ALK - 94.00))
>  
> ENDIF
>  
> ;;; BSLALK-DEFINITION END
> ;;; BSL-RELATION START
>  
> BSLCOV=BSLALK*BSLECOG*BSLHEM
>  
> ;;; BSL-RELATION END
> ;;; BSHZPRIORDOC-DEFINITION START
>  
> IF(PRIORDOC.EQ.0) BSHZPRIORDOC = 1  ; Most common
>  
> IF(PRIORDOC.EQ.1) BSHZPRIORDOC = ( 1 + THETA(14))
>  
> ;;; BSHZPRIORDOC-DEFINITION END
> ;;; BSHZALK-DEFINITION START
>  
> IF(ALK.EQ.-99) THEN
>  
>BSHZALK = 1
>  
> ELSE
>  
>BSHZALK = ( 1 + THETA(13)*(ALK - 94.00))
>  
> ENDIF
>  
> ;;; BSHZALK-DEFINITION END
> ;;; BSHZ-RELATION START
>  
> BSHZCOV=BSHZALK*BSHZPRIORDOC
>  
> ;;; BSHZ-RELATION END
> ;;; BETAHEM-DEFINITION START
>  
> IF(HEM.EQ.-99) THEN
>  
>BETAHEM = 1
>  
> ELSE
>  
>BETAHEM = ( 1 + THETA(12)*(HEM - 124.00))
>  
> ENDIF
>  
> ;;; BETAHEM-DEFINITION END
> ;;; BETAECOG-DEFINITION START
>  
> IF(ECOG.EQ.1) BETAECOG = 1  ; Most common
>  
> IF(ECOG.EQ.0) BETAECOG = ( 1 + THETA(10))
>  
> IF(ECOG.EQ.2) BETAECOG = ( 1 + THETA(11))
>  
> ;;; BETAECOG-DEFINITION END
> ;;; BETADRUG-DEFINITION START
>  
> IF(DRUG.EQ.3) BETADRUG = 1  ; Most common
>  
> IF(DRUG.EQ.2) BETADRUG = ( 1 + THETA(8))
>  
> IF(DRUG.EQ.1) BETADRUG = ( 1 + THETA(9))
>  
> ;;; BETADRUG-DEFINITION END
> ;;; BETAAGE-DEFINITION START
>  
> BETAAGE = ( 1 + THETA(7)*(AGE - 64.00))
>  
> ;;; BETAAGE-DEFINITION END
> ;;; BETA-RELATION START
>  
> BETACOV=BETAAGE*BETADRUG*BETAECOG*BETAHEM
>  
> ;;; BETA-RELATION END
>  
> IF (NEWIND.LE.1) THEN
>  
>SRVZ=1 ; Survivor function at TIME=0 
>  
> ENDIF
>  
> ;---BASELINE PSA--
> TVBSL = THETA(1) 
>  
> TVBSL = BSLCOV*TVBSL
>  
> BSL = TVBSL*EXP(ETA(1))
>  
> ;---PSA PARAMETERS--
> TVGRO = THETA(2)
>  
> TVGRO = GROCOV*TVGRO
>  
> TVDEC = THETA(3)
>  
> TVDEC = DECCOV*TVDEC
>  
> GRO   = TVGRO*EXP(ETA(2))
>  
> DEC   = TVDEC*EXP(ETA(3))
>  
> ;---SURVIVAl MODEL PARAMETERS--
>  
> TVBSHZ = THETA(5) ; Baseline Hazard
>  
> TVBSHZ = BSHZCOV*TVBSHZ
>  
> BSHZ = TVBSHZ
>  
> TVBETA = THETA(6) ; Parameter relating dropout hazard to PSA estimate  
> TVBETA = BETACOV*TVBETA
> BETA = TVBETA
>  
> $DES   
>  
>   

[NMusers] Workshop on modeling of biologics: first time in the Bay Area (SFO)

2016-02-03 Thread Leonid Gibiansky

Dear All

(Sorry for cross-posting if you got it more than once)

QuantPharm LLC (Leonid Gibiansky and Ekaterina Gibiansky) will present 
the workshop on Modeling of Biologics with Target-Mediated Disposition 
as a part of a 4-day series organized by ICON Development Solutions 
(registration for each of the 3 workshops is separate; you can register 
for any one or for all of them or in any combination). The TMDD workshop 
will be hosted in California (near SFO), March 4th.

The link below contains registration details:

http://www.iconplc.com/news-events/events/workshops/tmdd-workshop/index.xml

Please contact Lisa Wilhelm for assistance with registration 
(lisa.wilh...@iconplc.com)


Please contact me (lgibian...@quantpharm.com) if you have any questions 
concerning the program.


The detailed program can be found here
   http://www.quantpharm.com/Workshop.html

Cost: Industry - $800
  Academia - $700
  Students -  $600

Nonmem (two-days beginners and one-day advanced) ICON workshops will be 
offered the same week/same place, March 1-3, 2016.


Thanks!
Leonid

--
--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566




Re: [NMusers] unbalanced data set

2016-01-06 Thread Leonid Gibiansky
I recently found out that FDA approved digestible sensor that can be 
given with the tablet (any tablet) and inform the patient (and the 
company if needed) whether and when the tablet was taken


http://www.proteus.com/press-releases/first-medical-device-cleared-by-fda-with-adherence-claim/

If used in the trials, it would end the guessing game about dose times, 
compliance, etc., providing the exact times of doses for the analysis.


I am wondering whether anybody has an experience with this type of data? 
It would be interesting to see the difference between diary-based 
analysis and sensor-based analysis.


Thanks
Leonid


--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566



On 1/6/2016 9:55 AM, Michael Fossler wrote:

At the risk of being tiresome about this topic, absent specific
differences between Phase 1 and Phase 2/3 data , e.g., renal function
due to age or disease states, etc., I’d argue that most of the
differences seen between Phase 1 and Phase 2/3 data are due to
adherence. In a sense, then, much of the differences in PK between these
two groups is artificial, and due to the fact that patients do not
reliably take their medication as prescribed, as opposed to Phase 1
volunteers, where adherence is near 100%. Bernard Vrijens has published
a lot on this topic as it relates to PPK analyses. We, as a discipline,
need to start pushing hard for adherence measures in clinical trials.

As an n=1 case study , a few years ago, I was involved with an analysis
of a large Phase 2 study which consisted of an in-house phase, followed
by discharge to home and an out-patient phase. The patients were
significantly older and sicker than Phase 1 volunteers, so one might
expect some PK differences. When we analyzed the data from the in-house
portion of the study, we got results nearly identical to Phase 1.
However, when we added in the out-patient phase, IIV on many of the
parameters increased dramatically, and the residual error became
extremely large. Clearly, patients were not taking their medication as
prescribed ( and as they wrote in their patient diaries). We ended up
not using the out-patient portion of the data, which represents a huge
waste of resources.

This irritates people when I say this, but we as a discipline are so
enamored of finding that magical covariate(s) which will explain
variability, but we neglect the most important one of all: Did they take
the medicine when they say they did? No biological covariate can have as
big of an effect as adherence. Accounting for adherence routinely
results in up to a 50% decrease in residual variability – few standard
covariates have this effect.

*Fossler M.J.*Commentary: Patient Adherence: Clinical Pharmacology’s
Embarrassing Relative. /Journal of Clinical Pharmacology/ (2015) 55(4):
365-367.

Mike

Michael J. Fossler, Pharm. D., Ph. D., F.C.P.

VP, Quantitative Sciences

Trevena, Inc

mfoss...@trevenainc.com <mailto:mfoss...@trevenainc.com>

Office: 610-354-8840, ext. 249

Cell: 610-329-6636

*From:*owner-nmus...@globomaxnm.com
[mailto:owner-nmus...@globomaxnm.com] *On Behalf Of *Denney, William S.
*Sent:* Wednesday, January 06, 2016 8:33 AM
*To:* <jgre...@btconnect.com>
*Cc:* Zheng Liu; nmusers@globomaxnm.com
*Subject:* Re: [NMusers] unbalanced data set

Hi Zheng,

I'll take an intermediate view between Joachim and Nick.

The rich data from Phase 1 provides the ability to define the structural
model and a few of the important covariates.  The control of Phase 1
gives precision that cannot be achieved in Phase 2 or 3 studies.  But,
there are usually important differences between Phase 1 and later phase
populations that makes the later phase separately important.

With later phase trials, the range of covariates is expanded [1].  On
top of the expanded covariate range, sometimes late-phase patient
populations are categorically different than early phase [2].

In practice, this means that I fit a single model to all data.  The
model will allow for the dense data from Phase 1 with more
inter-individual variability (IIV) terms (fix the IIV to 0 for sparse
data) and the expanded covariate range with a richer set of fixed
effects as the model is expanded for later phase.  Finally, due to
typical differences in data quality, I will often include a different
residual error structure for sparse data.  This approach allows the
complexity of the Phase 1 structural model to carry into the richness of
the late phase covariate model.
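A minimal NM-TRAN sketch of that idea (DENSE is a hypothetical data-set flag, 1 for dense Phase 1 records and 0 for sparse records; THETA/ETA numbering is illustrative):

$PK
 CL = THETA(1)*EXP(ETA(1))
 V  = THETA(2)*EXP(ETA(2))
 KA = THETA(3)*EXP(ETA(3)*DENSE)   ; IIV on KA informed by dense data only
$ERROR
 IPRED = F
 PROP  = THETA(4)                  ; proportional error SD for dense data
 IF (DENSE.EQ.0) PROP = THETA(5)   ; separate magnitude for sparse data
 Y = IPRED*(1 + PROP*EPS(1))
$SIGMA
 1 FIX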

[1] A specific example is that typically renal function is allowed to be
lower especially when Phase 1 is in healthy subjects.

[2] My true belief is that there may be unobserved covariates causing
what appears to be a categorical difference.  The functional impact of
that belief is semantic only.  In practice, the model would include a
categorical parameter.

Thanks,

Bill


On Jan 6, 2016, at 4:

Re: [NMusers] Problem of STS in NONMEM

2015-12-22 Thread Leonid Gibiansky
You may try to remove eta3. As you set it now, you allow very large residual 
error, and nonmem returns initial estimates as final
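A minimal sketch of the change (SIGMA then estimated rather than fixed; the initial estimate is illustrative):

$ERROR
 IPRED = F
 W     = F
 IRES  = DV-IPRED
 IWRES = IRES/W
 Y     = IPRED*(1+EPS(1))   ; proportional error without EXP(ETA(3))

$SIGMA
 0.04   ; PRO, now estimated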
Leonid



> On Dec 22, 2015, at 5:32 AM, Anyue Yin  wrote:
> 
> Dear all,
> 
> Thanks for replying me.
> 
> I am trying to carry out the first stage of Standard Two Stage estimation 
> method in NONMEM (i.e. estimate individual parameter by ID). For the sake of 
> argument, let's assume that I have a data file which includes 10 subjects (ID 
> from 1 to 10). Now I wish to estimate individual parameter by fitting each 
> individual data, so I will be able to get 10 individual parameter estimates. 
> But I found that these 10 individual parameter estimates were all around the 
> initial value what I assigned to the THETA. For example, if I assign 20 to 
> the initial value of THETA, then these 10 parameter estimates are all around 
> 20. If I assign 30, then all around 30... It looks like NONMEM uses initial 
> value of THETA as prior to estimate individual parameters. What I expect is 
> to estimate individual parameters by ID, as if these 10 subjects are 
> separated into 10 data file and estimate 10 times to get each result. So the 
> key point of my question is individual parameter estimation by the first 
> stage of STS in NONMEM. Thank you very much.
> 
> Sincerely,
> Anyue
> 
>> On Tue, Dec 22, 2015 at 5:33 PM, Mats Karlsson  
>> wrote:
>> Dear Anyue,
>> 
>>  
>> 
>> I don’t know what you mean by “the individual parameter estimate changed  if 
>> I change the initial value of THETA”
>> 
>>  
>> 
>> If you mean that individual ETA estimates change, that is expected when you 
>> change THETA. For CL and V to be the same, ETA need to change when THETA 
>> change. It may be that you are at local minima for EBEs. You may want to add 
>> MCETA=1000 on the $EST line in order to test more initial estimates. I would 
>> use MAXEVAL=0, not MAXEVAL=. Possibly I would use MAXEVAL= after 
>> having fixed $OMEGA parameters to the high values you use now.
>> 
>>  
>> 
>> Best regards,
>> 
>> Mats
>> 
>>  
>> 
>>  
>> 
>> Mats Karlsson, PhD
>> 
>> Professor of Pharmacometrics
>> 
>>  
>> 
>> Dept of Pharmaceutical Biosciences
>> 
>> Faculty of Pharmacy
>> 
>> Uppsala University
>> 
>> Box 591
>> 
>> 75124 Uppsala
>> 
>>  
>> 
>> Phone: +46 18 4714105
>> 
>> Fax + 46 18 4714003
>> 
>> www.farmbio.uu.se/research/researchgroups/pharmacometrics/
>> 
>>  
>> 
>> From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
>> Behalf Of Mills, Richard
>> Sent: Tuesday, December 22, 2015 8:54 AM
>> To: Anyue Yin; nmusers@globomaxnm.com
>> Subject: RE: [NMusers] Problem of STS in NONMEM
>> 
>>  
>> 
>> Hi Anyue,
>> 
>>  
>> 
>> You need to amend MAXEVAL=0 in $EST (I suggest MAXEVAL=) in order to 
>> allow estimation.
>> 
>>  
>> 
>> Kind regards,
>> 
>> Richard
>> 
>> Richard Mills PhD
>> Senior Scientist, PKPDM
>> 
>>  
>> 
>>  
>> 
>> 
>>  
>> 
>>  
>> 
>>  
>> 
>> From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On 
>> Behalf Of Anyue Yin
>> Sent: 22 December 2015 06:36
>> To: nmusers@globomaxnm.com
>> Subject: [NMusers] Problem of STS in NONMEM
>> 
>>  
>> 
>> Dear all,
>> 
>>  
>> 
>> I got a problem when using NONMEM with STS method. My aim is to get 
>> individual parameter estimates. I would like to let NONMEM estimate 
>> individual parameter one subject by one subject. My control stream is 
>> partially listed below, which is edited according to this thread 
>> http://www.cognigencorp.com/nonmem/nmo/topic035.html
>> 
>>  
>> 
>> $SUBROUTINES ADVAN1 TRANS2
>> 
>>  
>> 
>> $PK
>> 
>> CL = THETA(1) * EXP(ETA(1))
>> 
>> V  = THETA(2) * EXP(ETA(2))
>> 
>> S1 = V
>> 
>>  
>> 
>> $ERROR
>> 
>> IPRED = F
>> 
>> W = F
>> 
>> Y = IPRED*(1+EXP(ETA(3))*EPS(1))
>> 
>>  IRES = DV-IPRED
>> 
>> IWRES = IRES/W
>> 
>>  
>> 
>> $THETA
>> 
>> (10,20,30)   ; CL
>> 
>> (10,80,100)  ; V
>> 
>>  
>> 
>> $OMEGA
>> 
>> 100 ; IIV CL
>> 
>> 100 ; IIV V
>> 
>> 100 ; IIV SIGMA
>> 
>>  
>> 
>> $SIGMA
>> 
>> 1 FIXED ; PRO
>> 
>> $EST METHOD=1 INTER MAXEVAL=0 NOABORT 

[NMusers] West Coast Workshop: Modeling Biologics with Target-Mediated Disposition

2015-12-12 Thread Leonid Gibiansky


QuantPharm is collaborating with ICON to present a workshop on

“Modeling Biologics with Target-Mediated Disposition”

It will be presented on the West Coast, at San Francisco Airport 
DoubleTree by Hilton, 835 Airport Boulevard, Burlingame, CA on


4th March 2016

The workshop is intended for PK scientists with or without prior 
population PK modeling experience. It will provide an overview of the PK 
of biologics, introduce target-mediated drug disposition (TMDD) 
concepts, and discuss various applications of TMDD to drug development 
of biologics. Applications of modeling will include population PK and 
PK-PD, immunogenicity, antibody-drug conjugates, pre-clinical to 
clinical translation, covariate analysis, drug-drug interactions, and 
other topics. Use of different Nonmem estimation methods and 
parallelization for TMDD models will be reviewed. NONMEM codes, inputs 
and outputs will be provided.


More details can be found at http://www.quantpharm.com/Workshop.html

You can register for this workshop and/or the ICON NONMEM / PDxPoP 
beginners/intermediate/advanced methods workshops (March 1-3, 2016, same 
location) by sending email to Lisa R. Wilhelm-Lear


lisa.wilh...@iconplc.com

In a week or so the registration link will also be ready on the ICON 
Workshops web site

(http://www.iconplc.com/news-events/events/workshops/)

Thanks!
Leonid

--
Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web:www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel:(301) 767 5566



  1   2   3   >