[NMusers] PopPK modelling of DDIs
Dear colleagues,

We are working on a model to describe a pharmacokinetic drug-drug interaction between two drugs. I recall that in the past there has been a discussion on this forum about simultaneously modelling two different drugs for the sake of sharing covariates or covariate models between them. Beyond those posts and a paper by van der Laan (AAC 2018) on a lopinavir-ritonavir DDI, I cannot find published (coding) examples of how to approach DDIs in a joint PK model. Hopefully some of you are willing to point us in the right direction?

Best regards,
Pieter Colin, Pharm.D., Ph.D.
Department of Anesthesiology (EB32)
University Medical Center Groningen, The Netherlands
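In case a concrete starting point helps: below is a minimal, hypothetical sketch (not taken from the cited paper) of a joint model in which both drugs are fitted simultaneously and the perpetrator concentration inhibits the victim's clearance. All names, compartment assignments, parameter indices and the Imax/IC50 inhibition form are illustrative assumptions only.

```
; Hypothetical joint DDI sketch: CMT 1 = victim, CMT 2 = perpetrator.
; The perpetrator concentration lowers the victim clearance via a
; simple Imax model. All values and indices are placeholders.
$SUBROUTINE ADVAN13 TOL=9
$MODEL COMP(VICTIM,DEFDOSE,DEFOBS) COMP(PERP)
$PK
CLV  = THETA(1)*EXP(ETA(1))  ; victim clearance without perpetrator
V1   = THETA(2)*EXP(ETA(2))
CLP  = THETA(3)*EXP(ETA(3))  ; perpetrator clearance
V2   = THETA(4)
IMAX = THETA(5)              ; maximal fractional inhibition of CLV
IC50 = THETA(6)              ; perpetrator conc. at half-max inhibition
S1   = V1
S2   = V2
$DES
CP2     = A(2)/V2                  ; perpetrator concentration
INH     = 1 - IMAX*CP2/(IC50+CP2)  ; fraction of CLV remaining
DADT(1) = -CLV*INH*A(1)/V1
DADT(2) = -CLP*A(2)/V2
$ERROR
IPRED = F                    ; F follows the CMT data item, so this
Y = IPRED*(1+EPS(1))         ; record type serves both drugs' observations
```

With both drugs in one $PK block, a covariate relationship can be shared simply by letting both clearances refer to the same expression.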
[NMusers] Pharmacometrics Network Benelux Fall Meeting
Save the date: 22 November - Pharmacometrics Network Benelux Fall Meeting

We are happy to pre-announce the next Pharmacometrics Network Benelux meeting, to be held in Breda, the Netherlands on Thursday 22 November 2018. The topic of the meeting will be "Quantitative Systems Pharmacology". A keynote lecture/tutorial will be delivered by Piet Hein van der Graaf. This will be an afternoon symposium, starting with lunch at noon. Over the next couple of weeks we will further define the agenda of this meeting; the formal announcement will follow. We would like to ask you to bring this meeting to the attention of colleagues who might be interested.

PNB Steering committee: Stefaan Rossenu, Thomas Dorlo, Sven van Dijkman, Pieter Colin, Anthe Zandvliet, Wilbert De Witte
Local organizers: Suruchi Bakshi, Eline van Maanen
[NMusers] Problem with fpi in NONMEM 7.3
Dear NM users,

I've been encountering a problem when using NONMEM 7.3 and the file passing interface (fpi) for parallel computing. The run I'm trying to get going consists of 5 problems within a single NONMEM run. Here is a short extract of the control stream:

$PROBLEM Fit cohort 1
$INPUT ...
$DATA data.csv IGNORE=@ IGNORE(COHORT.EQ.1) REWIND
...
$ESTIMATION ... MSFO=run1.msf

$PROBLEM Post hoc predictions
$INPUT ...
$DATA data.csv IGNORE=@ IGNORE(COHORT.NE.1) REWIND
$MSFI run1.msf
...
$ESTIMATION ... MAX=0

$PROBLEM Fit cohort 2
$INPUT ...
$DATA data.csv IGNORE=@ IGNORE(COHORT.EQ.2) REWIND
...
$ESTIMATION ... MSFO=run1.msf

$PROBLEM Post hoc predictions
$INPUT ...
$DATA data.csv IGNORE=@ IGNORE(COHORT.NE.2) REWIND
$MSFI run1.msf
...
$ESTIMATION ... MAX=0

$PROBLEM Fit all data
$INPUT ...
$DATA data.csv IGNORE=@ REWIND
...
$ESTIMATION ...

The script works perfectly fine without the parallel computing option. When using the fpi I get the following error message:

At line 169 of file
Fortran runtime error: End of File

The run consistently fails when initiating problem 5 (i.e. at the initial OFV evaluation). I've searched the NONMEM guides and tried looking for information online on gfortran, but I was not able to identify the problem. Hopefully someone on this forum can shed some light on this behaviour.

Warm regards,
Pieter Colin
Department of Anesthesiology
University Medical Center Groningen
RE: [NMusers] Parameter uncertainty
Hi Fanny,

As I understand it, you're looking for ways to produce predictions according to your model while taking into account parameter uncertainty. We recently published on the importance of parameter uncertainty when considering probability of target attainment (PTA) for antibiotic dosing regimens (Colin et al. J Antimicrob Chemother (2016) 71(9): 2502-2508). The online supplement to this paper contains an R script which you can use to simulate (and calculate PTA, if relevant) while taking parameter uncertainty into account. For this, the script uses the variance-covariance matrix that is produced by the $COV step in NONMEM. Of course, other techniques which generate a variance-covariance matrix could be used as input for the script as well.

Kind regards,
Pieter Colin

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On Behalf Of Fanny Gallais
Sent: Wednesday 15 February 2017 11:55
To: nmusers@globomaxnm.com
Subject: [NMusers] Parameter uncertainty

Dear NM users,

I would like to perform a simulation (in R) incorporating parameter uncertainty. For now I'm working on a simple PK model whose parameters were estimated with NONMEM. I'm trying to figure out the best way to assess parameter uncertainty. I've read about using the standard errors reported by NONMEM and assuming a normal distribution; the main problem is that this can lead to negative parameter values. Another approach would be a more computational, non-parametric method like the bootstrap. Do you know of other methods to assess parameter uncertainty?

Best regards,
F. Gallais
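To illustrate the general idea outside R: the sketch below is not the published supplement script, it merely shows drawing parameter vectors from a multivariate normal built on a $COV variance-covariance matrix (all numbers are made up), including the log-scale transformation that avoids the negative-value problem Fanny mentions.

```python
# Sketch of sampling parameter uncertainty from a NONMEM $COV output.
# Estimates and the covariance matrix are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)

# Final estimates (CL in L/h, V in L) and their var-cov matrix from $COV
theta = np.array([2.0, 10.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.25]])

# Sampling on the log scale keeps every draw positive, which addresses
# the negative-value problem of a plain normal on the original scale.
log_theta = np.log(theta)
log_cov = cov / np.outer(theta, theta)   # delta-method approximation
draws = np.exp(rng.multivariate_normal(log_theta, log_cov, size=1000))

assert (draws > 0).all()                 # no negative parameters
```

Each row of `draws` is one plausible parameter vector; simulating the dosing regimen once per row and summarising across rows gives predictions (or PTA) that reflect parameter uncertainty.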
[NMusers] RE: No TABLE output with 1500 measurements per individual
Dear Jean-Marie Martinez,

We've encountered a similar problem in the past. As I understand it, calculation of the CWRES might be the problem here. Adding WRESCHOL as an option in your first table, after the FILE= statement, should resolve this.

Kind regards,
Pieter Colin
University Medical Center Groningen
Department of Anesthesiology (EB32)

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On Behalf Of jean-marie.marti...@sanofi.com
Sent: Wednesday 23 November 2016 9:16
To: nmusers@globomaxnm.com
Subject: [NMusers] RE: No TABLE output with 1500 measurements per individual

Dear NM-Users,

We are trying to model with NONMEM 7.3 (FOCE-I) a dataset containing one measurement per minute for 68 individuals, for a total of 1440 measurements per individual. The dataset is therefore composed of ~100,000 rows. After having defined $SIZES adequately (LIM6 statement), the (successful) minimization takes a few minutes, without any warning or error messages. The problem is that the TABLE output step is nearly impossible to complete: even with the FIRSTONLY statement, no table can be obtained in a reasonable time, and limiting the table contents to ID and IPRED only does not solve the issue.

We performed some additional tests with MSF files as input. When applying MSF to a dataset composed of only ONE individual at a time, the table takes 24 hours to be generated. This would therefore take 68 days (if time increases proportionally...?) to obtain PRED and IPRED for all individuals. When implementing the model in an alternative commercial software package, the table is output in less than 20 minutes. A solution would be to use MSF on a reduced dataset, i.e. with a (randomly) decreased number of measurements per individual, but we want to avoid this. What other solutions do we have? Can anyone provide some input?

Thanks!
Jean-Marie Martinez
Modeling & Simulation Group
Sanofi Montpellier
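For reference, a minimal sketch of the suggested change (the table items and file name are placeholders, not taken from your control stream):

```
; Hypothetical $TABLE record: WRESCHOL requests the Cholesky-based
; weighted-residual computation, avoiding the costly default CWRES step.
$TABLE ID TIME DV IPRED CWRES NOPRINT ONEHEADER FILE=sdtab1 WRESCHOL
```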
RE: [NMusers] Cross-validation script in NM
Dear Kajsa and Dennis,

Thank you for your thoughts on this. I know of (and have used several times in the past) the mentioned functionalities in PsN and PLT-tools. However, due to the specific nature of my problem, I'm afraid these will not work for me. Allow me to further clarify. (For clarity, I've included a piece of my control stream at the bottom of this message.)

As Dennis pointed out, I'm fitting a training group and using the final parameter estimates in a subsequent run to predict the plasma concentrations of the validation group. I failed to clarify this in my previous message, but I'm predicting the plasma concentrations for the validation group according to a TDM setting. This means that for the validation group MAXEVAL=0 and only the first trough sample per ID is included in the dataset as an observation event (EVID=0 and MDV=0). It goes without saying that the objective is to accurately predict the other plasma concentrations (EVID=2 and MDV=1) for the IDs in the validation group.

Now to get to the problem. I tried this approach with two separate control streams and it works, i.e. plasma concentrations are predicted for the validation group based on the post hoc corrected final parameter estimates of the training group. However, when I combine these in a single control stream (as shown below), the time-varying covariates are not taken into account for the validation group. More specifically, the following statement (under $PK) is not evaluated for the IDs in the validation group (it is used to switch on/off an additional clearance due to hemodialysis):

CL_DIA = 0
IF(DIALYSIS.EQ.1) CL_DIA = THETA(6)
IND=0
IF(IND_DIA.EQ.1) IND=1

This causes the hemodialysis moments to be ignored by NM in the validation group when using the control stream as shown below. Since it worked for me using separate control streams, it seems that the problem is associated with the use of MSFO=... and $MSFI in the training and validation set, respectively.
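For illustration, the TDM-style coding described above could look like this in the validation-group dataset (all values are hypothetical; only the first trough is an observation record, and later samples are prediction-only records):

```
ID TIME DV    AMT  RATE EVID MDV XVAL
9  0    .     1000 500  1    1   1
9  8    12.3  .    .    0    0   1
9  16   15.1  .    .    2    1   1
9  24   14.2  .    .    2    1   1
```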
Do any of you have a specific solution for this problem, or could you shed some light on behaviour of the $MSFI option in NM which might be causing this?

Kind regards,
Pieter Colin

$PROBLEM No covariates
;; 1. Based on:
;; COMMENT:
;-
;--- FIT XVAL
;-
$INPUT ID TIME DV CMT AMT RATE EVID MDV UVOL EXTRA IND_DIA OCC DIALYSIS
       ANALYSIS BV MISSING AGE WGT HGT BMI BSA SOFA M1F2 GFR XVAL
$DATA RawdataCFP_cov_ext.csv IGNORE=@
      IGNORE(MISSING.EQ.1) ;Exclude missing values
      IGNORE(CMT.GT.3)     ;Exclude CSF sample
      IGNORE(XVAL.EQ.1)
      REWIND
$SUBROUTINE ADVAN13 TOL=12
$MODEL COMP(CENTRAL,DEFOBS,DEFDOSE) COMP(PERIPH) COMP(URINE,INITIALOFF)
$PK
;- Calculation of Time After Dose
IF (EVID.EQ.1.OR.EVID.EQ.4) THEN
  TDOS=TIME
  TAD=0.0
ENDIF
IF (EVID.NE.1.AND.EVID.NE.4) TAD=TIME-TDOS
TVCLOTHER = THETA(1)
CLOTHER   = TVCLOTHER*EXP(ETA(4))
TVCL = THETA(2)
CL   = TVCL*EXP(ETA(1))
TVV1 = THETA(3)
V1   = TVV1*EXP(ETA(2))
TVV2 = THETA(4)
V2   = TVV2*EXP(ETA(3))
TVQ  = THETA(5)
Q    = TVQ
;- Dialysis submodel
CL_DIA = 0
IF(DIALYSIS.EQ.1) CL_DIA = THETA(6)
IND=0
IF(IND_DIA.EQ.1) IND=1
S1=V1
S3=UVOL
K10=CLOTHER/V1
K12=Q/V1
K21=Q/V2
K13=CL/V1
K11=CL_DIA/V1
$DES
DADT(1)=-K12*A(1)+K21*A(2)-K10*A(1)-K13*A(1)-K11*A(1)*IND
DADT(2)=K12*A(1)-K21*A(2)
DADT(3)=K13*A(1)
$ERROR
IPRED = 1E-3
IF(F.GT.0) IPRED=F
Y = IPRED*(1+EPS(1))
IRES = DV-IPRED
IWRES = IRES/(IPRED*SQRT(SIGMA(1,1)))
IF(CMT.EQ.3) THEN
  Y = IPRED*(1+EPS(2))
  IRES = DV-IPRED
  IWRES = IRES/SQRT(IPRED*IPRED*SIGMA(2,2))
ENDIF
$THETA
(1E-9,1.097450) ; CLOTHER; L/h
(1E-9,2.124530) ; CL; L/h
(1E-9,8.640870) ; V1; L
(1E-9,18.58180) ; V2; L
(1E-9,34.13580) ; Q; L/h
(1E-9,4.046690) ; CL_DIA; L/h
$OMEGA
1.265890 ; IIV_CL
0.387112 ; IIV_V1
0.186287 ; IIV_V2
0.371892 ; IIV_CLOTHER
$SIGMA
0.090199 ; Proportional plasma
0.106711 ; Proportional urine
$ESTIMATION SIG=2 MAX= METHOD=1 SORT INTERACTION POSTHOC PRINT=1 MSFO=run61.msf
;-
;--- POST HOC
;-
$PROBLEM PREDICT XVAL1
$INPUT ID TIME DV CP CMT AMT RATE EVID MDV UVOL EXTRA IND_DIA OCC DIALYSIS
       ANALYSIS BV MISSING AGE WGT HGT BMI BSA SOFA M1F2 GFR TDM XVAL
$DATA RawdataCFP_xval_ext.csv IGNORE=@
      IGNORE(MISSING.EQ.1) ;Exclude missing values
      IGNORE(CMT.GT.3)     ;Exclude CSF sample
      IGNORE(XVAL.NE.1)
      REWIND
$MSFI run61.msf
$ESTIMATION SIG=2
[NMusers] Cross-validation script in NM
Dear nm-users,

I'm trying to construct a NONMEM control file to be used in a cross-validation study. In a first problem statement I run an estimation step on a subset of my data. In a subsequent problem statement (within the same control file) I try to predict the PK of the subset that was not included in part 1. I managed to do this by use of the MSFO option (in the first part of the control file) and $MSFI in the second part. However, it appears that time-varying covariates (defined under $PK in the first problem statement) are not evaluated when performing the predictions for the second problem statement. Does anyone know of a workaround for this, or is there another way of combining a fit and a predict action (both on different data) within the same control file?

Kind regards,
Pieter
--
Pieter Colin, Pharm.D., Ph.D.
Post-Doctoral Researcher (Faculty of Pharmaceutical Sciences - Ghent University)
Associate Professor (Department of Anesthesiology - UMCG)
[NMusers] intratumoural PK modeling
Dear NMusers,

I'm working on a PK-PD model to describe paclitaxel intra-tumoural PK and PD after intraperitoneal administration. To this end, we collected, among other things, tumour tissue at different time points post dosing. After collection we divided the tumour tissue specimens into portions according to depth from the tumour surface. As part of the PK model I'm trying to model the concentration decay over time as well as the concentration decay over depth. However, I'm currently facing some problems.

At the moment, my model deals with the concentration-time profile under $DES and then corrects for depth under $ERROR:

$DES
...
DADT(5)=A(1)*VM/(KM+A(1))-K50*A(5)

$ERROR
...
INT=-7
IF(F.GT.0) INT = LOG(F)
IF(CMT.EQ.5) THEN
  IPRED=INT+SLOPE*DEPTH
  Y = IPRED+EPS(2)
ENDIF
...

This works out fine. However, since I'm correcting for depth in a post hoc fashion, I'm wondering which information NM is using during the integration step. Is it using the average of the DVs sharing the same TIME value (without taking into account the DEPTH variable), or is it only using the first DV value from the ones sharing a TIME value? Secondly, I was wondering whether it is possible to apply the depth correction within the $DES statement, or would this require the use of partial differential equations rather than ODEs?

My dataset looks like this:

ID TIME DV       CMT AMT DOSE RATE EVID MDV DEPTH SIZE
17 0    0        1   3   3    0    1    1   0     0
17 0.75 0.912189 5   0   3    0    0    0   1.25  7.5
17 0.75 1.150486 5   0   3    0    0    0   1.25  7.5
17 0.75 0.202403 5   0   3    0    0    0   3.75  7.5
17 0.75 2.187764 6   0   3    0    0    0   1.25  7.5
17 0.75 1.641103 6   0   3    0    0    0   1.25  7.5
17 0.75 1.495206 6   0   3    0    0    0   3.75  7.5

Kind regards,
Pieter Colin, Pharmacist
Ph.D. Student, (Pre-)Clinical PK/PD Modelling & Simulation
Laboratory of Medical Biochemistry and Clinical Analysis
Faculty of Pharmaceutical Sciences
Ghent University
Harelbekestraat 72
B-9000 Gent
Belgium
Tel.: +32-9-264-81-14
Fax: +32-9-264-81-97
E-mail: pieter.co...@ugent.be