RE: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Lindauer, Andreas (Barcelona)
@Franziska
It is not an scm issue. The scm routine just made me aware of the problem by
including the DROP statements. I then manually tested it outside of scm, with
the same result.

@Leonid
With the 'bad' model, incorrect DVs (with the decimals cut off) are indeed
output to the tab file.

@all
I have investigated further and think the problem is related to the length of
the lines in the datafile, as Katya suspected.
It turns out that when preparing the dataset in R I did not round derived
variables (e.g. LNDV, some covariates), and as a result some variables have up
to 15 decimal places. That may not be a problem from a computational point of
view, but it means that some lines in the datafile (when opened in a text
editor) are as long as 160 characters. If I round my variables to, say, 3
decimals, the issue is gone.
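For reference, a minimal sketch of that rounding step in R (the file name and
the blanket treatment of all numeric columns are illustrative, not my actual
script):

dat <- read.csv("data.csv")                       # dataset with unrounded derived variables
num <- vapply(dat, is.numeric, logical(1))        # identify numeric columns
dat[num] <- lapply(dat[num], round, digits = 3)   # round to 3 decimals
write.csv(dat, "data_rounded.csv", row.names = FALSE, quote = FALSE)
max(nchar(readLines("data_rounded.csv")))         # confirm the longest record line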
I suspect there is a problem in the way NONMEM generates the FDATA file when
data records exceed a specific number of characters.

I always thought the record-length limit was 300 characters, but apparently it may be less.

Again, thanks to all for thinking through this.


Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Schaedeli Stark, Franziska
Dear Andreas

I think your issue needs to be addressed through the PsN configuration; I
don't think it is a NONMEM issue.
When running scm with PsN, you need to specify in the command file the
columns that you want/need to keep, with a command like
do_not_drop = C,RACE,WGT

Please refer to the PsN user guides or the Uppsala Pharmacometric Group
for more information.

Kind Regards,
Franziska


Franziska Schaedeli Stark, PhD
Senior Pharmacometrician
Senior Principal Scientist
Pharmaceutical Sciences - Clinical Pharmacology
Roche Pharma Research & Early Development (pRED)
Roche Innovation Center Basel
F. Hoffmann-La Roche Ltd
Grenzacherstrasse 124
Bldg 1 - Floor 17 - Office N661
4070 Basel
Phone: +41 61 688 5819
Mobile: +41 79 773 12 61
franziska.schaedeli_st...@roche.com



Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Leonid Gibiansky

Thanks!
One more question: in the "bad" model, if you look at the output (tab) file,
can you detect the differences, or is the file different internally while
correct DV values are output to the tab file? I think you describe this in the
email below (that the output is "bad"), but I just wanted to be 100% sure.
Then at least we can compare the output with the true data (say, in R: read
the tab file and the csv file and compare them) and detect the problem without
looking at FDATA.
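For example, a minimal sketch in R (the file names are placeholders, and it
assumes the table holds one row per input data record with DV among the
tabled items):

tab <- read.table("sdtab1", skip = 1, header = TRUE)  # skip the "TABLE NO. 1" line
csv <- read.csv("data.csv")                           # original input dataset
stopifnot(nrow(tab) == nrow(csv))                     # same number of records?
range(csv$DV - tab$DV)                                # should be c(0, 0) if DV survived intact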

Thanks
Leonid


RE: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Lindauer, Andreas (Barcelona)
@Leonid
It is the DV column itself that is damaged.
In the 'good' model (the one with fewer than 3 variables dropped, or when
using the WIDE option), DVs show up in sdtab as they are in the input file,
while the 'bad' model cuts off the decimals: e.g. 3.17, 3.19, 3.74 in the
input data file (and in the good sdtab) become 3.0, 3.0, 3.0 with the bad
model.

@Katya
Yes, originally I did have lines longer than 80 characters, but none longer
than 300. I just did a quick test keeping all lines under 80 characters, and
the issue remains.

@Alejandro
No, I don't have spaces in my variables, neither in the names nor in the
records themselves.

@Luann
Yes, I'm using a csv file. As far as I can see, all my variables are numeric
and do not contain special characters. The datafile opens correctly in Excel
and R, but I will double-check.

Thanks to all for helping to track down the problem. I will try to make a
reproducible example with dummy data that can be shared.

Regards, Andreas.


Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Ekaterina Gibiansky
And one more question: do you have long lines (relative to the 80- and
300-character thresholds) that become shorter than these thresholds when you
drop the third variable?
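A quick way to check in R (the file name is a placeholder):

lens <- nchar(readLines("data.csv"))                     # length of each record line
max(lens)                                                # longest line in the datafile
c(over80 = sum(lens > 80), over300 = sum(lens > 300))    # records exceeding each threshold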


Regards,

Katya


Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Alejandro Pérez Pitarch
Never seen it before. Do you have spaces in any of your variables? I guess
this is not the problem, but you might be dropping variables other than the
ones you intend to drop if spaces are shifting your columns.



Re: [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Leonid Gibiansky

Never seen it.

This will not solve the problem, but just for diagnostics, have you found out
what is "damaged" in the created data files? Is the number of subjects (and
the number of data records) the same in both versions (as reported in the
output file)? Among the columns used in the base model (ID, TIME, AMT, RATE,
DV, EVID, MDV), which are different (this can be checked if they are printed
to the .tab file)? And which of the data file versions is interpreted
correctly by the NONMEM code, with or without the WIDE option?


Thanks
Leonid



RE: [EXTERNAL] [NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Lindauer, Andreas (Barcelona)
Dear Ana,
No, the variables that I dropped were not part of the model. In fact, in my
case, the issue occurs with the 3rd variable that I drop, but it doesn't
actually matter which one the third one is. My guess is that with 3 (or more)
columns fewer, in this particular case, NONMEM somehow has trouble finding the
right data format.

Regards, Andreas.

From: Ruiz, Ana (Clinical Pharmacology) 
Sent: Tuesday, 20 November 2018 15:35
To: Lindauer, Andreas (Barcelona) 
Subject: Re: [EXTERNAL] [NMusers] Potential bug in NM 7.3 and 7.4.2

Hi Andreas,

Do you have body weight or any other variable in the base model other than
DV? Did you include that variable in the config file using do_not_drop? Just
checking the easiest option.

Ana



[NMusers] Potential bug in NM 7.3 and 7.4.2

2018-11-20 Thread Lindauer, Andreas (Barcelona)
Dear all,

I would like to share with the group an issue that I encountered using NONMEM
and which appears to me to be undesired behavior. Since it is a confidential
matter, I unfortunately cannot share code or data.

I have run a simple PK model with 39 data items in $INPUT. After a successful
run I started a covariate search using PsN. To my surprise, the OFVs when
including covariates in the forward step all turned out to be higher than the
OFV of the base model, by about 180 units.
I realized that PsN's scm routine adds =DROP to some variables in $INPUT that
are not used in a given covariate test run.
I then ran the base model again with some variables dropped from $INPUT. And
indeed, the run with 3 or more variables dropped (using DROP or SKIP) resulted
in a higher OFV (~180 units), the model otherwise being the same.
In the lst files of both models I noticed a difference in the line saying
"0FORMAT FOR DATA", and in fact, looking at the temporarily created FDATA
files, it is obvious that the format of the file from the model with DROPped
items is different.
In my concrete case the issue only happens when dropping 3 or more variables.
I get the same behavior with NM 7.3 and 7.4.2, both on Windows 10 and in a
Linux environment.
The problem is fixed by using the WIDE option in $DATA.
I'm not aware of any recommendation or advice to use the WIDE option when
using DROP statements in the dataset, but I am happy to learn about it in case
I missed it.

It would be great to hear if anyone else has had a similar problem in the past.

Best regards, Andreas.

Andreas Lindauer, PhD
Agriculture, Food and Life
Life Science Services - Exprimo
Senior Consultant



[NMusers] Second DDMoRe repository challenge is now live: ACoP accepted abstracts can receive additional credit (reminder)!

2018-11-20 Thread Céline Sarr
After the great success of the first challenge the DDMoRe community group
organized (https://www.ddmore.foundation/repository-challenge-2017/), the
volunteers wish to launch their second challenge:

“Win a cash prize by publishing models in the DDMoRe Model Repository”

*How does it work?* (details at https://www.ddmore.foundation/repository-challenge/)

To be eligible for the prize, submission of the models to the DDMoRe model
repository must be performed by the applicants themselves (not on their
behalf).

Models uploaded by the applicant can be of any kind, with a preference for
models applied to real data. The models should be proposed in advance to the
model repository community group (info@ddmore.foundation, just to avoid
duplication), with one exception: models presented for the first time at
ACoP 2018 and applied to real data will be directly eligible for the prize
when uploaded.

To be considered eligible, the model should be published in agreement with
the submission guidelines of the DDMoRe model repository.

• A model uploaded in MDL/PharmML receives 3 credits.
• A model uploaded in original code receives 1 credit.
• A model not currently available online (i.e. not published in a scientific
paper) receives 1 additional credit.
• A model based on an accepted ACoP 2018 abstract and applied to real data
receives 1 additional credit.

*Who will win?*

A panel committee will review the applications that satisfy the eligibility
criteria. The 8 applicants with the highest credit will each receive a prize.

*What is there to win?*

Prizes will be divided among the 8 applicants (100 euros each, minus transfer
fees) and distributed in February 2019. Winners will be announced on the
community distribution lists.

Model submission deadline: 31st December 2018

*Who are the current sponsors?* BAST, Pharmetheus, SGS exprimo, OCCAMS,
Calvagone, and Merck.

We need sponsors for the next challenges. Interested? Please go to
https://www.ddmore.foundation/become-a-sponsor/

The DDMoRe repository group