You are right, Susan: the TIME values have to increase within a subject.
Using EVID=4 allows you to restart the clock and use a more convenient
time scale.
For example:
ID  TIME  EVID ...
1      0     1
1      1     0
...  ...   ...
1     24     4
...  ...   ...
instead of
ID   TIME  EVID ...
1       0     1
1       1     0
...   ...   ...
1   26546     1
...   ...   ...
This assumes you don't have any measurement after 24 h during the first
period of your study.
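For Mark's rabbit example (quoted below), a minimal sketch of such a dataset might look like this. This is an illustration only: the AMT and DV columns and the sampling times are made-up placeholders, the ";" comments are annotations rather than part of the data file, and in practice you would likely also need a CMT column (whose values depend on the ADVAN routine chosen) to route the i.v. and oral doses to the right compartments:

```
ID TIME AMT EVID DV     ; 1 May: 100 mg i.v. bolus
1     0 100    1 .
1     1   .    0 12.3
1    24   .    0  4.5
1     0 100    4 .      ; 1 Jul: 100 mg oral; EVID=4 resets the system
1     2   .    0 10.1
1     0 200    4 .      ; 1 Sep: 200 mg oral; EVID=4 again
1     2   .    0 19.8
```

Within each occasion TIME restarts at 0, which is only acceptable because the EVID=4 records reset the system; without them, TIME would have to keep increasing across the whole study (26546, ...).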
Willavize, Susan A wrote:
I have seen EVID=4 used, but don't remember the details. How does the
time variable look? I always convert Date and time to a time counting
variable that starts at 0 time (time of first dose) and always
increases within each subject. I recall that when EVID=4 is used,
the time variable does NOT need to restart with the second dose. Do I
remember correctly?
Note I do not include DATE and clock TIME in my NONMEM input, since I
have seen NONMEM do nefarious things with these.
Susan
------------------------------------------------------------------------
*From:* [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] *On Behalf Of *Sébastien Bihorel
*Sent:* Wednesday, September 24, 2008 5:52 AM
*To:* Mark Gutierrez
*Cc:* [email protected]
*Subject:* Re: [NMusers] dataset codification
Dear Mark,
You may want to read about "EVID" in the HTML help. EVID=4 is what
you mean: it is a dosing event, but it implies that the system is
reset before the dose is introduced.
Sebastien
Mark Gutierrez wrote:
Dear all,
I am modeling some PK data from one rabbit, and I have some doubts about
how to code the dataset. The rabbit received different single doses of a
drug at different times.
E.g.:
On the first of May it received a 100 mg dose i.v.
Then, two months later (enough washout period), on the first of July, it
received an oral dose of 100 mg.
Finally, after another two months, the rabbit received a last oral dose
of 200 mg.
I could code the real times in the TIME column, including the two-month
jumps, but I think there is an option in NONMEM to "restart" the dosing
clock through the dataset.
Could somebody clarify this point for me, and perhaps include a simple
example dataset?
Thank you in advance
Mark