James,
The division in the expression for the error is not a typo.
The line of thought is:
Y = F*EXP(SQRT(theta1**2 + (theta2/F)**2)*eps)
  ~ F*(1 + SQRT(theta1**2 + (theta2/F)**2)*eps)  ; linearization
  = F + F*eps1 + F*eps2/F                        ; rewriting as two epsilons
  = F*(1 + eps1) + eps2                          ; combined error model
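A quick numerical check of the algebra above (a Python sketch, not part of the original thread; the theta values are arbitrary illustrations): the standard deviation of the linearized model, F*SQRT(theta1**2 + (theta2/F)**2), is identical to that of the combined model F*(1 + eps1) + eps2, namely SQRT((F*theta1)**2 + theta2**2), i.e. a proportional plus an additive component on the original scale.

```python
import math

def linearized_sd(F, theta1, theta2):
    # SD of F*(1 + SQRT(theta1**2 + (theta2/F)**2)*eps), with SD(eps) = 1
    return F * math.sqrt(theta1**2 + (theta2 / F) ** 2)

def combined_sd(F, theta1, theta2):
    # SD of F*(1 + eps1) + eps2, with SD(eps1) = theta1, SD(eps2) = theta2
    return math.sqrt((F * theta1) ** 2 + theta2**2)

# the two expressions agree for any positive F
for F in (0.1, 1.0, 10.0):
    assert abs(linearized_sd(F, 0.2, 0.5) - combined_sd(F, 0.2, 0.5)) < 1e-12
```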
Leonid
--------------------------------------
Leonid Gibiansky, President
QuantPharm LLC: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566
James G Wright wrote:
If Y is the original observed data, then the log-transformed error model is
LOG (Y) = LOG (F) + EPS(1)
We can exponentiate both sides to get an approximately proportional
error model:-
Y = F * EXP( EPS(1) ).
The advantage of the above approach is that the mean and variance terms
are independent (if the data are log-transformed in the data file).
This avoids instabilities caused by NONMEM biasing the mean prediction
to get "better" variance terms - a known problem for ELS-type methods
since 1980. Unfortunately, we can't apply the same trick to the ETAs
because they are not directly observed.
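A simulation sketch of the independence point (Python rather than NONMEM; the sigma value is an arbitrary assumption): if Y = F*EXP(EPS(1)), the residual standard deviation of LOG(Y) is the same for every F, so the variance term does not move with the mean prediction.

```python
import math
import random

random.seed(0)
sigma = 0.2  # assumed SD of EPS(1) on the log scale

def sd(xs):
    # sample standard deviation
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

for F in (0.5, 5.0, 50.0):
    y = [F * math.exp(random.gauss(0.0, sigma)) for _ in range(20000)]
    # on the log scale the residual SD is sigma, regardless of F
    assert abs(sd([math.log(v) for v in y]) - sigma) < 0.01
```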
However, the model proposed as "additive and proportional" by Nidal is
LOG (Y) = LOG (F) + W*EPS(1)
Exponentiating to get
Y = F*EXP( W*EPS(1) )
where W= SQRT (THETA(n-1)**2 + THETA(n)**2 * LOG(F)**2). I'm assuming
the division sign in the original email was a typo, as
THETA(n)**2/LOG(F)**2 goes to infinity when F approaches 1. Rewriting
with separate estimated epsilons instead of estimated thetas for clarity
gives:-
Y = F * EXP( EPS(1) + LOG(F)*EPS(2) )
= F * EXP( EPS(1) ) * EXP( LOG(F)*EPS(2) )
which is vaguely like having an error term proportional to LOG(F)
working multiplicatively with a standard proportional error model.
After linearization, you obtain something like
Y = F + F * EPS(1) + F * LOG(F) * EPS(2)
which gives an F * LOG(F) weighting term, as opposed to the constant
weighting term required for an additive model.
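The linearization step can be checked numerically (a Python sketch with arbitrary small epsilon values): for small EPS(1) and EPS(2), F*EXP(EPS(1) + LOG(F)*EPS(2)) is close to F + F*EPS(1) + F*LOG(F)*EPS(2), with an error of second order in the epsilons.

```python
import math

def exact(F, e1, e2):
    # Y = F * EXP(EPS(1) + LOG(F)*EPS(2))
    return F * math.exp(e1 + math.log(F) * e2)

def linearized(F, e1, e2):
    # Y ~ F + F*EPS(1) + F*LOG(F)*EPS(2)
    return F + F * e1 + F * math.log(F) * e2

F = 2.0
for e in (0.01, 0.001):
    # the linearization error shrinks quadratically with the epsilons
    assert abs(exact(F, e, e) - linearized(F, e, e)) < 10 * e**2
```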
Incidentally, IF (F.EQ.0) then "TY" should equal a very large negative
number (well, minus infinity). Either you replace zeroes in a
log-proportional model with a small number or you discard them; setting
LOG(F) = 0 when F.EQ.0 is like setting F=1.
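One way to implement the "replace zeroes with a small number" option is a floor on F before taking the log (a Python sketch; the floor value here is a hypothetical placeholder and should be chosen relative to the assay's lower limit):

```python
import math

FLOOR = 1e-6  # assumed small positive floor (hypothetical value)

def safe_log_pred(F, floor=FLOOR):
    # Replace non-positive F with a small floor before LOG, instead of
    # forcing LOG(F)=0, which would silently behave as if F were 1
    return math.log(max(F, floor))

assert safe_log_pred(0.0) == math.log(FLOOR)
assert safe_log_pred(2.0) == math.log(2.0)
```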
Best regards,
James G Wright PhD
Scientist
Wright Dose Ltd
Tel: 44 (0) 772 5636914
www.wright-dose.com
-----Original Message-----
*From:* [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] *On Behalf Of
[EMAIL PROTECTED]
*Sent:* 05 October 2007 08:13
*To:* navin goyal
*Cc:* nmusers
*Subject:* Re: [NMusers] Error model
Hi Navin,
You could try a combined additive and proportional error model:
$ERROR
TY=F
IF(F.GT.0) THEN
  TY=LOG(F)
ELSE
  TY=0
ENDIF
IPRED=TY
W=SQRT(THETA(n-1)**2+THETA(n)**2/IPRED**2)  ; log-transformed data
Y=TY+W*EPS(1)

$SIGMA 1 FIX
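For readers who want to trace the arithmetic, here is the same $ERROR logic re-expressed as a plain function (a Python sketch, not NM-TRAN; the theta values in the assertion are placeholders). Note that EPS(1) has standard deviation 1 because $SIGMA is fixed to 1, and that W is undefined when IPRED = LOG(F) is zero, i.e. when F = 1.

```python
import math

def error_block(F, theta_add, theta_prop, eps):
    # TY = LOG(F) for positive F, else 0 (mirrors the IF/ELSE above)
    ty = math.log(F) if F > 0 else 0.0
    ipred = ty
    # W combines a constant term and a term scaled by 1/LOG(F);
    # the division blows up as F approaches 1 (LOG(F) -> 0)
    w = math.sqrt(theta_add**2 + theta_prop**2 / ipred**2)
    # Y on the log scale; eps is drawn with SD 1 ($SIGMA 1 FIX)
    return ty + w * eps

# with eps = 0 the prediction is just LOG(F)
assert abs(error_block(10.0, 0.1, 0.2, 0.0) - math.log(10.0)) < 1e-12
```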
Best,
Nidal
Nidal Al-Huniti, PhD
Strategic Consulting Services
Pharsight Corporation
[EMAIL PROTECTED]
On 10/4/07, navin goyal <[EMAIL PROTECTED]> wrote:
Dear Nonmem users,
I am analysing POPPK data with sparse sampling.
The dosing is an IV infusion over one hour, and we have data at
time points 0 (predose), 1 (end of infusion) and 2 (one hour
post-infusion).
The drug has a half-life of approximately 4 hours, and the dose is
given once every fourth day.
When I ran my control stream and looked at the output table, I got
some IPREDs at predose time points where the DV was 0 and the event
ID (EVID) for these records was 4 (reset), almost 20 half-lives
after the previous dose.
I was wondering why NONMEM predicted concentrations at these time
points? There were a couple of time points like this.
I started with untransformed data and fitted my model, but after
bootstrapping the errors on the etas and sigma were very high.
I then log-transformed the data, which improved the etas, but the
sigma shot up to more than 100%.
(Is it because the data are very sparse? Or do I need to use a
better error model?)
Are there any other error models that could be used with
log-transformed data, apart from
Y=LOG(F)+EPS(1)
Any suggestions would be appreciated
thanks
--
--Navin