Nidal,

Another method, which would give slightly different results, is to use the EPS terms to code a model like Eq. (11) in Beal's paper.

Applying a first-order Taylor series expansion to the log equivalent of Beal's Eq. (11), we start with:

Y = f*exp(eps(1)) + m*exp(eps(2))

log Y = log(f*exp(eps(1)) + m*exp(eps(2)))

Expanding to first order in the eps terms, this becomes

log Y = log(f+m) + f/(f+m)*eps(1) + m/(f+m)*eps(2)
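As a quick numerical sanity check (a hypothetical sketch; the values of f, m and the eps draws are chosen arbitrarily for illustration), the first-order expansion tracks the exact log closely when the eps terms are small:

```python
import math

# Arbitrary illustrative values: f and m are the two model components,
# e1 and e2 are small residual-error draws
f, m = 4.0, 1.0
e1, e2 = 0.05, -0.03

exact = math.log(f * math.exp(e1) + m * math.exp(e2))
approx = math.log(f + m) + f / (f + m) * e1 + m / (f + m) * e2

# The two agree to first order in the eps terms
print(exact, approx)
```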

Using EPS terms allows individual data points to take different values (THETAs would instead give a population estimate of the variability).

Since the expectation of log Y is not F, PRED is miscalculated by NONMEM and should be computed separately (as should RES). One possibility for the error model discussed by Beal is therefore:
$ABBREVIATED COMRES=1 COMSAV=1
...
$ERR
M=THETA(x)
IPRED=F+M                  ; individual prediction (regular scale)
IF(COMACT.EQ.1) COM(1)=IPRED
PPRED=COM(1)               ; population prediction (regular scale)
PRES=DV-LOG(PPRED)         ; population residual (DV is on the log scale)
IRES=DV-LOG(IPRED)         ; individual residual
Y=LOG(IPRED)+F/IPRED*EPS(1)+M/IPRED*EPS(2)
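A quick Monte-Carlo sketch (all numbers hypothetical, not from the original post) of why the log-scale prediction is log(F+M), which is why PRED and RES must be computed by hand:

```python
import math
import random

random.seed(1)

F, M = 4.0, 1.0       # illustrative individual predictions
SD1, SD2 = 0.1, 0.1   # illustrative SDs of EPS(1) and EPS(2)
IPRED = F + M

# Simulate log Y = log(F+M) + F/(F+M)*eps1 + M/(F+M)*eps2
n = 200_000
draws = [
    math.log(IPRED)
    + F / IPRED * random.gauss(0.0, SD1)
    + M / IPRED * random.gauss(0.0, SD2)
    for _ in range(n)
]
mean_logy = sum(draws) / n

# The expectation of log Y is log(F+M), not F, so the log-scale PRED
# should be log(F+M)
print(mean_logy, math.log(IPRED))
```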

IWRES can be calculated outside NONMEM, or within it by fixing the ratio of EPS(1) to EPS(2) as a THETA and then multiplying by an unfixed THETA. This requires the assumptions that:

   * EPS(1) and EPS(2) are both positive (this is also assumed in the
     additive+proportional case, and is often used to calculate IWRES)
   * One of the error terms is better described at a population level


For example

$ABBREVIATED COMRES=1 COMSAV=1
...
$ERR
M=THETA(x)
S=THETA(x+1)
IPRED=F+M                  ; individual prediction (regular scale)
IF(COMACT.EQ.1) COM(1)=IPRED
PPRED=COM(1)               ; population prediction (regular scale)
PRES=DV-LOG(PPRED)         ; population residual (DV is on the log scale)
IRES=DV-LOG(IPRED)         ; individual residual
Y=LOG(IPRED)+F/IPRED*EPS(2)+M/IPRED*S*EPS(2)  ; single EPS, scaled by the ratio S

IWRES = (DV - log(F+M)) / sqrt(F^2/(F+M)^2 + M^2*S^2/(F+M)^2)

S = sqrt(var(EPS(2))/var(EPS(1)))
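For concreteness, a small worked example of the IWRES expression above (all numbers hypothetical):

```python
import math

# Hypothetical values: F and M as above, S the SD ratio, DV on the log scale
F, M, S = 4.0, 1.0, 2.0
DV = 1.7

num = DV - math.log(F + M)
den = math.sqrt(F**2 / (F + M)**2 + M**2 * S**2 / (F + M)**2)
iwres = num / den
print(iwres)
```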

Nidal gives another model that assumes both error terms are explained at a population level; it should correspond to Beal's Eq. (11).

Matt Fidler (not the Mat referred to in Nidal's email; I think that is Mats K.)
[EMAIL PROTECTED] wrote:

Hi James,

We should divide by F (that is what happens when you write an email at 3 AM). Anyway, this error model was proposed by Mat and discussed in Beal's paper (/Ways to Fit a PK Model with Some Data Below the Quantification Limit/, J. Pharmacokinet. Pharmacodyn. 28:481-504, 2001).

So the code should be as in Eq. 11 (above paper):
Y=LOG(F)+SQRT(THETA(n-1)**2+THETA(n)**2/F**2)*EPS(1)
with
$SIGMA 1 FIX
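To see how the weighting term in Eq. 11 behaves (a sketch with arbitrary theta values, not from the paper): the additive-on-log-scale term dominates at high concentrations, while the THETA(n)**2/F**2 term inflates the log-scale variance as F falls toward the quantification limit.

```python
import math

# Arbitrary illustrative thetas: t1 = additive SD on the log scale,
# t2 = SD of the component scaled by 1/F
t1, t2 = 0.2, 0.5

def w(f):
    # W = SQRT(THETA(n-1)**2 + THETA(n)**2 / F**2), mirroring the Eq. 11 coding
    return math.sqrt(t1**2 + t2**2 / f**2)

print(w(10.0))  # high concentration: close to t1
print(w(0.5))   # low concentration: the 1/F term dominates
```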
Best,
Nidal


On 10/8/07, *James G Wright* <[EMAIL PROTECTED]> wrote:

    Hi Nidal,

    As you have modified the code to prevent F going below 1, it looks
    like you did intend to divide by IPRED (set equal to LOG(F)) in the
    weighting expression..?

    If so, this gives an F/LOG(F) weighting term, which is an innovative
    error model, never previously mentioned on the NMusers list to my
    knowledge.

    Best regards, James

    James G Wright PhD
    Scientist
    Wright Dose Ltd
    Tel: 44 (0) 772 5636914
    www.wright-dose.com

        -----Original Message-----
        *From:* [EMAIL PROTECTED]
        [mailto:[EMAIL PROTECTED]] *On Behalf Of* [EMAIL PROTECTED]
        *Sent:* 05 October 2007 17:32
        *To:* James G Wright
        *Cc:* [email protected]
        *Subject:* Re: [NMusers] Error model

        Hi James,

        This error model was discussed in the following NM threads.

        http://www.cognigencorp.com/nonmem/nm/99apr232002.html

        http://www.cognigencorp.com/nonmem/nm/98jun022003.html


        To prevent division by zero:

        $ERROR
        TY=F
        IF(F.GT.1) THEN
          TY=LOG(F)
        ELSE
          TY=0.025
        ENDIF
        IPRED=TY
        W=SQRT(THETA(n-1)**2+THETA(n)**2/IPRED**2)  ; log-transformed data
        Y=TY+W*EPS(1)

        $SIGMA 1 FIX
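The guard above can be mimicked in ordinary code. A sketch (the theta values are made up, and reading the 0.025 floor as roughly log(1.025) is my assumption about the intent) of why the F.GT.1 cutoff matters: for F <= 1, LOG(F) <= 0, so the 1/IPRED**2 term would blow up or change sign.

```python
import math

def ty(f):
    # Mirror of the NONMEM guard: use LOG(F) only when it is safely positive
    return math.log(f) if f > 1.0 else 0.025

def w(f, t1=0.1, t2=0.05):
    # Hypothetical thetas; W = SQRT(t1**2 + t2**2/IPRED**2)
    ipred = ty(f)
    return math.sqrt(t1**2 + t2**2 / ipred**2)

print(ty(5.0), w(5.0))
print(ty(0.8), w(0.8))  # floored, so W stays finite
```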

        Best,

        Nidal



        On 10/5/07, *James G Wright* <[EMAIL PROTECTED]> wrote:

            Hi Leonid,

            In the original email, IPRED = LOG(F), and division by
            LOG(F) leads to a division by zero when F=1, hence there
            was definitely a typo somewhere...

            Of course, this isn't the case in your revised version;
            however, you have introduced a dependence on F (as a
            reciprocal for the additive term), which reintroduces all
            of the ELS problems (where your variances can bias your
            means) that we were trying to avoid by going to the log
            scale in the first place. Because F now enters as a
            reciprocal, which leads to very big numbers when F is
            small, I expect this method would perform worse than
            working on the original scale.

            Best regards, James






            James G Wright PhD
            Scientist
            Wright Dose Ltd
            Tel: 44 (0) 772 5636914
            www.wright-dose.com


            -----Original Message-----
            From: Leonid Gibiansky [mailto:[EMAIL PROTECTED]]
            Sent: 05 October 2007 13:33
            To: James G Wright
            Cc: [email protected] <mailto:[email protected]>
            Subject: Re: [NMusers] Error model


            James,
            The division in the expression for the error is not a typo.
            The line of thought is:

            Y = F*EXP(sqrt(theta1^2+(theta2/F)^2)*eps)  ; original model
              ≈ F*(1+sqrt(theta1^2+(theta2/F)^2)*eps)   ; linearization
              = F + F*eps1 + F*eps2/F                   ; rewriting as 2 epsilons
              = F*(1+eps1) + eps2                       ; combined error model
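The linearization above can be checked algebraically: the SD implied by the exponential model, F*sqrt(theta1^2+(theta2/F)^2), equals the combined additive+proportional SD sqrt((F*theta1)^2 + theta2^2). A quick sketch with arbitrary values:

```python
import math

# Arbitrary illustrative values
F = 3.0
t1, t2 = 0.15, 0.4   # proportional and additive thetas

# SD of the linearized exponential model: F*sqrt(t1^2 + (t2/F)^2)
sd_exp = F * math.sqrt(t1**2 + (t2 / F)**2)

# SD of the combined error model F*(1+eps1) + eps2, with SDs t1 and t2
sd_combined = math.sqrt((F * t1)**2 + t2**2)

print(sd_exp, sd_combined)  # identical
```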

            Leonid


            --------------------------------------
            Leonid Gibiansky, President
            QuantPharm LLC: www.quantpharm.com
            e-mail: LGibiansky at quantpharm.com
            tel: (301) 767 5566


            James G Wright wrote:
If Y is the original observed data, then the log-transformed error
model is

LOG(Y) = LOG(F) + EPS(1)

We can exponentiate both sides to get an approximately proportional
error model:

Y = F * EXP(EPS(1))

The advantage of the above approach is that the mean and variance terms
are independent (if the data are log-transformed in the data file).
This avoids instabilities caused by NONMEM biasing the mean prediction
to get "better" variance terms - a known problem for ELS-type methods
since 1980. Unfortunately, we can't apply the same trick to the ETAs
because they are not directly observed.

However, the model proposed as "additive and proportional" by Nidal is

LOG(Y) = LOG(F) + W*EPS(1)

Exponentiating to get

Y = F*EXP(W*EPS(1))

where W = SQRT(THETA(n-1)**2 + THETA(n)**2 * LOG(F)**2). I'm assuming
the division sign in the original email was a typo, as
THETA(n)**2/LOG(F)**2 goes to infinity when F approaches 1. Rewriting
with separate estimated epsilons instead of estimated thetas for
clarity gives:

Y = F * EXP(EPS(1) + LOG(F)*EPS(2))
  = F * EXP(EPS(1)) * EXP(LOG(F)*EPS(2))

which is vaguely like having an error term proportional to LOG(F)
working multiplicatively with a standard proportional error model.
After linearization, you obtain something like

Y = F + F * EPS(1) + F * LOG(F) * EPS(2)

which gives an F * LOG(F) weighting term, as opposed to the constant
weighting term required for an additive model.
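To visualize that F*LOG(F) weighting (a sketch with made-up values): the implied regular-scale weight on the second error component is F*log(F), which vanishes at F=1 and changes sign below it, unlike a constant additive SD.

```python
import math

def second_component_weight(f):
    # Weight multiplying EPS(2) after linearization: F * LOG(F)
    return f * math.log(f)

for f in (0.5, 1.0, 2.0, 10.0):
    print(f, second_component_weight(f))
```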

Incidentally, IF (F.EQ.0) "TY" should equal a very large negative
number (well, minus infinity). Either you replace zeroes in a
log-proportional model with a small number or you discard them;
setting LOG(F) = 0 is like setting F=1 if (F.EQ.0).

Best regards,

James G Wright PhD
Scientist
Wright Dose Ltd
Tel: 44 (0) 772 5636914
www.wright-dose.com

    -----Original Message-----
    *From:* [EMAIL PROTECTED]
    [mailto:[EMAIL PROTECTED]] *On Behalf Of* [EMAIL PROTECTED]
    *Sent:* 05 October 2007 08:13
    *To:* navin goyal
    *Cc:* nmusers
    *Subject:* Re: [NMusers] Error model

    Hi Navin,

    You could try both an additive and proportional error model:

    $ERROR
    TY=F
    IF(F.GT.0) THEN
      TY=LOG(F)
    ELSE
      TY=0
    ENDIF
    IPRED=TY
    W=SQRT(THETA(n-1)**2+THETA(n)**2/IPRED**2)  ; log-transformed data
    Y=TY+W*EPS(1)

    $SIGMA 1 FIX

    Best,

    Nidal



    Nidal Al-Huniti, PhD

    Strategic Consulting Services

    Pharsight Corporation

    [EMAIL PROTECTED]





    On 10/4/07, *navin goyal* <[EMAIL PROTECTED]> wrote:

        Dear NONMEM users,

        I am analysing POPPK data with sparse sampling. The dosing is an
        IV infusion over one hour, and we have data for time points 0
        (predose), 1 (end of infusion) and 2 (one hour post infusion).
        The drug has a half-life of approx. 4 hours, and the dose is
        given once every fourth day.

        When I ran my control stream and looked at the output table, I
        got some IPREDs at predose time points where the DV was 0; the
        event ID (EVID) for these time points was 4 (reset), almost 20
        half-lives after the previous dose. I was wondering why NONMEM
        predicted concentrations at these time points? There were a
        couple of time points like this.

        I started with untransformed data and fitted my model, but
        after bootstrapping the errors on the etas and sigma were very
        high. I log-transformed the data, which improved the etas, but
        the sigma shot up to more than 100%. (Is it because the data is
        very sparse, or do I need to use a better error model?) Are
        there any other error models that could be used with the
        log-transformed data, apart from

        Y=LOG(F)+EPS(1)

        Any suggestions would be appreciated.
        Thanks,

        --
        --Navin




