Hi Nick,

I don't want to rehash old debates with you about the diagnostic value of the 
COV step.  However, your statement about SEs, that "they are not worth the 
electrons expended to compute them," seems hyperbolic to me.  I suspect that 
what Lewis agreed to was the general sentiment that we need to be cautious in 
how we use and interpret the SEs generated by NONMEM.  I doubt that he felt 
that they have absolutely no value.  Indeed, in many of Lewis' papers where he 
published modeling results, he reports the standard errors of the parameter 
estimates from NONMEM.

It certainly was not my intent to assert that the SEs, and the COV step in 
general, have no value.  I believe they still do, even if we may not be able to 
use them, say, to construct confidence intervals and expect those intervals to 
have the proper coverage probabilities for purposes of statistical inference.
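
For example, the usual Wald interval for a parameter \theta,

    \hat{\theta} \pm 1.96\,\mathrm{SE}(\hat{\theta}),

only attains its nominal 95% coverage asymptotically, and in small samples or 
strongly nonlinear models the actual coverage can be quite different.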

I do not think a non-parametric bootstrap with the parameter estimates produced 
after every iteration is going to tell us anything, if for no other reason than 
that the iteration search path itself depends on the starting values used.  
That is, the parameter estimates after each iteration will depend on where you 
start, whereas the maximum likelihood estimates obtained at convergence to the 
global minimum OFV should be somewhat invariant to the starting values, 
provided the starting values are reasonable.  The theory behind non-parametric 
bootstrap standard errors still requires that you obtain the maximum likelihood 
estimates for each bootstrap dataset.
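
In symbols, the bootstrap SE is the across-replicate standard deviation of the 
converged estimates,

    \widehat{\mathrm{SE}}_{\mathrm{boot}}(\hat{\theta})
        = \sqrt{\frac{1}{B-1}\sum_{b=1}^{B}
          \left(\hat{\theta}^{*}_{b}-\bar{\theta}^{*}\right)^{2}},

where \hat{\theta}^{*}_{b} is the maximum likelihood estimate obtained by 
fitting the b-th resampled dataset to convergence, not an interim iterate.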

Best,

Ken

-----Original Message-----
From: Nick Holford <n.holf...@auckland.ac.nz> 
Sent: Monday, July 29, 2024 11:52 AM
To: Jeroen Elassaiss-Schaap (PD-value B.V.) <jer...@pd-value.com>; 
kgkowalsk...@gmail.com; 'Santosh' <santosh2...@gmail.com>; 
nmusers@globomaxnm.com
Cc: 'Alan Maloney' <al_in_swe...@hotmail.com>; Pyry Välitalo 
<pyry.valit...@gmail.com>
Subject: RE: [NMusers] Obtaining RSE%

Hi Jeroen,

A small correction. Please re-read my email to nmusers on 12 Feb 2015, which I 
quote here. Sorry, I cannot show the original, but the 1999 URL is no longer 
available to me.

=================  start quote ===================
Nick Holford Thu, 12 Feb 2015 11:54:59 -0800

Hi,

The original quote about electrons comes from a remark I made in 1999 on
nmusers.
http://www.cognigencorp.com/nonmem/nm/99nov121999.html
Lewis Sheiner agreed in the same thread. Thanks to the wonders of living on a
sphere Lewis appears to agree with me the day before I made the comment :-)
=================  end quote ===================

I had been meaning to add to Ken's great email which confirms my original 
assertion about electrons.

If Santosh really wanted to calculate SEs after every "iteration" (which I 
think was Ken's interpretation of every "estimation"), then this can be done by 
running a non-parametric bootstrap with the parameter estimates produced after 
every iteration.
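
As a rough sketch of the bookkeeping (assuming each bootstrap run uses a single 
$EST step and writes the usual whitespace-delimited NONMEM .ext file, in which 
non-negative values of the ITERATION column are the interim estimates; the file 
names and column handling here are illustrative only):

    import glob
    import pandas as pd

    frames = []
    for path in glob.glob("bootstrap_run*/run.ext"):
        # Skip the "TABLE NO. ..." banner line; the next line holds the headers.
        ext = pd.read_csv(path, sep=r"\s+", skiprows=1)
        ext = ext[ext["ITERATION"] >= 0]   # keep interim iterations only
        ext["run"] = path
        frames.append(ext)
    all_runs = pd.concat(frames)

    # Across-replicate standard deviation of each parameter at each iteration
    # number, i.e. an "SE after every iteration".
    se_by_iteration = (all_runs
                       .drop(columns=["run", "OBJ"])
                       .groupby("ITERATION")
                       .std())
    print(se_by_iteration)

Whether those per-iteration numbers have any interpretive value is, of course, 
a separate question.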

I wonder if Santosh would like to spend a few hours doing that and adding to 
the nmusers collection about standard errors by reporting the results to us?


Best wishes,
Nick


--
Nick Holford, Professor Emeritus Clinical Pharmacology, MBChB, FRACP
mobile: NZ+64(21) 46 23 53 ; FR+33(6) 62 32 46 72
email: n.holf...@auckland.ac.nz
web: http://holford.fmhs.auckland.ac.nz/

-----Original Message-----
From: owner-nmus...@globomaxnm.com <owner-nmus...@globomaxnm.com> On Behalf Of 
Jeroen Elassaiss-Schaap (PD-value B.V.)
Sent: Monday, July 29, 2024 3:37 PM
To: kgkowalsk...@gmail.com; 'Santosh' <santosh2...@gmail.com>; 
nmusers@globomaxnm.com
Cc: 'Alan Maloney' <al_in_swe...@hotmail.com>; Pyry Välitalo 
<pyry.valit...@gmail.com>
Subject: Re: [NMusers] Obtaining RSE%

Dear NMusers,

This is a great reminder for us to consider the reliability of standard errors 
in our models, thanks Ken & Alan. The more non-linear the models become, the 
less reliable the standard errors are, and the more important other 
perspectives on parameter values become, such as sensitivity analysis and prior 
knowledge.

The nmusers archive has many great threads on the topic that are available to 
review, such as 
https://www.mail-archive.com/nmusers@globomaxnm.com/msg05423.html and the 
related https://www.mail-archive.com/nmusers@globomaxnm.com/msg05419.html . In 
summary, log-transformation alone can only get you so far, but it can perhaps 
be seen as a sort of minimal effort.

To add to Lewis's quote about SEs, "they are not worth the electrons used to 
compute them" (see the links), Pyry had some very interesting observations he 
shared with me about the SE of the CV of a log-normal omega: it inflates with 
higher values of omega compared to the SE of omega itself.
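
A quick delta-method sketch of that observation (writing omega for the square 
root of the OMEGA element of a log-normally distributed parameter, so that 
CV = sqrt(exp(omega^2) - 1)):

    \mathrm{SE}\bigl(\widehat{\mathrm{CV}}\bigr)
        \approx \frac{\hat{\omega}\, e^{\hat{\omega}^{2}}}
                     {\sqrt{e^{\hat{\omega}^{2}} - 1}}\;
          \mathrm{SE}(\hat{\omega}),

and the multiplier in front of SE(omega) starts near 1 for small omega and 
grows steadily as omega increases, which is one way to see the inflation Pyry 
describes.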

Best regards,

Jeroen

http://pd-value.com
jer...@pd-value.com
@PD_value
+31 6 23118438
-- More value out of your data!

On 29-07-2024 14:41, kgkowalsk...@gmail.com wrote:
>
> Dear NMusers,
>
> It was recently pointed out to me by a statistical colleague that the 
> claim in my recent NMusers post, namely that the inverse Hessian (R matrix) 
> evaluated at the maximum likelihood estimates is a consistent estimator of 
> the covariance matrix (i.e., converges to the true value with large N), is 
> only true for linear models.  For nonlinear models, the standard 
> errors produced by NONMEM and other nonlinear estimation software are 
> not only asymptotic but also approximate.  Moreover, how well that 
> approximation works will also depend on the parameterization.  This I 
> believe is one of the motivations behind “mu referencing” in NONMEM 
> and the use of log transformations of the parameters to help improve 
> Wald-based approximations.  I thank Alan Maloney for pointing this out 
> to me.
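>
> One way to see why the log scale helps, via the delta method (a standard 
> identity rather than anything NONMEM-specific): if a parameter theta is 
> estimated as its logarithm, then
>
>     \mathrm{SE}\bigl(\widehat{\log\theta}\bigr)
>         \approx \frac{\mathrm{SE}(\hat{\theta})}{\hat{\theta}}
>         = \mathrm{RSE}(\hat{\theta}),
>
> and a Wald interval built on the log scale and back-transformed, 
> \exp\bigl(\widehat{\log\theta} \pm 1.96\,\mathrm{SE}(\widehat{\log\theta})\bigr), 
> is asymmetric on the natural scale and respects the positivity of the 
> parameter.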
>
> Kind regards,
>
> Ken
>
> *From:*kgkowalsk...@gmail.com <kgkowalsk...@gmail.com>
> *Sent:* Saturday, July 27, 2024 12:36 PM
> *To:* 'Santosh' <santosh2...@gmail.com>; nmusers@globomaxnm.com
> *Subject:* RE: [NMusers] Obtaining RSE%
>
> Dear Santosh,
>
> There is a good reason for this.  Wald (1943) has shown that the 
> inverse of the Hessian (R matrix) evaluated at the maximum likelihood 
> estimates is a consistent estimator of the covariance matrix.  It is 
> based on Wald’s approximation that the likelihood surface locally near 
> the maximum likelihood estimates can be approximated by a quadratic 
> function in the parameters.  This theory does not hold for any set of 
> parameter estimates along the algorithm’s search path prior to 
> convergence to the maximum likelihood estimates.  Moreover, inverting the 
> Hessian evaluated at an interim step prior to convergence would likely be a 
> poor approximation, especially early in the search path where the gradients 
> are large (i.e., where the OFV changes substantially for a given change in 
> the parameters), since such regions likely have substantial curvature and 
> are not well approximated by a quadratic model in the parameters.
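>
> In symbols (writing \ell(\theta) for the log-likelihood and H(\hat{\theta}) 
> for the Hessian of -\ell at the maximum likelihood estimates), the quadratic 
> approximation and the resulting covariance estimator are
>
>     \ell(\theta) \approx \ell(\hat{\theta})
>         - \tfrac{1}{2}(\theta - \hat{\theta})^{\top} H(\hat{\theta})
>           (\theta - \hat{\theta}),
>     \qquad
>     \mathrm{Cov}(\hat{\theta}) \approx H(\hat{\theta})^{-1},
>
> and both statements rely on the gradient vanishing and the curvature being 
> evaluated at the maximum itself, not at an interim iterate.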
>
> Thus, the COV step in NONMEM is only applied once convergence is 
> obtained during the EST step.
>
> Wald, A. “Tests of statistical hypotheses concerning several 
> parameters when the number of observations is large.” /Trans. Amer.
> Math. Soc./ 1943;54:426.
>
> Best,
>
> Ken
>
> Kenneth G. Kowalski
>
> President
>
> Kowalski PMetrics Consulting, LLC
>
> Email: kgkowalsk...@gmail.com <mailto:kgkowalsk...@gmail.com>
>
> Cell:  248-207-5082
>
> *From:*owner-nmus...@globomaxnm.com
> <mailto:owner-nmus...@globomaxnm.com><owner-nmus...@globomaxnm.com
> <mailto:owner-nmus...@globomaxnm.com>> *On Behalf Of *Santosh
> *Sent:* Friday, July 26, 2024 3:38 AM
> *To:* nmusers@globomaxnm.com <mailto:nmusers@globomaxnm.com>
> *Subject:* [NMusers] Obtaining RSE%
>
>  Dear esteemed experts!
>
> When using one or more estimation methods and a covariance step in a 
> NONMEM control stream, the resulting ext file contains final estimates 
> (for all estimation steps) and standard errors (only for the last 
> estimation step).
>
> Is there a way to generate standard errors for every estimation step?
>
> TIA
>
> Santosh
>


