If the algorithm works properly, you should get exactly the same answer using a linear or a log scale for the parameters.
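As a quick sanity check of that invariance (not from the original thread: simulated data, and "optim" rather than "nlm", purely for illustration), the fitted Weibull parameters should agree on either scale:

```r
set.seed(1)
AM <- rweibull(200, shape = 2, scale = 5)

## negative log-likelihood on the linear scale (bounded away from 0)
nll.lin <- function(p) -sum(dweibull(AM, shape = p[1], scale = p[2], log = TRUE))

## the same model, parameterized by log(shape) and log(scale)
nll.log <- function(p) -sum(dweibull(AM, shape = exp(p[1]), scale = exp(p[2]), log = TRUE))

fit.lin <- optim(c(1, 1), nll.lin, method = "L-BFGS-B", lower = c(1e-8, 1e-8))
fit.log <- optim(c(0, 0), nll.log)

## the estimates should agree to optimizer tolerance
print(fit.lin$par)
print(exp(fit.log$par))
```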

The bigger question is not bias but the accuracy of the normal approximation used for confidence intervals and regions. I have evaluated this by making contour plots of the log(likelihood): I use "outer" to compute it over an appropriate grid of the parameters, then "contour" [or "image" with "contour(..., add=TRUE)"] to see the result. After I get a picture, I may specify the levels, using the fact that 2*log(likelihood ratio) is approximately chi-square with 2 degrees of freedom. The normality assumption says that the contours should be close to elliptical. I've also fit log(likelihood) to a parabola in the parameters, possibly after deleting points beyond the 0.001 level for chi-square(2). If I get a good fit, I'm happy. If not, I try a different parameterization.
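A sketch of that contour check (simulated data and hypothetical grid limits; the qchisq() levels correspond to the chi-square(2) cutoffs mentioned above):

```r
set.seed(1)
AM <- rweibull(100, shape = 2, scale = 5)

## log-likelihood as a function of log(shape) and log(scale)
ll <- function(lsh, lsc) sum(dweibull(AM, shape = exp(lsh), scale = exp(lsc), log = TRUE))

lshape <- seq(log(1.2), log(3.2), length = 41)
lscale <- seq(log(3.5), log(7),   length = 41)
z <- outer(lshape, lscale, Vectorize(ll))

## contours at max - qchisq(p, 2)/2: approximate 50%, 90%, 95%, 99% regions
contour(lshape, lscale, z,
        levels = max(z) - qchisq(c(0.5, 0.9, 0.95, 0.99), df = 2) / 2,
        xlab = "log(shape)", ylab = "log(scale)")
```

If the normal approximation is good, these contours should look close to nested ellipses.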

When I've done this, I've found that I tend to get more nearly normal contours by moving the constraint to (-Inf) rather than leaving it at 0, i.e., by optimizing over log(shape) and log(scale) so the boundary at 0 is mapped to (-Inf).

Bates and Watts (1988, Nonlinear Regression Analysis and Its Applications, Wiley) report that the "parameter effects" curvature is typically vastly greater than the "intrinsic curvature" of the nonlinear manifold onto which a response vector is projected by nonlinear least squares. That is a different setting from maximum likelihood, but I believe the same principle would still likely apply.

Does this make sense? spencer graves

p.s. I don't understand what you are saying about "0.41 3.70 1.00" below. You give a set of three numbers when you are trying to estimate only two parameters, and you are getting NAs, Infs and NaNs. Are you printing "x" when the log(likelihood) is NA, NaN or Inf? If so, is one component of "x" <= 0?
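For what it's worth, a zero observation makes the Weibull log-likelihood non-finite for any shape != 1, which would explain NA/Inf warnings on the original data (illustrative parameter values only):

```r
## the density at 0 blows up when shape < 1 and vanishes when shape > 1
dweibull(0, shape = 0.5, scale = 1, log = TRUE)  # Inf
dweibull(0, shape = 2,   scale = 1, log = TRUE)  # -Inf
## so -sum(dweibull(..., log = TRUE)) is non-finite whenever the data contain a 0
```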

Eric Rescorla wrote:

Spencer Graves <[EMAIL PROTECTED]> writes:


     I have not used "nlm", but that happens routinely with function
     minimizers trying to test negative values for one or more
     components of x.  My standard approach to something like this is
     to parameterize "llfunc" in terms of log(shape) and log(scale),
     as follows:

     llfunc <- function(x) {
       -sum(dweibull(AM, shape = exp(x[1]), scale = exp(x[2]), log = TRUE))
     }

Have you tried this? If no, I suspect the warnings will
disappear when you try this.



This works. I've got some more questions, though:


(1) Does it introduce bias to work with the logs like this?

(2) My original data set had zero values. I added .5 experimentally,
   which is how I got to this data set. This procedure doesn't work
   on the original data set.

   Instead I get (with the numbers below being the values
   that caused problems):

[1] 0.41 3.70 1.00
[1] 0.41 3.70 1.00
[1] 0.410001 3.700000 1.000000
[1] 0.410000 3.700004 1.000000
[1] 0.410000 3.700000 1.000001
Warning messages:
1: NA/Inf replaced by maximum positive value
2: NA/Inf replaced by maximum positive value
3: NA/Inf replaced by maximum positive value
4: NA/Inf replaced by maximum positive value


Thanks,
-Ekr




______________________________________________ [EMAIL PROTECTED] mailing list https://www.stat.math.ethz.ch/mailman/listinfo/r-help
