Hi, Peter:

Thanks for the comment and reply.

I generally avoid constrained optimizers for three reasons:

1. My experience with them includes many cases where the optimizer stopped with an error after testing parameter values that violated the constraints. If I transform the parameter space to remove the constraints, that never happens (see the first sketch after this list). The constrained optimizers in R 2.0.1 may not exhibit this behavior, but I have not checked.

2. In a few cases, I've plotted the log(likelihood) vs. parameter values under various transformations. When I've done that, the log(likelihood) was typically most nearly parabolic under an unconstrained parameterization (see the second sketch below). This makes asymptotic normality more useful and increases the accuracy of simple, approximate sequential Bayesian procedures.

3. When I think carefully about a particular application, I often find a rationale for claiming that a certain unconstrained parameterization describes the application better. For example, interest income on investments is essentially additive on the log scale. Similarly, the concept of "materiality" in accounting is closer to constant on the log scale: one might look for an error of a few Euros in the accounts of a very small business, but in auditing some major government accounts, errors on the order of a few Euros might not be investigated. Also, measurement errors at microvolt scales are much smaller than at megavolt scales; expressing the measurements in decibels (i.e., on the log scale) makes the measurement errors more nearly comparable (see the third sketch below).
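
To illustrate point 1, here is a minimal sketch in R with hypothetical data and a single Poisson rate lambda > 0: optimizing over theta = log(lambda) with plain optim() can never propose an inadmissible rate, so the failure mode described in point 1 cannot occur.

x <- c(3, 5, 2, 8, 4)               # hypothetical Poisson counts
negll <- function(theta) {          # theta = log(lambda) is unconstrained
  lambda <- exp(theta)              # lambda > 0 holds by construction
  -sum(dpois(x, lambda, log = TRUE))
}
fit <- optim(par = 0, fn = negll, method = "BFGS")
exp(fit$par)                        # back-transform to the MLE of lambda (here, mean(x))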
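
For point 2, a small sketch of the kind of plot I mean, using hypothetical simulated data; the curve is visibly closer to a parabola in log(lambda) than in lambda:

set.seed(1)                         # hypothetical example data
x <- rpois(20, lambda = 2)
ll <- function(lambda) sum(dpois(x, lambda, log = TRUE))
lam <- seq(0.5, 6, length = 101)
op <- par(mfrow = c(1, 2))
plot(lam, sapply(lam, ll), type = "l",
     xlab = "lambda", ylab = "log(likelihood)")
plot(log(lam), sapply(lam, ll), type = "l",
     xlab = "log(lambda)", ylab = "log(likelihood)")
par(op)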
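
And for point 3, a tiny numerical illustration with a hypothetical 1% relative error: the absolute errors span twelve orders of magnitude, but on the decibel (log) scale the error is the same at every level.

v   <- c(1e-6, 1, 1e6)              # volts: microvolt, volt, megavolt scales
err <- 0.01 * v                     # hypothetical 1% relative measurement error
err                                 # absolute errors: 1e-08, 0.01, 1e+04
20 * log10((v + err) / v)           # in dB: about 0.0864 at every scale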

Thanks again for your comments. Best Wishes,
Spencer Graves


Peter Dalgaard wrote:

Spencer Graves <[EMAIL PROTECTED]> writes:



Hi, Peter: What do you do in such situations? Sundar Dorai-Raj
and I have extended "glm" concepts to models driven by a sum of k
independent Poissons, with a linear model for log(defectRate[i])
for each source (i = 1:k). To handle convergence problems, etc., I
think we need to use informative Bayes, but we're not there yet. In
any context where things are done more than once [which covers most
human activities], informative Bayes seems sensible. A related
question comes with data representing the differences between Poisson
counts, e.g., with d[i] = X[i]-X[i-1] = the number of new defects
added between steps i-1 and i in a manufacturing process. Most of the
time, d[i] is nonnegative. However, in some cases it can be
negative, either because of metrology errors in X[i] or because of
defect removal between steps i-1 and i. Comments?



I haven't got all that much experience with it, but obviously, the various algorithms for constrained optimization (box- or otherwise) at least allow you to find a proper maximum likelihood estimator.
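
As an aside on the difference-of-Poissons question quoted above: if X[i] and X[i-1] could be treated as independent Poisson counts with means mu1 and mu2 (a simplifying assumption that ignores the metrology and defect-removal issues raised), then d = X[i] - X[i-1] would follow the Skellam distribution. A minimal sketch of its density in R, with hypothetical means:

dskellam <- function(d, mu1, mu2) {
  # P(N1 - N2 = d) for independent N1 ~ Pois(mu1), N2 ~ Pois(mu2)
  exp(-(mu1 + mu2)) * (mu1 / mu2)^(d / 2) *
    besselI(2 * sqrt(mu1 * mu2), nu = abs(d))
}
d <- -3:10
plot(d, dskellam(d, mu1 = 4, mu2 = 1), type = "h",
     xlab = "d = X[i] - X[i-1]", ylab = "probability")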





--
Spencer Graves, PhD, Senior Development Engineer
O: (408)938-4420; mobile: (408)655-4567

