If I understand your proposal correctly, then it
probably isn't a good idea.

A derivative-based optimization algorithm is going
to get upset whenever it sees negative infinity.
Genetic algorithms, simulated annealing (and, I think,
Nelder-Mead) will be okay when they see infinity,
but if all infeasible solutions have the value negative
infinity, then you are not giving the algorithm any clue
about which direction to go.
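
For example, something along these lines (an untested
sketch -- 'violation' and 'loglik' are placeholders for
your real functions) gives the optimizer a slope to
follow instead of a cliff:

    loglik.safe <- function(par) {
        v <- violation(par)           # hypothetical: 0 if feasible, else how far out
        if (v > 0)
            return(-1e10 * (1 + v))   # finite, and worse the further out you go
        loglik(par)                   # hypothetical: the real log-likelihood
    }
    ## then maximize, e.g.:
    ## optim(start, loglik.safe, control = list(fnscale = -1))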

Pat

Mathieu Ribatet wrote:
Dear Patrick (and others),

Well, I used Sylvester's criterion (which is equivalent) to test for this. But unfortunately, this is not the only issue! To sum up quickly, it's more or less like geostatistics. Consequently, I have several infeasible regions (covariance, margins and others). The problem is that the infeasible regions may be large and sometimes lead to optimization issues - even when the starting values are well chosen. This is why I wonder whether setting a $-\infty$ myself in the composite likelihood function is appropriate here.
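
By the way, in R a quick numerical test is whether the Cholesky factorization succeeds, since chol() fails on a matrix that is not positive definite; a rough sketch:

    is.pd <- function(S)
        !inherits(try(chol(S), silent = TRUE), "try-error")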

However, you might be right that a tolerance value 'eps' is better than the theoretical bound (eigenvalues > 0).
Thanks for your tips,
Best,
Mathieu


Patrick Burns wrote:
If the positive definiteness of the covariance
is the only issue, then you could base a penalty on:

eps - smallest.eigen.value

if the smallest eigenvalue is smaller than eps.
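
In code that might look something like this (untested;
'cov.matrix' and 'loglik' stand in for whatever you
actually have):

    loglik.pen <- function(par, eps = 1e-6) {
        S   <- cov.matrix(par)        # hypothetical: covariance implied by 'par'
        lam <- min(eigen(S, symmetric = TRUE, only.values = TRUE)$values)
        if (lam < eps)                # not (numerically) positive definite
            return(-1e10 - 1e6 * (eps - lam))   # penalty grows with eps - lam
        loglik(par)                   # hypothetical: the real composite log-likelihood
    }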

Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and "A Guide for the Unwilling S User")

Mathieu Ribatet wrote:
Thanks Ben for your tips.
I'm not sure it'll be so easy to do (as the non-feasible regions
depend on the model parameters), but I'm sure it's worth a try.
Thanks !!!
Best,

Mathieu

Ben Bolker wrote:
Mathieu Ribatet <mathieu.ribatet <at> epfl.ch> writes:


Dear list,

I'm currently writing C code to compute the (composite)
likelihood - well, this is done, but it is not really robust. The
C code is wrapped in an R function which calls the optimizer
routine - optim or nlm. However, the fitting procedure is far from
robust, as the parameter space depends on the parameters - for
example, I have a covariance matrix that should be a valid one.

  One reasonably straightforward hack to deal with this is
to add a penalty that is (e.g.) a quadratic function of the
distance from the feasible region, if that distance is
easy enough to compute -- that way your function will
get gently pushed back toward the feasible region.
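
  Something like this, perhaps (a sketch only;
'dist.to.feasible' is a hypothetical helper):

    loglik.pen <- function(par) {
        d <- dist.to.feasible(par)    # hypothetical: 0 inside the feasible region
        if (d == 0)
            return(loglik(par))       # hypothetical: the real log-likelihood
        ## outside the feasible region the likelihood itself may not be
        ## computable, so use a floor minus a quadratic penalty instead
        -1e10 - 1e4 * d^2
    }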

  Ben Bolker

______________________________________________
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
