Dear Steven,

Thank you for getting back to me!

On Fri, 2013-12-13 at 14:59 -0500, Steven G. Johnson wrote:

> There is a standard trick to transform an n-dimensional L1 into 2n
> differentiable constraints, however.  (Similar to the transformation
> described in the manual: [...])

I see, so that would be your suggested approach... I have finally found
an excellent technical report that sums up the available options [1],
and that is what they recommend too (see Section 4, Constrained
Optimization).

The downside seems to be, however, that I'll have to double the number
of dimensions (going from 1000 to 2000 variables really sounds like a
stretch), and also work out the gradients with respect to the new
variables, which is both difficult and expensive.
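
Just so it is on the record for the archives, here is a minimal sketch
of how I currently understand the splitting trick, using the NLopt
Python bindings; the smooth objective f, the weight lam and the choice
of LD_MMA below are hypothetical placeholders, not my actual problem:

    import numpy as np
    import nlopt

    n = 1000    # original number of variables
    lam = 0.1   # hypothetical L1 weight

    def f(x, grad):
        # hypothetical smooth part, f(x) = ||x - 1||^2
        if grad.size > 0:
            grad[:] = 2.0 * (x - 1.0)
        return float(np.dot(x - 1.0, x - 1.0))

    # Split x = xp - xm with xp, xm >= 0, so that ||x||_1 = sum(xp + xm)
    # and the only constraints left are differentiable bound constraints.
    def f_split(z, grad):
        xp, xm = z[:n], z[n:]
        g = np.empty(n)
        val = f(xp - xm, g)
        if grad.size > 0:
            grad[:n] = g + lam    # d/dxp_i = df/dx_i + lam
            grad[n:] = -g + lam   # d/dxm_i = -df/dx_i + lam
        return val + lam * float(np.sum(z))

    opt = nlopt.opt(nlopt.LD_MMA, 2 * n)  # any bound-constrained algorithm
    opt.set_lower_bounds(np.zeros(2 * n))
    opt.set_min_objective(f_split)
    opt.set_xtol_rel(1e-6)
    z = opt.optimize(np.zeros(2 * n))
    x = z[:n] - z[n:]   # recover the original variables

In this toy form the extra gradient entries just fall out of the chain
rule; in my real problem it is the bookkeeping and the doubled
dimension that worry me.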

In this light, the "patched up" L-BFGS-B versions (see Section 2,
Sub-Gradient Strategies) look much more appealing, but I'm afraid I
can't afford to venture into adding any of those to NLopt, even though
some free implementations seem to exist :-(

[1]: Mark Schmidt, Glenn Fung, Romer Rosales. "Optimization Methods
for L1-Regularization." UBC Technical Report TR-2009-19, 2009.
http://www.cs.ubc.ca/cgi-bin/tr/2009/TR-2009-19.pdf

> PS. There are also various other ways to "robustify" your solution,
> e.g. by minimizing the worst case (maximum) over some uncertainties.
> This is a minimax problem and requires the transformation described in
> the manual to make it differentiable (but only requires one additional
> dummy variable rather than n). Google "robust optimization" for more
> information.

Thanks for this pointer too!

I will have a look, although in my problem the L1 norm comes out
naturally when one tries to incorporate the priors, which makes the
choice of this regularizer easy to justify, so I would prefer to
explore that option first...
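
Still, for future reference, my reading of the minimax reformulation
is roughly the following, again in the Python bindings; the scenario
functions g_i below are hypothetical placeholders:

    import numpy as np
    import nlopt

    n = 10  # hypothetical number of variables (and scenarios)

    # minimize max_i g_i(x)  ==>  minimize t  s.t.  g_i(x) - t <= 0,
    # i.e. a single extra dummy variable t appended to x.

    def g(i, x, grad_x):
        # hypothetical i-th scenario, g_i(x) = (x_i - i)^2
        if grad_x is not None:
            grad_x[:] = 0.0
            grad_x[i] = 2.0 * (x[i] - i)
        return float((x[i] - i) ** 2)

    def objective(z, grad):
        if grad.size > 0:
            grad[:] = 0.0
            grad[-1] = 1.0    # only the dummy variable t is minimized
        return float(z[-1])

    def make_constraint(i):
        def c(z, grad):
            x, t = z[:n], z[-1]
            if grad.size > 0:
                gx = np.empty(n)
                val = g(i, x, gx)
                grad[:n] = gx
                grad[-1] = -1.0
            else:
                val = g(i, x, None)
            return val - t    # g_i(x) - t <= 0
        return c

    opt = nlopt.opt(nlopt.LD_MMA, n + 1)
    opt.set_min_objective(objective)
    for i in range(n):
        opt.add_inequality_constraint(make_constraint(i), 1e-8)
    z0 = np.zeros(n + 1); z0[-1] = 100.0  # feasible: t > max_i g_i(x0)
    z = opt.optimize(z0)
    x, worst_case = z[:n], z[-1]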

-- 
Sincerely yours,
Yury V. Zaytsev


