On Dec 13, 2013, at 8:48 AM, "Yury V. Zaytsev" <[email protected]> wrote:

> Hello,
> 
> I've been successfully using the Python NLopt interface to minimize my
> (convex) functions of a rather high number of variables (100-1000) using
> L-BFGS, however, when the dimensionality of the problem increases, the
> results are highly contaminated by noise.
> 
> I thought I'd try to solve this problem by adding a penalty term that
> consists of an L1-norm of the variables, because I'm specifically
> looking for sparse solutions, but it doesn't seem to work: the
> optimization never converges.

An L1-norm is not differentiable, so any gradient-based method like BFGS will 
be problematic.

There is a standard trick to transform an n-dimensional L1 into 2n 
differentiable constraints, however.  (Similar to the transformation described 
in the manual: 
http://ab-initio.mit.edu/wiki/index.php/NLopt_Introduction#Equivalent_formulations_of_optimization_problems)

Suppose you are minimizing  f(x) + |g(x)| over x, where |g(x)| is the L1 norm 
of the n-dimensional vector g(x).  You transform this to the equivalent problem:

   minimize   f(x) + sum_i t_i     over both x and t,
     x, t

subject to the 2n constraints (i = 1..n):

      t_i >= g_i(x),     t_i >= -g_i(x)

where you have introduced n additional "dummy" parameters t_i.  Then both the 
objective and constraints are differentiable (assuming f and g are 
differentiable), and you can use e.g. SLSQP, MMA, or CCSA.
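To make this concrete, here is a minimal sketch of the transformation.  It uses 
SciPy's SLSQP rather than the NLopt bindings so it is self-contained, but the 
same constraint setup carries over to NLopt's add_inequality_constraint.  The 
objective f(x) = ||x - a||^2 with penalty lam*||x||_1 (i.e. g(x) = x) and the 
values of a and lam are made-up illustrations:

```python
import numpy as np
from scipy.optimize import minimize

# Example: minimize ||x - a||^2 + lam * ||x||_1, whose exact solution is
# soft-thresholding of a.  Here g(x) = x; a and lam are arbitrary demo values.
a = np.array([3.0, -0.5])
lam = 1.0
n = len(a)

# z = [x_1..x_n, t_1..t_n]: the t_i are the dummy variables bounding |g_i(x)|.
def objective(z):
    x, t = z[:n], z[n:]
    return np.sum((x - a) ** 2) + lam * np.sum(t)

# 2n differentiable constraints: t_i >= x_i and t_i >= -x_i (so t_i >= |x_i|).
constraints = [
    {"type": "ineq", "fun": lambda z, i=i, s=s: z[n + i] - s * z[i]}
    for i in range(n) for s in (+1, -1)
]

res = minimize(objective, np.zeros(2 * n), method="SLSQP",
               constraints=constraints)
x_opt = res.x[:n]
# Soft-thresholding gives the analytic answer x* = [2.5, 0.0]: the second
# component is driven exactly to zero, which is the sparsity you are after.
```

At the optimum each t_i sits exactly at |g_i(x)|, so the penalized objective 
equals the original non-smooth one.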

--SGJ

PS. There are also various other ways to "robustify" your solution, e.g. by 
minimizing the worst case (maximum) over some uncertainties.  This is a minimax 
problem and requires the transformation described in the manual to make it 
differentiable (but only requires one additional dummy variable rather than n). 
  Google "robust optimization" for more information.
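The minimax transformation works the same way with a single dummy variable: 
minimize t over (x, t) subject to t >= f_i(x) for each scenario i.  A tiny 
sketch, again with SciPy's SLSQP and two made-up scenario functions:

```python
from scipy.optimize import minimize

# Worst case over two scenarios: minimize max((x-1)^2, (x+1)^2).
# Transformed problem: z = [x, t]; minimize t subject to t >= f_i(x).
constraints = [
    {"type": "ineq", "fun": lambda z: z[1] - (z[0] - 1.0) ** 2},
    {"type": "ineq", "fun": lambda z: z[1] - (z[0] + 1.0) ** 2},
]

res = minimize(lambda z: z[1], [0.5, 5.0], method="SLSQP",
               constraints=constraints)
x_opt, t_opt = res.x
# The two parabolas cross at x = 0, so the minimax point is x = 0, t = 1.
```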
_______________________________________________
NLopt-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/nlopt-discuss