Hello,

I've been successfully using the Python NLopt interface to minimize my
(convex) functions of a rather large number of variables (100-1000)
using L-BFGS. However, as the dimensionality of the problem increases,
the results become heavily contaminated by noise.
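
For context, my setup is essentially the following (a stripped-down
sketch: the toy quadratic, n, and the tolerance are just stand-ins for
my real objective and settings):

    import numpy as np
    import nlopt

    n = 500                     # 100-1000 variables in my real problem
    A = np.random.randn(n, n)
    A = A.T @ A + np.eye(n)     # toy convex quadratic standing in for my objective
    b = np.random.randn(n)

    def f(x):
        return 0.5 * x @ A @ x - b @ x

    def grad_f(x):
        return A @ x - b

    def objective(x, grad):
        if grad.size > 0:
            grad[:] = grad_f(x)   # NLopt expects the gradient filled in place
        return f(x)

    opt = nlopt.opt(nlopt.LD_LBFGS, n)
    opt.set_min_objective(objective)
    opt.set_ftol_rel(1e-8)
    x_opt = opt.optimize(np.zeros(n))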

I thought I'd try to solve this problem by adding a penalty term
consisting of the L1-norm of the variables, since I'm specifically
looking for sparse solutions, but it doesn't seem to work: the
optimization never converges.
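
Concretely, the L1 version just adds the penalty on top of the
objective above, with sign(x) used in place of a gradient for the
penalty term (lam is a regularization weight I picked by hand):

    lam = 0.1   # hand-picked regularization weight

    def l1_objective(x, grad):
        if grad.size > 0:
            # np.sign(x) is only a subgradient: the L1 term has no
            # gradient wherever x_i == 0
            grad[:] = grad_f(x) + lam * np.sign(x)
        return f(x) + lam * np.abs(x).sum()

    opt = nlopt.opt(nlopt.LD_LBFGS, n)
    opt.set_min_objective(l1_objective)
    opt.set_ftol_rel(1e-8)
    x_l1 = opt.optimize(np.zeros(n))   # with my real f, this never converges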

Out of curiosity, I've also tried an L2-norm penalty, and that worked,
so I suspect the trouble is that the L1-norm makes the objective
non-differentiable at zero.
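
For comparison, the L2 variant that does converge only differs in the
penalty and its gradient, which is smooth everywhere:

    def l2_objective(x, grad):
        if grad.size > 0:
            grad[:] = grad_f(x) + 2.0 * lam * x   # smooth everywhere
        return f(x) + lam * (x @ x)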

Is there any built-in facility in NLopt for L1-norm regularization that
I've missed? Has anyone managed to get L1-norm regularization to work
with NLopt's gradient-based methods before?

Thanks,

-- 
Sincerely yours,
Yury V. Zaytsev


