On Tue, Nov 23, 2010 at 3:37 AM, Sebastian Walter
sebastian.wal...@gmail.com wrote:
On Tue, Nov 23, 2010 at 11:17 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
I'm not familiar with dichotomy optimization.
On Tue, Nov 23, 2010 at 08:21:55AM +0100, Matthieu Brucher wrote:
optimize.fmin can be enough; I don't know it well enough. Nelder-Mead
is not a constrained optimization algorithm, so you can't specify an
outer hull.
I saw that, after a bit more reading.
As for the integer part, I don't know
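Since Nelder-Mead is unconstrained, the usual workaround for an outer hull is a penalty: return an infinite value outside the allowed region. A minimal sketch, with a made-up objective and a simple box standing in for the hull (any membership test works the same way):

```python
import numpy as np
from scipy.optimize import fmin

# Made-up smooth objective with its minimum at (3, 2); the real
# objective comes from the fitting problem discussed in the thread.
def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

# Nelder-Mead itself is unconstrained, so return an infinite value
# outside the allowed region; the simplex then never settles there.
lo, up = 0.0, 5.0

def penalized(x):
    x = np.asarray(x)
    if np.any(x < lo) or np.any(x > up):
        return np.inf
    return f(x)

xopt = fmin(penalized, [1.0, 1.0], disp=False)
```

This only works cleanly when the starting simplex lies inside the region; a large finite penalty is a common variant when infinities cause trouble.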
Hello Gael,
On Tue, Nov 23, 2010 at 10:27 AM, Gael Varoquaux
gael.varoqu...@normalesup.org wrote:
On Tue, Nov 23, 2010 at 10:18:50AM +0100, Matthieu Brucher wrote:
The problem is that I can't tell the Nelder-Mead that the smallest jump
it should attempt is .5. I can set xtol to .5, but it
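One common trick for the minimum-jump issue (a sketch, not necessarily what was done here): snap the parameters to the half-integer grid inside the objective, so that moves smaller than .5 cannot change the function value. The objective below is a made-up stand-in:

```python
import numpy as np

# Made-up objective; the real one comes from the fitting problem.
def f(x):
    return (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2

# Wrapper that snaps every parameter to the nearest multiple of 0.5,
# so the optimizer only ever "sees" the half-integer grid.
def f_snapped(x):
    return f(np.round(np.asarray(x) * 2.0) / 2.0)
```

The optimizer then works on a piecewise-constant surface, which Nelder-Mead tolerates better than gradient methods do, though it can still stall on a flat plateau.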
On Tue, Nov 23, 2010 at 11:13:23AM +0100, Sebastian Walter wrote:
I'm not familiar with dichotomy optimization.
Several techniques have been proposed to solve the problem: genetic
algorithms, simulated annealing, Nelder-Mead and Powell.
To be honest, I find it quite confusing that these
On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
min_x f(x)
s.t. lo <= Ax + b <= up
      0 <= g(x)
      0  = h(x)
No constraints.
didn't you say that you operate only in some convex hull?
No. I have an initial guess that allows me to specify a convex hull
On Tue, Nov 23, 2010 at 02:47:10PM +0100, Sebastian Walter wrote:
Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt:
Wouldn't it be better to change the model than to modify the
optimization algorithm?
In this case, that's not
On Tue, Nov 23, 2010 at 10:19:06AM -0500, josef.p...@gmail.com wrote:
I have a Nelder-Mead that seems to be working quite well on a few toy
problems.
Assuming your function is well behaved, one possible idea is to try
replacing the integer objective function with a continuous
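The suggestion above can be illustrated with a one-dimensional sketch: replace the integer-only objective by its piecewise-linear interpolant, which agrees with it at the integers and is continuous in between, so a continuous optimizer can be applied and the result rounded at the end. The objective is a made-up stand-in:

```python
import math

# Hypothetical integer-only objective (e.g. error as a function of the
# number of rows), defined only at integer n.
def f_int(n):
    return (n - 7) ** 2

# Continuous relaxation: linear interpolation between floor(x) and
# ceil(x). Agrees with f_int at integers, continuous in between.
def f_relaxed(x):
    lo, hi = math.floor(x), math.ceil(x)
    if lo == hi:
        return float(f_int(lo))
    t = x - lo
    return (1 - t) * f_int(lo) + t * f_int(hi)
```

Whether this helps depends on the function being well behaved, as noted above: the interpolant's minima still sit at integer points only when the underlying trend is smooth enough.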
On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
At first glance it looks as if a relaxation is simply not possible:
either there are additional rows or not.
But with some technical transformations it is possible to
On Tue, Nov 23, 2010 at 07:14:56PM +0100, Matthieu Brucher wrote:
Jumping in a little late, but it seems that simulated annealing might
be a decent method here: take random steps (drawing from a
distribution of integer step sizes), reject steps that fall outside
the fitting range, and
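The simulated-annealing recipe above can be sketched as follows. Everything here (objective, bounds, step distribution, cooling schedule) is illustrative, not taken from the actual problem:

```python
import math
import random

# Hypothetical integer objective standing in for the real fitting error.
def f(n):
    return (n - 12) ** 2

# Simulated annealing along the lines suggested above: integer steps,
# rejection of moves outside the fitting range, Metropolis acceptance
# with a linearly cooled temperature.
def anneal(n, lo=0, hi=50, steps=2000, t0=10.0, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    best = n
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9
        cand = n + rng.choice([-3, -2, -1, 1, 2, 3])
        if cand < lo or cand > hi:     # step falls outside the range
            continue
        delta = f(cand) - f(n)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            n = cand                   # accept the move
        if f(n) < f(best):
            best = n
    return best
```

Tracking `best` separately from the current state is worthwhile: annealing may wander uphill late in the run, but the best point visited is never lost.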
Hi list,
does anybody have, or know where I can find some N dimensional dichotomy
optimization code in Python (BSD licensed, or equivalent)?
Worst case, it does not look too bad to code, but I am interested in any
advice. I haven't done my reading yet, and I don't know how ill-posed a problem
2010/11/22 Gael Varoquaux gael.varoqu...@normalesup.org:
Hi list,
Hi ;)
does anybody have, or know where I can find some N dimensional dichotomy
optimization code in Python (BSD licensed, or equivalent)?
I don't know of any code, but it shouldn't be too difficult by going
through a KdTree.
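For reference, one naive reading of "N dimensional dichotomy" can be sketched without any KdTree at all, under the assumption of a unimodal, coordinate-separable objective on a box: repeatedly bisect the box along its longest axis, keeping the half whose quarter-point scores better.

```python
import numpy as np

# Naive N-dimensional dichotomy (bisection) sketch. Assumes f is
# unimodal and coordinate-separable on the box [lo, hi]; this is an
# illustration, not the KdTree approach mentioned above.
def dichotomy(f, lo, hi, iters=200):
    lo = np.asarray(lo, dtype=float).copy()
    hi = np.asarray(hi, dtype=float).copy()
    for _ in range(iters):
        axis = int(np.argmax(hi - lo))       # longest edge of the box
        mid = 0.5 * (lo[axis] + hi[axis])
        center = 0.5 * (lo + hi)
        left, right = center.copy(), center.copy()
        left[axis] = 0.5 * (lo[axis] + mid)  # quarter points along axis
        right[axis] = 0.5 * (mid + hi[axis])
        if f(left) < f(right):
            hi[axis] = mid                   # keep the lower half
        else:
            lo[axis] = mid                   # keep the upper half
    return 0.5 * (lo + hi)
```

On a non-separable or non-unimodal function this halving rule can discard the half containing the optimum, which is presumably where the more careful spatial-structure approaches come in.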
On Mon, Nov 22, 2010 at 09:12:45PM +0100, Matthieu Brucher wrote:
Hi ;)
Hi bro
does anybody have, or know where I can find some N dimensional
dichotomy optimization code in Python (BSD licensed, or equivalent)?
I don't know of any code, but it shouldn't be too difficult by going
through a
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
It seems that a simplex is what you need.
Ha! I am learning new fancy words. Now I can start looking clever.
I realize that maybe I should rephrase my question to try and draw more
out of the common wealth of knowledge on
On Mon, Nov 22, 2010 at 11:12:26PM +0100, Matthieu Brucher wrote:
It seems that a simplex is what you need. It uses the barycenter (more
or less) to find a new point in the simplex. And it works well only on
convex functions (but in fact almost all functions have an issue with
this :D)
One
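To make the barycenter remark concrete, here is a sketch of the single reflection move at the heart of the simplex (Nelder-Mead) method: reflect the worst vertex through the barycenter of the remaining vertices. The full algorithm also has expansion, contraction, and shrink moves, all omitted here.

```python
import numpy as np

# One reflection step of the simplex idea described above: compute the
# barycenter of every vertex except the worst one, then reflect the
# worst vertex through it, keeping the new point if it improves.
def reflect_worst(simplex, f):
    vals = [f(v) for v in simplex]
    worst = int(np.argmax(vals))
    others = [v for i, v in enumerate(simplex) if i != worst]
    centroid = np.mean(others, axis=0)             # the barycenter
    reflected = centroid + (centroid - simplex[worst])
    if f(reflected) < vals[worst]:
        simplex[worst] = reflected
    return simplex
```

On a convex function the reflected point usually improves; on non-convex functions the simplex can collapse around a local minimum, which is the caveat made above.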