On Tue, Dec 04, 2012 at 05:47:09AM -0500, Konstantin Berlin wrote:
Hi,
I think this is getting silly. What I am saying is not a matter of opinion but of textbook optimizations. This is not a matter of use cases, but of something that is already well established. I feel like this package is trying to reinvent the wheel in a subject that is already well
On Sun, Dec 02, 2012 at 11:23:58PM +0100, Luc Maisonobe wrote:
Gilles Sadowski gil...@harfang.homelinux.org wrote:
Hello.
I would propose to simply revert my changes on the optimization package and prepare for a reorganization for 4.0. I understand I focused only on the type of problems Gilles and myself routinely use, i.e. small size problems where the cost of the evaluation is several orders of magnitude
On Mon, Dec 03, 2012 at 11:22:21AM -0500, Konstantin Berlin wrote:
Hi,
In my view there are two related but separate issues: (1) what should we do for the 3.1 release, and (2) how do we want to design the optimization package for the future, such that it is easily extended to linear and non-linear constraints, sparse matrix operations, etc.
In regards to (1), I
On 12/01/2012 01:42 AM, Konstantin Berlin wrote:
Hi,
Now that I have some time, let me try to make my case clearly. First I want to say that this is not some attack on the automatic-differentiation package. I love automatic-differentiation and symbolic packages. I personally cannot compute a derivative without a computer for the life of me.
Hi,
My opinion is that the package should be organized by what it does rather than how it does it. My thinking is:
optim
optim.scalar
optim.scalar.linear
optim.scalar.socp (second order cone programming)
optim.scalar.qcqp
optim.scalar.nonlinear
optim.scalar.nonlinear.derivfree
On Sat, Dec 01, 2012 at 09:59:37AM -0800, Ted Dunning wrote:
Correctness isn't that hard to get. You just need to add a bitmap for exceptional values in all matrices. This bitmap can be accessed by sparse operations so that the iteration is across the union of non-zero elements in the sparse vector/matrix and exception elements in the operand.
That fact
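A minimal sketch of the idea described above, with hypothetical class names of our own (this is not an actual Commons Math API): a sparse vector keeps its non-zero entries, and an element-wise product with a dense operand iterates the union of those non-zeros and a bitmap of the operand's exceptional (NaN/infinite) entries, so that an implicit zero times NaN correctly produces NaN instead of being skipped.

```java
import java.util.BitSet;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the "exception bitmap" idea from the message above.
public class SparseWithExceptions {
    private final TreeMap<Integer, Double> nonZeros = new TreeMap<>();
    private final int dimension;

    public SparseWithExceptions(int dimension) { this.dimension = dimension; }

    public void set(int i, double v) {
        if (v != 0.0) nonZeros.put(i, v); else nonZeros.remove(i);
    }

    public double get(int i) { return nonZeros.getOrDefault(i, 0.0); }

    // Bitmap of indices in the dense operand holding NaN or +/-Infinity.
    private static BitSet exceptionBitmap(double[] dense) {
        BitSet bits = new BitSet(dense.length);
        for (int i = 0; i < dense.length; i++) {
            if (Double.isNaN(dense[i]) || Double.isInfinite(dense[i])) bits.set(i);
        }
        return bits;
    }

    // Element-wise product iterating the union of non-zero entries and
    // exceptional entries of the operand.
    public double[] ebeMultiply(double[] dense) {
        double[] result = new double[dimension]; // implicit zeros by default
        for (Map.Entry<Integer, Double> e : nonZeros.entrySet()) {
            result[e.getKey()] = e.getValue() * dense[e.getKey()];
        }
        BitSet exceptional = exceptionBitmap(dense);
        for (int i = exceptional.nextSetBit(0); i >= 0; i = exceptional.nextSetBit(i + 1)) {
            result[i] = get(i) * dense[i]; // 0 * NaN -> NaN, 0 * Inf -> NaN
        }
        return result;
    }
}
```

Without the second loop, a purely sparse iteration would silently return 0 at the NaN and infinite positions, which is the correctness problem the bitmap is meant to solve.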
I forgot to say that there are standard benchmarks for optimization algorithm developers. They are commonly used to compare different algorithms in publications. I am personally not familiar with them, but it would be easy to google them.
On Dec 1, 2012, at 1:31 PM, Gilles Sadowski
Hello.
My opinion is that the package should be organized by what it does rather than how it does it.
My proposal is based on what the user wants to do and on what input is required in order to use the tools in the given package, where all algorithms will share the same interface.
I humbly
As a user of the optimization algorithms I am completely confused by the change. It seems different from how optimization functions are typically used and seems to be creating a barrier for no reason.
I am not clear why you can't just leave the standard interface to an optimizer be a function
Hi all,
On 30/11/2012 17:33, Konstantin Berlin wrote:
As a user of the optimization algorithms I am completely confused by the change. It seems different from how optimization functions are typically used and seems to be creating a barrier for no reason.
The reason is that the framework has been done for several uses, not only
Hello.
As a user of the optimization algorithms I am completely confused by the change. It seems different from how optimization functions are typically used and seems to be creating a barrier for no reason.
If you think that it's for no reason, then you probably missed some important point:
Hi,
This is the part that confuses me. Why are you adding this complexity layer to the optimization framework, especially when this is a completely non-standard way to interface with it? If you want some fancy framework for differentiation, why not create a wrapper function?
I fully agree!
Why
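The "wrapper function" suggested above could be sketched as follows. The interface and helper names here are our own illustration, not the actual Commons Math API: the user supplies a plain value function and a plain gradient function, and an adapter presents them through whatever differentiable-function interface an optimizer expects.

```java
import java.util.function.Function;

// Hypothetical sketch of the wrapper idea: keep differentiation outside the
// optimizer, and adapt plain user functions to the optimizer's interface.
public class GradientWrapper {
    // Illustrative interface, not an actual Commons Math type.
    public interface DifferentiableFunction {
        double value(double[] x);
        double[] gradient(double[] x);
    }

    public static DifferentiableFunction wrap(Function<double[], Double> f,
                                              Function<double[], double[]> grad) {
        return new DifferentiableFunction() {
            @Override public double value(double[] x) { return f.apply(x); }
            @Override public double[] gradient(double[] x) { return grad.apply(x); }
        };
    }
}
```

With this shape, a gradient produced by hand, by finite differences, or by an automatic-differentiation package all enter the optimizer through the same adapter, which is the separation of concerns the message argues for.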
In my view the framework should be as simple as possible:

interface OptimizationFunction
{
    DiffValue value(double[] x);
}

where

class DiffValue
{
    double val;
    double[] gradient;
}

class DiffValueHessian
{
    double val;
    double[] gradient;
    double[][] Hessian;
}

or for least squares
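A runnable sketch of this proposal, with the class names from the message. The quadratic example function is our own illustration, and DiffValueJacobian is only a guess at what the truncated "or for least squares" part would contain (residuals plus their Jacobian).

```java
// Sketch of the proposed interface, with an example implementation.
public class OptimizationFunctionDemo {
    static class DiffValue {
        double val;
        double[] gradient;
    }

    static class DiffValueHessian {
        double val;
        double[] gradient;
        double[][] hessian;
    }

    // Hypothetical least-squares variant: residual vector and its Jacobian.
    static class DiffValueJacobian {
        double[] residuals;
        double[][] jacobian;
    }

    interface OptimizationFunction {
        DiffValue value(double[] x);
    }

    // Example: f(x) = sum x_i^2, with gradient 2 * x_i.
    static final OptimizationFunction SPHERE = x -> {
        DiffValue dv = new DiffValue();
        dv.gradient = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            dv.val += x[i] * x[i];
            dv.gradient[i] = 2.0 * x[i];
        }
        return dv;
    };
}
```

The point of the design is that a single method call returns the value together with whatever differentials the algorithm needs, with no differentiation machinery in the function signature.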
On 30/11/2012 19:38, Konstantin Berlin wrote:
[...]
I understood your previous messages, but am
Hi,
How you return the values is not important, though combining the two is kind of dirty and would make it harder for the user. It would also be kind of complex, and would break the OO approach, if you want to return values and the Jacobian, or extend a base class to gradients and Hessians.
I was
Hi,
After your messages, I thought we simply needed to simplify our API for optimization (and only for optimization) so as to go back to something easier for users, up to not using the differentiation framework at all. This seemed reasonable to me. It seems that now you ask for
Hi,
I don't know if people are confused about auto-differentiation; I think most people working in numerical analysis are very well aware of what it does. The issue here is that it is a completely separate subject from optimization. In a proper OO design you would not mix the two together.
Hello.
[...]
Not completely separate from the optimizer
Hi.
[...]
So I suggest we disconnect differentiation from optimization, but in a way that would let users decide how they provide the differentials. This means I would not like to reintroduce the former interfaces.
What about having the optimize() methods take two arguments