There are numerous examples when the optimization might not have
converged to the stopping condition, but the minimum discovered point
is better than the starting point that was initially provided. The
user should have the ability to at least decide if it is good enough,
or use it as a starting
A Johnson wrote:
On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
...@harfang.homelinux.org wrote:
Hi.
[Please do not top-post.]
On Mon, Nov 19, 2012 at 09:17:39PM -0500, Konstantin Berlin wrote:
That would solve the problem. Seems a bit messy though, since you now have
two conflicting stopping conditions, one you provide in the checker
implementation, the other
On Nov 20, 2012, at 7:56 AM, Gilles Sadowski gil...@harfang.homelinux.org
wrote:
On Tue, Nov 20, 2012 at 05:08:23AM -0500, Konstantin Berlin wrote:
I am ok with option 2.
Thanks. I'll add new constructors if nobody has objections. Please note that
by using custom checkers (even those
As a user of the optimization algorithms I am completely confused by the
change. It seems different from how optimization functions are typically used,
and seems to be creating a barrier for no reason.
I am not clear why you can't just let the standard interface to an optimizer
be a function
On Nov 30, 2012, at 12:52 PM, Luc Maisonobe luc.maison...@free.fr wrote:
Hi all,
On 30/11/2012 17:33, Konstantin Berlin wrote:
On Nov 30, 2012, at 1:12 PM, Gilles Sadowski gil...@harfang.homelinux.org
wrote:
Hello.
In my view the framework should be as simple as possible.
class OptimizationFunction
{
    public DiffValue value(double[] x);
}
where
class DiffValue
{
    double val;
    double[] gradient;
}
class DiffValueHessian
{
    double val;
    double[] gradient;
    double[][] hessian;
}
or for least squares
:
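The interface sketched above can be made concrete. Here is a minimal compilable version, with a trivial quadratic as a caller-side usage example; the quadratic and the lambda are purely illustrative, not part of the proposal:

```java
/** Value and gradient of a differentiable function at a point. */
class DiffValue {
    double val;
    double[] gradient;
}

/** The single entry point an optimizer would require in this proposal. */
interface OptimizationFunction {
    DiffValue value(double[] x);
}

class Demo {
    public static void main(String[] args) {
        // Illustrative function: f(x) = x^2, with gradient 2x.
        OptimizationFunction f = x -> {
            DiffValue d = new DiffValue();
            d.val = x[0] * x[0];
            d.gradient = new double[] { 2.0 * x[0] };
            return d;
        };
        DiffValue d = f.value(new double[] { 3.0 });
        System.out.println(d.val + " " + d.gradient[0]); // prints "9.0 6.0"
    }
}
```

Because OptimizationFunction has a single abstract method, users can supply it as a lambda without touching any differentiation framework.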
On 30/11/2012 19:38, Konstantin Berlin wrote:
I understood your previous messages
Hi,
After your messages, I thought we simply needed to simplify our API for
optimization (and only for optimization) so as to go back to something
easier for users, up to not using the differentiation framework at
all. This seemed reasonable to me. It seems that now you ask for
Hi,
I don't know if people are confused about auto-differentiation; I think most
people working in numerical analysis are very well aware of what it does. The
issue here is that it is a completely separate subject from optimization. In a
proper OO design you would not mix the two together.
Hi,
Now that I have some time, let me try to make my case clearly. First I want to
say that this is not some attack on the automatic-differentiation package. I
love automatic-differentiation and symbolic packages. I personally cannot
compute a derivative without a computer for the life of me.
Hi,
My opinion is that the package should be organized by what it does rather than
how it does it. My thinking is
optim
optim.scalar
optim.scalar.linear
optim.scalar.socp (second order cone programming)
optim.scalar.qcqp
optim.scalar.nonlinear
optim.scalar.nonlinear.derivfree
Hi,
Hello.
I forgot to say that there are commonly used benchmarks for optimization
algorithm developers. They are commonly used to compare different algorithms in
publications. I am personally not familiar with them, but it would be easy to
google them.
On Dec 1, 2012, at 1:31 PM, Gilles Sadowski
I would propose to simply revert my changes on the optimization package
and prepare for a reorganization for 4.0. I understand I focused only on
the type of problems Gilles and myself routinely use, i.e. small size
problems
where the cost of the evaluation is several orders of magnitude
My opinion is that the package should be organized by what it does rather
than how it does it.
My proposal is based on what the user wants to do and on what input is
required in order to use the tools in the given package, where all
algorithms will share the same interface.
I humbly
the
algorithms. I am very sad to say, but I barely have time to even reply to these
emails.
I look forward to hearing what other people think,
Konstantin
On Dec 3, 2012, at 8:16 AM, Gilles Sadowski gil...@harfang.homelinux.org
wrote:
On Sat, Dec 01, 2012 at 04:58:30PM -0500, Konstantin Berlin wrote
Hi,
I think this is getting silly. What I am saying is not a matter of opinions but
of textbook optimization. This is not a matter of use cases, but of something
that is already well established. I feel like this package is trying to
reinvent the wheel, in a subject that is already well
Hi,
-1
Per issue MATH-902:
It's hard for me to read all the code, but at first pass it seems that in
BOBYQAOptimizer, and maybe others, no convergence criterion is set. So there is
actually no way to set the maximum number of iterations.
On that note. The optimization data is parsed at runtime,
, at 1:58 PM, Gilles Sadowski gil...@harfang.homelinux.org
wrote:
On Tue, Dec 18, 2012 at 01:24:06PM -0500, Konstantin Berlin wrote:
Hi,
I am not clear on what the use of weights in a non-least-squares problem is. In a
least-squares problem the weights can simply be taken in by the user in their
input function. So the weights option is a convenience feature; any
non-standard matrix weights could be programmed in by the user, if
Hi,
I can understand Dimitri's frustration, it seems the optimization framework
gets worse with every iteration. However, we should probably look forward and
think about how to design it properly instead.
Several times I brought out some problems and ideas about the package and it
seems the only person who has an opinion is Gilles.
I will list what I consider to be major problems
1) The OO design is bad, too much inheritance which could be handled better
by interfaces,
We may have gone
That's what I was going to write. At the moment, the current implementation
for sparse matrices and vector is deprecated, for lack of convincing fixes
for the bugs which have been found. These bugs are mainly related to the
fact that zero --the unstored value-- is signed, and the sign is
I think this issue can be overcome by proper OO design, and hacked on to
the current interface for now.
We can create an InPlace interface that all linear operators (including
RealMatrix etc) would implement. For example in a hypothetical
InPlaceMatrix class
function InPlaceVector
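The idea above might look like the following sketch. All of the names here (InPlaceOperator, InPlaceMatrix, operateInPlace) are hypothetical, chosen only to illustrate the hack-on-the-current-interface suggestion; nothing in this shape exists in Commons Math:

```java
/** Hypothetical contract for linear operators that write results into
 *  caller-supplied storage, avoiding temporary allocations. */
interface InPlaceOperator {
    /** Computes y = this * x, storing the result into y. */
    void operateInPlace(double[] x, double[] y);
}

/** A dense matrix implementing the hypothetical in-place contract. */
class InPlaceMatrix implements InPlaceOperator {
    private final double[][] data;

    InPlaceMatrix(double[][] data) {
        this.data = data;
    }

    @Override
    public void operateInPlace(double[] x, double[] y) {
        for (int i = 0; i < data.length; i++) {
            double sum = 0.0;
            for (int j = 0; j < x.length; j++) {
                sum += data[i][j] * x[j];
            }
            y[i] = sum; // written into caller-owned storage, no allocation
        }
    }

    public static void main(String[] args) {
        InPlaceMatrix m = new InPlaceMatrix(new double[][] { { 1, 2 }, { 3, 4 } });
        double[] y = new double[2];
        m.operateInPlace(new double[] { 1, 1 }, y);
        System.out.println(y[0] + " " + y[1]); // prints "3.0 7.0"
    }
}
```

A GPU- or vector-processor-backed matrix could implement the same interface with a completely different storage strategy, which is the point of keeping the contract free of visitor callbacks.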
performance with simple code in those
applications is a real virtue.
On Sat, Dec 29, 2012 at 3:22 PM, Konstantin Berlin kber...@gmail.com wrote:
While visitor pattern is a good abstraction, I think it would make for
terrible linear algebra performance. All operations should be based on
basic
Hi,
My opinion is that the whole weights fiasco is a consequence of improper
design, as much as anything else. All components should be as simple as
possible, with any additional add-ons, like weights, not added to the base
implementation, but instead done as an extension of these classes. If
Also, if you have a GPU implementation of a matrix, or use another type of
vector processor, there is no way you can program that in if you force vector
operations to use a visitor pattern.
On Dec 29, 2012, at 6:43 PM, Konstantin Berlin kber...@gmail.com wrote:
That's a good point about
On Dec 29, 2012, at 7:08 PM, Ted Dunning ted.dunn...@gmail.com wrote:
Who said force? Linear algebra operations should be available.
Visitors should be available.
Your mileage will vary. That was always true.
On Sat, Dec 29, 2012 at 3:59 PM, Konstantin Berlin kber...@gmail.com wrote:
Also
On Dec 30, 2012, at 2:26 PM, Sébastien Brisard sebastien.bris...@m4x.org
wrote:
Hello,
Using the visitor pattern as a way to implement in-place basic linear
algebra operations was only a suggestion, I am not sure it's the best
option, even if I do think it's a simple and elegant option.
Here are my 2 cents:
The rationale for the redesign of the RealMatrix hierarchy is that a large
number of numerical packages depend (or should depend) on a linear algebra
package for efficient implementation. However, as it currently stands, it is
not possible (or unnecessarily burdensome) to
As for efficiency (or faster execution, if you want), I don't see the
point in doubting that tasks like global search (e.g. in a genetic
algorithm) will complete in less time when run in parallel...
Just a quick note. This statement is incorrect. Parallelization should be done
at the
On Feb 7, 2013, at 11:05 AM, Gilles gil...@harfang.homelinux.org wrote:
On Thu, 07 Feb 2013 07:01:42 -0800, Phil Steitz wrote:
On 2/7/13 4:58 AM, Gilles wrote:
On Wed, 06 Feb 2013 09:46:55 -0800, Phil Steitz wrote:
On 2/6/13 9:03 AM, Gilles wrote:
On Wed, 06 Feb 2013 07:19:47 -0800, Phil
Sorry, but none of this is making sense to me. We had a long discussion
about how the library doesn't test for large-scale problem
performance. A lot of algorithms probably do not scale well as a
result. There was talk of dropping sparse support in linear algebra.
So instead of fixing that, you
Just a few thoughts.
It seems to me that this code suffers from redundant function evaluations. I am
not sure what to think about the movement from edges, since it violates proper
behavior for well-behaved functions in order to work for some special cases. In
case of infinite integrals it
with sigma=1000.
Cheers,
-Ajo
On Mon, Jul 1, 2013 at 3:17 PM, Konstantin Berlin kber...@gmail.com wrote:
I am not understanding the discussion here. Adaptive integration is
designed for functions that have different behavior in different
regions. Some regions are smoother, some have higher frequencies. How
you integrate a divided region, with Simpson's rule or whatever, is a separate
question.
Adaptive
the accuracy and versatility of AQ are being ignored in
favor of the familiarity of LGQ in the apache codebase. If there are tests
that AQ fails, I'll update my opinion.
Cheers,
Ajo
On Mon, Jul 1, 2013 at 6:49 PM, Konstantin Berlin kber...@gmail.com
wrote:
I am not understanding
On Tue, Jul 2, 2013 at 9:33 AM, Konstantin Berlin kber...@gmail.com
wrote:
At first glance it seems you are still computing redundant points. See the example
that I posted, where I propagate three function values, not two.
In regards to improvement, I am not an expert on different integration
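The "propagate three function values" point can be illustrated with a standard adaptive Simpson sketch (this is illustrative, not the code under discussion in the thread): f(a), f(m), f(b) are passed down the recursion, so each subdivision costs only two new evaluations instead of re-evaluating all five.

```java
import java.util.function.DoubleUnaryOperator;

class AdaptiveSimpson {
    /** Integrates f on [a, b] to roughly the requested tolerance,
     *  reusing the three endpoint/midpoint values at every level. */
    static double integrate(DoubleUnaryOperator f, double a, double b, double tol) {
        double m = 0.5 * (a + b);
        double fa = f.applyAsDouble(a);
        double fm = f.applyAsDouble(m);
        double fb = f.applyAsDouble(b);
        return step(f, a, b, fa, fm, fb, simpson(a, b, fa, fm, fb), tol);
    }

    private static double simpson(double a, double b, double fa, double fm, double fb) {
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb);
    }

    private static double step(DoubleUnaryOperator f, double a, double b,
                               double fa, double fm, double fb,
                               double whole, double tol) {
        double m = 0.5 * (a + b);
        double lm = 0.5 * (a + m);
        double rm = 0.5 * (m + b);
        // Only two NEW evaluations per subdivision; fa, fm, fb are reused.
        double flm = f.applyAsDouble(lm);
        double frm = f.applyAsDouble(rm);
        double left = simpson(a, m, fa, flm, fm);
        double right = simpson(m, b, fm, frm, fb);
        if (Math.abs(left + right - whole) < 15.0 * tol) {
            // Richardson extrapolation of the two Simpson estimates.
            return left + right + (left + right - whole) / 15.0;
        }
        return step(f, a, m, fa, flm, fm, left, 0.5 * tol)
             + step(f, m, b, fm, frm, fb, right, 0.5 * tol);
    }
}
```

For example, `AdaptiveSimpson.integrate(x -> x * x, 0.0, 1.0, 1e-9)` recovers 1/3 without any wasted evaluations, since Simpson's rule is exact for cubics.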
Hi,
Am I missing something? Why is a linear least squares problem (fitting
constants of basis functions) being solved using a non-linear least squares
solver in the first place?
On Jul 17, 2013, at 11:16 AM, Gilles gil...@harfang.homelinux.org wrote:
Hello.
Constructors of classes in the
Hi,
I have two points on this
1) See issue MATH-1009
2) If LM were always better there would be no GaussNewton. Clearly this is not
the case.
LM is a mixture between GN and steepest descent, so it is only faster for
tougher functions. In the case of a strictly convex function GN should be a good
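The "mixture" relationship just described, in standard notation (residual vector $r$, its Jacobian $J$, damping parameter $\lambda$; this is the textbook formulation, not code from the thread):

```latex
\underbrace{(J^{\mathsf T} J)\,\Delta = -J^{\mathsf T} r}_{\text{Gauss--Newton}}
\qquad\qquad
\underbrace{(J^{\mathsf T} J + \lambda I)\,\Delta = -J^{\mathsf T} r}_{\text{Levenberg--Marquardt}}
```

As $\lambda \to 0$ the LM step reduces to the GN step; for large $\lambda$, $\Delta \approx -\tfrac{1}{\lambda} J^{\mathsf T} r$, which is a (scaled) steepest-descent step. The damping is what buys robustness on tougher functions at the cost of speed on strictly convex ones.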
:16:46 -0400, Konstantin Berlin wrote:
Hi,
I have two points on this
1) See issue MATH-1009
This is not directly related to my question (about cleanup of
_existing_ code); it should thus be discussed in another thread.
2) If LM were always better there would be no GaussNewton. Clearly
I appreciate the comment. I would like to help, but currently my schedule is
full. Maybe towards the end of the year.
I think the first approach should be to do no harm. The optimization package keeps
getting refactored every few months without much thinking involved. We had the
discuss
The question is how correctly to do AQ and how to correctly handle
improper integrals. What I would appreciate is some real numerical
analysis or pointers to where we can do the research to support what
Ajo is asking us to commit. Just running tests on one problem
instance demonstrates
Depending on the size of the array, due to caching, it might be faster to
do a single pass and compute min and max at same time.
Is there a function that would compute min and max on a comparable
collection?
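The single-pass idea for an array can be sketched as follows (the class and method names are illustrative; a version for a collection of Comparables would be analogous):

```java
class MinMax {
    /** Returns {min, max} of a non-empty array in one pass,
     *  touching each element once to stay cache-friendly. */
    static double[] minMax(double[] a) {
        double min = a[0];
        double max = a[0];
        for (int i = 1; i < a.length; i++) {
            double v = a[i];
            if (v < min) {
                min = v;
            } else if (v > max) {
                max = v;
            }
        }
        return new double[] { min, max };
    }

    public static void main(String[] args) {
        double[] r = minMax(new double[] { 3.0, -1.0, 7.0, 2.0 });
        System.out.println(r[0] + " " + r[1]); // prints "-1.0 7.0"
    }
}
```

Compared with two separate passes, this reads the array once, which is exactly the caching argument made above for large arrays.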
On Tue, Jul 23, 2013 at 1:25 PM, Phil Steitz phil.ste...@gmail.com wrote:
On 7/23/13
My thinking is that a good guess can increase speed of refitting under
minor changes. There might be lots of reasons why a user might already have
a good idea of the solution.
On Mon, Aug 5, 2013 at 9:13 AM, Gilles gil...@harfang.homelinux.org wrote:
On Mon, 05 Aug 2013 04:27:11 +0200, Gilles
Terrible function name also :)
On Tue, Aug 6, 2013 at 10:28 AM, Ajo Fod ajo@gmail.com wrote:
When does this become an issue?
-Ajo
On Sun, Aug 4, 2013 at 9:42 AM, Phil Steitz phil.ste...@gmail.com wrote:
On 8/4/13 7:44 AM, Ajo Fod wrote:
Guys,
What is the use of
Could we create an o.a.c.m.experimental sub-package, and put
competing designs inside it: e.g.
o.a.c.m.experimental.evan
o.a.c.m.experimental.gilles
for people to test with 3.3, and postpone a decision until we can see
better the differences? Maybe this would allow us to pick all the better
Hi,
As a quick first pass, I will comment on Evans code only for now. Most
of my comments have to do with numerical aspects and associated
design, rather than threading.
I like the idea of the optimize function taking in a Problem class. I
think it can add a lot of flexibility going forward.
There would be an advantage, true. I don't know if commons has one
(doesn't look like it). You can also try http://www.joptimizer.com/
On Thu, Aug 15, 2013 at 4:59 PM, Ajo Fod ajo@gmail.com wrote:
Hello,
Isn't there an advantage to being able to compute the Jacobian of the
gradient
Joptimizer is restricted to solving convex problems.
My application is to minimize a generic non-linear function with linear
constraints. Know of anything that does it?
-Ajo
On Thu, Aug 15, 2013 at 5:48 PM, Konstantin Berlin kber...@gmail.com wrote:
There would
Like you said, least-squares optimization is a specific (restricted)
type of a general convex optimization problem. Since it's a restricted
type of an optimization problem it can potentially be more efficiently
solved than the general convex optimization problem. The advantage in
this specific case
We already discussed that some time ago.
Let's not mix different issues. Removing the handling of weights can be
dealt with later.
I disagree; please address my points in regards to this, rather than
making a generalizing statement.
Having weights included makes it harder to implement a
the rest
of the optimizers.
On 08/15/2013 09:12 AM, Konstantin Berlin wrote:
Hi,
Before someone spends time rewriting the whole package, wouldn't we want the
ability to comment on a skeleton design that might not pass unit tests?
On Tuesday, August 20, 2013, Gilles wrote:
On Tue, 20 Aug 2013 14:55:51 -0700, Ajo Fod wrote:
I agree that in general test are necessary to
I think I like this approach a lot more.
One thing that is left out, and that we discussed a lot, is the need to
support vector operations. This can also be done as an additional
interface. This is critical since it forms the basic building block
on which all higher-level matrix operations rely.
I looked only at the GaussNewton class as a general guide to how
things work. I like it a lot! I only wish all of the optimizations
were rewritten in this way.
Several comments
1) I believe this code can now be simplified
// build the linear problem
final double[] b = new
2013 15:11:22 -0400, Konstantin Berlin wrote:
at 9:26 AM, Evan Ward evan.w...@nrl.navy.mil wrote:
On 08/26/2013 03:11 PM, Konstantin Berlin wrote:
+1
On Nov 8, 2013 3:28 PM, Thomas Neidhart thomas.neidh...@gmail.com wrote:
On 11/08/2013 09:20 PM, Phil Steitz wrote:
On 11/8/13 11:59 AM, Ted Dunning wrote:
On Fri, Nov 8, 2013 at 11:47 AM, Luc Maisonobe l...@spaceroots.org
wrote:
is there still consensus that we are going to remove
Hi,
I am really having problems believing that matrix copying is the major
problem in an optimization algorithm. Copying is O(N^2) operations. Surely,
for any problem where performance matters, it is completely dwarfed by
the O(N^3) complexity of actually solving the normal equations.
Also,
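For concreteness (my own arithmetic, not figures from the thread): a dense copy of an $n \times n$ matrix costs about $n^2$ operations, while a Cholesky solve of the normal equations costs about $n^3/3$ flops, so the copy is a vanishing fraction of the solve as $n$ grows:

```latex
\text{copy} \sim n^2, \qquad
\text{solve} \sim \frac{n^3}{3}, \qquad
\frac{\text{copy}}{\text{solve}} \sim \frac{3}{n}
\quad \bigl(n = 1000 \;\Rightarrow\; \approx 0.3\%\bigr).
```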