There are numerous examples where the optimization might not have
converged to the stopping condition, but the minimum discovered point
is still better than the starting point that was initially provided.
The user should at least have the ability to decide whether it is good
enough, or to use it as a starting point.
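In the meantime a user can at least recover that point by wrapping the
objective; a minimal sketch, assuming minimization and the
o.a.c.m.analysis.MultivariateFunction interface:

import org.apache.commons.math3.analysis.MultivariateFunction;

// Records the best point seen during evaluation, so it survives
// even if the optimizer aborts without returning a result.
class BestPointRecorder implements MultivariateFunction
{
    private final MultivariateFunction f;
    double bestValue = Double.POSITIVE_INFINITY;
    double[] bestPoint;

    BestPointRecorder(MultivariateFunction f) { this.f = f; }

    public double value(double[] x)
    {
        double v = f.value(x);
        if (v < bestValue)
        {
            bestValue = v;
            bestPoint = x.clone();
        }
        return v;
    }
}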
>> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
>>
>>> There are numerous examples when the optimization might not have
>>> converged to the stopping condition, but the minimum discovered point
>>> is better than the starting point that was initially
On Nov 19, 2012, at 10:06 PM, Gilles Sadowski
wrote:
> Hi.
>
> [Please do not top-post.]
>
> On Mon, Nov 19, 2012 at 09:17:39PM -0500, Konstantin Berlin wrote:
>> That would solve the problem. Seems a bit messy though, since you now have
>> two conflicting stopping conditions.
On Nov 20, 2012, at 7:56 AM, Gilles Sadowski
wrote:
> On Tue, Nov 20, 2012 at 05:08:23AM -0500, Konstantin Berlin wrote:
>> I am ok with option 2.
>
> Thanks. I'll add new constructors if nobody has objections. Please note that
> by using "custom" checkers (ev
As a user of the optimization algorithms I am completely confused by the
change. It seems different from how optimization functions are typically used
and seems to be creating a barrier for no reason.
I am not clear why you can't just leave the standard interface to an optimizer
as a function th
On Nov 30, 2012, at 12:52 PM, Luc Maisonobe wrote:
> Hi all,
>
> Le 30/11/2012 17:33, Konstantin Berlin a écrit :
>> As a user of the optimization algorithms I am completely confused by
>> the change. It seems different from how optimization functions are
>> typically used
On Nov 30, 2012, at 1:12 PM, Gilles Sadowski
wrote:
> Hello.
>
>> As a user of the optimization algorithms I am completely confused by the
>> change. It seems different from how optimization functions are typically used
>> and seems to be creating a barrier for no reason.
>
> If you think th
In my view the framework should be as simple as possible.
interface OptimizationFunction
{
    DiffValue value(double[] x);
}
where
class DiffValue
{
    double val;
    double[] gradient;
}
class DiffValueHessian
{
    double val;
    double[] gradient;
    double[][] hessian;
}
or for least squares
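To make it concrete, usage could look like this (a sketch under the proposal
above; Rosenbrock is just an arbitrary example):

// f(x) = (1 - x0)^2 + 100 * (x1 - x0^2)^2, with its gradient.
class Rosenbrock implements OptimizationFunction
{
    public DiffValue value(double[] x)
    {
        double a = 1 - x[0];
        double b = x[1] - x[0] * x[0];
        DiffValue dv = new DiffValue();
        dv.val = a * a + 100 * b * b;
        dv.gradient = new double[] {
            -2 * a - 400 * x[0] * b, // df/dx0
            200 * b                  // df/dx1
        };
        return dv;
    }
}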
Le 30/11/2012 19:38, Konstantin Berlin a écrit :
>> In my view the framework should be as simple as possible.
>>
>> class OptimizationFunction
>> {
>> public DiffValue value(double[] x)
>> }
>>
>> where
>>
>> class DiffValue
>>
>
Hi,
>
> After your messages, I thought we simply needed to simplify our API for
> optimization (and only for optimization) so as to go back to something
> more easy for users, up to not using the differentiation framework at
> all. This seemed reasonable to me. It seems that now you ask for
>
Hi,
I don't know if people are confused about auto-differentiation; I think most
people working in numerical analysis are very well aware of what it does. The
issue here is that it is a completely separate subject from optimization. In a
proper OO design you would not mix the two together. Comp
Hi,
Now that I have some time, let me try to make my case clearly. First I want to
say that this is not some attack on the automatic-differentiation package. I
love automatic-differentiation and symbolic packages. I personally cannot
compute a derivative without a computer for the life of me. And
Hi,
My opinion is that the package should be organized by what it does, rather than
how it does it. My thinking is
optim
optim.scalar
optim.scalar.linear
optim.scalar.socp (second-order cone programming)
optim.scalar.qcqp
optim.scalar.nonlinear
optim.scalar.nonlinear.derivfree
optim.scalar.no
Hi,
> Hello.
>
>>
>> Now that I have some time, let me try to make my case clearly. First I want
>> to say that this is not some attack on the automatic-differentiation package.
>> I love automatic-differentiation and symbolic packages. I personally cannot
>> compute a derivative without a computer
I forgot to say that there are commonly used benchmarks for optimization
algorithm developers. They are commonly used to compare different algorithms in
publications. I am personally not familiar with them, but it would be easy to
google them.
On Dec 1, 2012, at 1:31 PM, Gilles Sadowski
wrote:
>
> I would propose to simply revert my changes on the optimization package
> and prepare for a reorganization for 4.0. I understand I focused only on
> the type of problems Gilles and I routinely use, i.e. small-size problems
> where the cost of the evaluation is several orders of magnitude
>>
>> My opinion is that the package should be organized by what it does, rather
>> than how it does it.
>
> My proposal is based on what the user wants to do and on what input is
> required in order to use the tools in the given package, where all
> algorithms will share the same interface.
>
> There are probably various ways to look at a "problem".
>
> For example, I have a least-squares "problem" but evaluating the derivatives
> is expensive (finite differences), so I nevertheless use the classes in
> "direct" (no derivatives).
>
> [We had another discussion (raised by someone who det
there is a lack of manpower to improve the
algorithms. I am very sad to say it, but I barely have time to even reply to
these emails.
I look forward to hearing what other people think,
Konstantin
On Dec 3, 2012, at 8:16 AM, Gilles Sadowski
wrote:
> On Sat, Dec 01, 2012 at 04:58:30PM -0500
Hi,
I think this is getting silly. What I am saying is not a matter of opinion but
of textbook optimization. This is not a matter of use cases, but of something
that is already well established. I feel like this package is trying to
reinvent the wheel in a subject that is already well known,
-1
Per issue MATH-902:
It's hard for me to read all the code, but at first pass it seems that in
BOBYQAOptimizer, and maybe for others, no convergence criterion is set. So there
is actually no way to set the maximum number of iterations.
On that note, the optimization data is parsed at runtime, whi
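For reference, at least the number of function evaluations can be capped by
passing MaxEval as optimization data to optimize(); a sketch, assuming the
o.a.c.m.optim classes of the 3.1 API and a user-supplied MultivariateFunction f:

import org.apache.commons.math3.analysis.MultivariateFunction;
import org.apache.commons.math3.optim.InitialGuess;
import org.apache.commons.math3.optim.MaxEval;
import org.apache.commons.math3.optim.PointValuePair;
import org.apache.commons.math3.optim.SimpleBounds;
import org.apache.commons.math3.optim.nonlinear.scalar.GoalType;
import org.apache.commons.math3.optim.nonlinear.scalar.ObjectiveFunction;
import org.apache.commons.math3.optim.nonlinear.scalar.noderiv.BOBYQAOptimizer;

class Budgeted
{
    static PointValuePair minimize(MultivariateFunction f, double[] start)
    {
        // 2n + 1 interpolation points is the usual recommendation.
        BOBYQAOptimizer opt = new BOBYQAOptimizer(2 * start.length + 1);
        return opt.optimize(
            new MaxEval(1000),        // hard cap on function evaluations
            new ObjectiveFunction(f),
            GoalType.MINIMIZE,
            new InitialGuess(start),
            SimpleBounds.unbounded(start.length));
    }
}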
1:58 PM, Gilles Sadowski
wrote:
> On Tue, Dec 18, 2012 at 01:24:06PM -0500, Konstantin Berlin wrote:
>> -1
>>
>> Per issue MATH-902:
>>
>> It's hard for me to read all the code, but at first pass it seems that in
>> BOBYQAOptimizer, maybe for others, no
Hi,
I am not clear on what the use of weights in a non-least-squares problem is. In a
least-squares problem the weights can simply be folded in by the user in their
input function. So the weights option is a convenience feature; any
non-standard matrix weights could be programmed in by the user, if t
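To illustrate: for diagonal weights, minimizing sum_i w_i r_i(x)^2 is the same
as minimizing sum_i (sqrt(w_i) r_i(x))^2, so the user can fold weights in with
a trivial wrapper (a sketch, using o.a.c.m.analysis.MultivariateVectorFunction):

import org.apache.commons.math3.analysis.MultivariateVectorFunction;

class Weighting
{
    // Wrap an unweighted residual function so that a standard
    // (unweighted) least-squares solver minimizes the weighted problem.
    static MultivariateVectorFunction weighted(
            final MultivariateVectorFunction residuals, final double[] w)
    {
        return new MultivariateVectorFunction()
        {
            public double[] value(double[] x)
            {
                double[] r = residuals.value(x);
                for (int i = 0; i < r.length; i++)
                {
                    r[i] *= Math.sqrt(w[i]); // scale residual i by sqrt(w_i)
                }
                return r;
            }
        };
    }
}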
fix. Maybe
>> such a test would have been needed BEFORE the release!
>
> Sure, but for the record the feature was also a last-minute change. This
> was discussed on the list, and the final decision was to add this
> feature even though the release was close. No wonder we failed to test
Hi,
I can understand Dimitri's frustration; it seems the optimization framework
gets worse with every iteration. However, we should probably look forward and
think about how to design it properly instead.
Several times I brought up some problems and ideas about the package, and it
seems the only person who has an opinion is Gilles.
>>
>> Several times I brought up some problems and ideas about the package, and it
>> seems the only person who has an opinion is Gilles.
>>
>> I will list what I consider to be major problems
>> 1) The OO design is bad, too much inheritance which could be handled better
>> by interfaces,
> We
On Dec 28, 2012, at 12:23 PM, Dimitri Pourbaix wrote:
> Hi,
>
>>> I can understand Dimitri's frustration, it seems the optimization
>> framework gets worse with every iteration. However, we should probably
>> look forward and think about how to design it properly instead.
>>
>> +1 - though we
>
> That's what I was going to write. At the moment, the current implementation
> for sparse matrices and vectors is deprecated, for lack of convincing fixes
> for the bugs which have been found. These bugs are mainly related to the
> fact that zero --the unstored value-- is signed, and the sign is
>>
>> I think this issue can be overcome by proper OO design, and hacked on to
>> the current interface for now.
>>
>> We can create an "InPlace" interface that all linear operators (including
>> RealMatrix etc) would implement. For example in a hypothetical
>> InPlaceMatrix class
>>
>> function
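To sketch the idea (hypothetical; the name InPlaceOps and the signature are
mine, not an existing commons-math API):

import org.apache.commons.math3.linear.RealVector;

// Hypothetical mix-in for linear operators that can apply
// themselves without allocating a result vector.
interface InPlaceOps
{
    // y <- A x, writing into a caller-supplied vector
    void operateInPlace(RealVector x, RealVector y);
}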
just isn't feasible for lots of operations in real
> applications. Getting solid performance with simple code in those
> applications is a real virtue.
>
> On Sat, Dec 29, 2012 at 3:22 PM, Konstantin Berlin wrote:
>
>> While visitor pattern is a good abstraction, I think
Hi,
My opinion is that the whole weights fiasco is a consequence of improper
design, as much as anything else. All components should be as simple as
possible, with any additional add-ons, like weights, not added to the base
implementation, but instead done as an extension of these classes. If
Also, if you have a GPU implementation of a matrix, or use another type of
vector processor, there is no way you can program that in if you force vector
operations to use a visitor pattern.
On Dec 29, 2012, at 6:43 PM, Konstantin Berlin wrote:
> That's a good point about the com
Dec 29, 2012, at 7:08 PM, Ted Dunning wrote:
> Who said force? Linear algebra operations should be available.
>
> Visitors should be available.
>
> Your mileage will vary. That was always true.
>
> On Sat, Dec 29, 2012 at 3:59 PM, Konstantin Berlin wrote:
>
On Dec 30, 2012, at 2:26 PM, Sébastien Brisard
wrote:
> Hello,
>
> Using the visitor pattern as a way to implement in-place basic linear
> algebra operations was only a suggestion, I am not sure it's the best
> option, even if I do think it's a simple and elegant option. However...
>
>
> I t
Here are my 2 cents:
The rationale for the redesign of the RealMatrix hierarchy is that a large
number of numerical packages depend (or should depend) on a linear algebra
package for efficient implementation. However, as it currently stands, it is
not possible (or unnecessarily burdensome) to cr
>
> As for efficiency (or faster execution, if you want), I don't see the
> point in doubting that tasks like global search (e.g. in a genetic
> algorithm) will complete in less time when run in parallel...
Just a quick note. This statement is incorrect. Parallelization should be done
at the co
On Feb 7, 2013, at 11:05 AM, Gilles wrote:
> On Thu, 07 Feb 2013 07:01:42 -0800, Phil Steitz wrote:
>> On 2/7/13 4:58 AM, Gilles wrote:
>>> On Wed, 06 Feb 2013 09:46:55 -0800, Phil Steitz wrote:
On 2/6/13 9:03 AM, Gilles wrote:
> On Wed, 06 Feb 2013 07:19:47 -0800, Phil Steitz wrote:
>>>
Sorry, but none of this is making sense to me. We had a long discussion
about how the library doesn't test for large-scale problem
performance. A lot of algorithms probably do not scale well as a
result. There was talk of dropping sparse support in linear algebra.
So instead of fixing that, you ju
Just a few thoughts.
It seems to me that this code suffers from redundant function evaluations. I am
not sure what to think about the movement away from the edges, since it violates
proper behavior for well-behaved functions in order to work for some special
cases. In case of infinite integrals it might
it will likely fail at the depth required for
> integration of the PDF with sigma=1000.
>
> Cheers,
> -Ajo
>
>
> On Mon, Jul 1, 2013 at 3:17 PM, Konstantin Berlin wrote:
>
>> Just a few thoughts.
>>
>> It seems to me that this code suffers from re
I am not understanding the discussion here. Adaptive integration is
designed for functions that have different behavior in different
regions. Some regions are smoother, some have higher frequencies. How
you integrate a subdivided region, by Simpson's rule or whatever, is a
separate question.
Adaptive integ
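To make the separation concrete, here is a textbook recursive adaptive Simpson
sketch (my own illustration, not the code under discussion). The subdivision
logic is independent of the base rule, and three function values per interval
are propagated to avoid redundant evaluations:

import org.apache.commons.math3.analysis.UnivariateFunction;

final class AdaptiveSimpson
{
    // Integrate f over [a, b] to tolerance tol by recursive bisection.
    // fa, fm, fb are f(a), f((a+b)/2), f(b), carried down so no point
    // is ever evaluated twice.
    static double integrate(UnivariateFunction f, double a, double b,
                            double fa, double fm, double fb,
                            double whole, double tol, int depth)
    {
        double m  = 0.5 * (a + b);
        double lm = f.value(0.5 * (a + m)); // midpoint of left half
        double rm = f.value(0.5 * (m + b)); // midpoint of right half
        double h6    = (b - a) / 12;
        double left  = h6 * (fa + 4 * lm + fm);
        double right = h6 * (fm + 4 * rm + fb);
        if (depth <= 0 || Math.abs(left + right - whole) <= 15 * tol)
        {
            // Richardson extrapolation of the two half-interval estimates.
            return left + right + (left + right - whole) / 15;
        }
        return integrate(f, a, m, fa, lm, fm, left,  tol / 2, depth - 1)
             + integrate(f, m, b, fm, rm, fb, right, tol / 2, depth - 1);
    }

    static double integrate(UnivariateFunction f, double a, double b,
                            double tol)
    {
        double fa = f.value(a), fm = f.value(0.5 * (a + b)), fb = f.value(b);
        double whole = (b - a) / 6 * (fa + 4 * fm + fb);
        return integrate(f, a, b, fa, fm, fb, whole, tol, 50);
    }
}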
I think Gilles requires that I use the weight generation
> scheme he has worked with in the codebase in order to consider the code
> usable in Apache MATH.
>
> In summary, I feel the accuracy and versatility of AQ are being ignored in
> favor of the familiarity of LGQ in the ap
>
> Cheers,
> -Ajo
>
>
>
>
> On Tue, Jul 2, 2013 at 9:33 AM, Konstantin Berlin
> wrote:
>
> > At first it seems you are still computing redundant points. See my example
> > that I posted, where I propagate 3 function values, not two.
> >
Hi,
Am I missing something? Why is a linear least-squares problem (fitting
coefficients of basis functions) being solved using a non-linear least-squares
solver in the first place?
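For reference, the linear fit can be done directly with the existing
o.a.c.m.linear classes, with no initial guess and no iterations; a sketch,
where a[i][j] is the j-th basis function evaluated at the i-th sample point:

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.ArrayRealVector;
import org.apache.commons.math3.linear.QRDecomposition;
import org.apache.commons.math3.linear.RealVector;

class LinearFit
{
    // Least-squares fit: minimize ||A c - y||^2 for the coefficients c.
    // QR is the standard dense approach for an overdetermined system.
    static RealVector fit(double[][] a, double[] y)
    {
        return new QRDecomposition(new Array2DRowRealMatrix(a))
                .getSolver()
                .solve(new ArrayRealVector(y));
    }
}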
On Jul 17, 2013, at 11:16 AM, Gilles wrote:
> Hello.
>
> Constructors of classes in the "o.a.c.m.fitting" are inst
Hi,
I have two points on this:
1) See issue MATH-1009.
2) If LM were always better there would be no GaussNewton. Clearly this is not
the case.
LM is a mixture between GN and steepest descent, so it is only faster for
"tougher" functions. In the case of a strictly convex function GN should be a good
am
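For reference, the damped normal equations make that mixture explicit. Where GN
solves (J^T J) p = -J^T r for the step p, LM solves

    (J^T J + lambda * I) p = -J^T r

which reduces to the GN step as lambda -> 0, and approaches a scaled
steepest-descent step, p ~ -(1/lambda) J^T r, as lambda grows large.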
u about several reasons why it shouldn't
be done without further thinking, yet somehow you feel that it is not enough. I
guess you will do whatever you wanted to do anyway, but I would hope you
listen to what I just said.
On Jul 18, 2013, at 9:34 AM, Gilles wrote:
> On Thu, 18 Jul 2013 09:
I appreciate the comment. I would like to help, but currently my schedule is
full. Maybe towards the end of the year.
I think the first approach should be to do no harm. The optimization package
keeps getting refactored every few months without much thinking involved. We
had this discussion previously
The question is how to correctly do AQ and how to correctly handle
> improper integrals. What I would appreciate is some real numerical
> analysis or pointers to where we can do the research to support what
> Ajo is asking us to commit. Just running tests on one problem
> instance demonstrates n
Depending on the size of the array, due to caching, it might be faster to
do a single pass and compute the min and max at the same time.
Is there a function that would compute the min and max of a comparable
collection?
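Something like this is what I have in mind (a sketch; as far as I know commons
does not provide it as a single call):

class MinMax
{
    // Single pass: one load of each element serves both comparisons,
    // instead of streaming the array through the cache twice.
    static double[] minMax(double[] values)
    {
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double v : values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        return new double[] { min, max };
    }
}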
On Tue, Jul 23, 2013 at 1:25 PM, Phil Steitz wrote:
> On 7/23/13 10:17 AM, Ajo Fod wr
My thinking is that a good guess can increase the speed of refitting after
minor changes. There might be lots of reasons why a user might already have
a good idea of the solution.
On Mon, Aug 5, 2013 at 9:13 AM, Gilles wrote:
> On Mon, 05 Aug 2013 04:27:11 +0200, Gilles wrote:
>
>> Hi.
>>
>> Please
Terrible function name also :)
On Tue, Aug 6, 2013 at 10:28 AM, Ajo Fod wrote:
> When does this become an issue?
>
> -Ajo
>
>
> On Sun, Aug 4, 2013 at 9:42 AM, Phil Steitz wrote:
>
> > On 8/4/13 7:44 AM, Ajo Fod wrote:
> > > Guys,
> > >
> > > What is the use of inverseCumAccuracy when people w
Could we create an "o.a.c.m.experimental" sub-package, and put
> "competing" designs inside it: e.g.
> o.a.c.m.experimental.evan
> o.a.c.m.experimental.gilles
> for people to test with 3.3, and postpone a decision until we can see
> better the differences? Maybe this would allow us to pick all the
Hi,
As a quick first pass, I will comment on Evan's code only for now. Most
of my comments have to do with numerical aspects and the associated
design, rather than threading.
I like the idea of the optimize function taking in a Problem class. I
think it can add a lot of flexibility going forward.
Fir
There would be an advantage, true. I don't know if commons has one
(doesn't look like it). You can also try http://www.joptimizer.com/
On Thu, Aug 15, 2013 at 4:59 PM, Ajo Fod wrote:
> Hello,
>
> Isn't there an advantage to being able to compute the Jacobian of the
> gradient precisely at a point
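For what it's worth, the commons autodiff package can already give exact
second derivatives (the Jacobian of the gradient); a small sketch with
DerivativeStructure:

import org.apache.commons.math3.analysis.differentiation.DerivativeStructure;

public class HessianDemo
{
    public static void main(String[] args)
    {
        // f(x, y) = x^2 * y at (x, y) = (3, 2), derivatives up to order 2:
        // arguments are (free parameters, order, variable index, value).
        DerivativeStructure x = new DerivativeStructure(2, 2, 0, 3.0);
        DerivativeStructure y = new DerivativeStructure(2, 2, 1, 2.0);
        DerivativeStructure f = x.multiply(x).multiply(y);

        System.out.println(f.getPartialDerivative(1, 1)); // d2f/dxdy = 2x = 6
        System.out.println(f.getPartialDerivative(2, 0)); // d2f/dx2  = 2y = 4
    }
}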
ike Joptimizer is restricted to solving convex problems.
>
> My application is to minimize a generic non-linear function with linear
> constraints. Know of anything that does it?
>
> -Ajo
>
>
> On Thu, Aug 15, 2013 at 5:48 PM, Konstantin Berlin
>
> >wrote:
>
Like you said, least-squares optimization is a specific (restricted)
type of the general convex optimization problem. Since it's a restricted
type of optimization problem, it can potentially be solved more efficiently
than the general convex optimization problem. The advantage in
this specific case
> We already discussed that some time ago.
> Let's not mix different issues. Removing the handling of weights can be dealt
> with later.
>
>
I disagree; please address my points in regard to this, rather than
making a generalizing statement.
Having weights included makes it harder to implement a
ch the rest
> of the optimizers.
>
>
> On 08/15/2013 09:12 AM, Konstantin Berlin wrote:
>> Hi,
>>
>> As a quick first pass, I will comment on Evans code only for now. Most
>> of my comments have to do with numerical aspects and associated
>> design, rather
Before someone spends time rewriting the whole package, wouldn't we want the
ability to comment on a skeleton design that might not pass unit tests?
On Tuesday, August 20, 2013, Gilles wrote:
> On Tue, 20 Aug 2013 14:55:51 -0700, Ajo Fod wrote:
>
>> I agree that in general tests are necessary to en
I think I like this approach a lot more.
One thing that was left out, and that we discussed a lot, is the need to
support vector operations. This can also be done as an additional
interface. This is critical since it forms the basic building block
upon which all higher-level matrix operations rely. I
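A sketch of the kind of interface I mean (hypothetical; the names are mine,
not an existing commons-math API):

import org.apache.commons.math3.linear.RealVector;

// Hypothetical bulk-operation interface, so an implementation can
// dispatch to optimized kernels (SIMD, GPU, ...) instead of being
// forced through per-entry visits.
interface VectorOps
{
    double dot(RealVector a, RealVector b);               // a . b
    void axpy(double alpha, RealVector x, RealVector y);  // y <- alpha*x + y
}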
I looked only at the GaussNewton class as a general guide to how
things work. I like it a lot! I only wish all of the optimizations
were rewritten in this way.
Several comments:
1) I believe this code can now be simplified:
    // build the linear problem
    final double[] b = new double[nC];
400, Konstantin Berlin wrote:
>>
>> I looked only at the GaussNewton class as a general guide to how
>> things work. I like it a lot! I only wish all of the optimizations
>> were rewritten in this way.
>>
>> Several comments
>>
>> 1) I believe this code can
9:26 AM, Evan Ward wrote:
>
> On 08/26/2013 03:11 PM, Konstantin Berlin wrote:
>> I looked only at the GaussNewton class as a general guide to how
>> things work. I like it a lot! I only wish all of the optimizations
>> were rewritten in this way.
>>
>> Several c
+1
On Nov 8, 2013 3:28 PM, "Thomas Neidhart" wrote:
> On 11/08/2013 09:20 PM, Phil Steitz wrote:
> > On 11/8/13 11:59 AM, Ted Dunning wrote:
> >> On Fri, Nov 8, 2013 at 11:47 AM, Luc Maisonobe
> wrote:
> >>
> is there still consensus that we are going to remove the sparse
> implementat
Hi,
I am really having problems believing that matrix copying is the major
bottleneck in an optimization algorithm. Copying is O(n^2) operations. Surely,
for any problem where performance matters, it is completely dwarfed by the
O(n^3) cost of actually solving the normal equations (for n = 1000, that is on
the order of 10^6 entries copied versus 10^9 floating-point operations for the
solve).
Also, I