On 10/11/2011 23:39, Gilles Sadowski wrote:
> Hi.
> 
>> [...]
>>>>>
>>>>>>
>>>>>> Another argument for not hiding the mapping is that another poor
>>>> man's
>>>>>> approach is to use a penalty (when the optimizer's "guess" falls
>>>> out of
>>>>>> bounds) and I wonder whether some algorithm could behave better
>>>> with one or
>>>>>> the other approach.
>>>>>
>>>>> As far as I understand, direct methods that do not rely on
>>>> derivatives
>>>>> support well penalty functions. Gradients based methods do not.
>>>>
>>>> We could thus implement two adapters, one that will do a mapping of the
>>>> variables and another that will use the penalty approach.
>>
>> Done in subversion repository as of r1200516. 
> 
> This is not what I had in mind. Not having thought much about it, I thought
> that we would have to implement an adapter around the optimizer (i.e. that
> would transform a "...Optimizer" into a "...SimpleBoundsOptimizer").
> 
> What you've coded would have been part of the internal workings of those
> adapters.
> Now, as I said, this is not very clear in my mind at this moment... Maybe
> all that is needed is what you already provided.

It could be done just as you say: building optimizer adapters on top of
the function adapters. The documentation should be updated, though.
An interesting side effect would be that we could also do the
encoding/decoding of the start point and the result point in the case of
the mapping adapter. For now, the user has to do it himself, which is
cumbersome.

So +1 to add this.
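
Something like this, roughly (all the names below are invented for the
example; they are not the committed API, and the real signatures may
differ):

    import org.apache.commons.math.FunctionEvaluationException;
    import org.apache.commons.math.analysis.MultivariateRealFunction;
    import org.apache.commons.math.optimization.GoalType;
    import org.apache.commons.math.optimization.MultivariateRealOptimizer;
    import org.apache.commons.math.optimization.OptimizationException;
    import org.apache.commons.math.optimization.RealPointValuePair;

    /** Placeholder for the mapping function adapter from r1200516;
     *  the method names here are guesses, not the committed ones. */
    interface BoundsMapping extends MultivariateRealFunction {
        double[] boundedToUnbounded(double[] boundedPoint);
        double[] unboundedToBounded(double[] unboundedPoint);
    }

    /** Sketch of an optimizer adapter built on top of the function
     *  adapter, hiding the encoding/decoding from the user. */
    class SimpleBoundsOptimizer {
        private final MultivariateRealOptimizer optimizer;
        private final BoundsMapping mapping;

        SimpleBoundsOptimizer(MultivariateRealOptimizer optimizer,
                              BoundsMapping mapping) {
            this.optimizer = optimizer;
            this.mapping = mapping;
        }

        RealPointValuePair optimize(GoalType goal, double[] boundedStart)
            throws FunctionEvaluationException, OptimizationException {
            // encode the user's start point into the unbounded space
            double[] start = mapping.boundedToUnbounded(boundedStart);
            // run the underlying (unbounded) optimizer on the adapted function
            RealPointValuePair p = optimizer.optimize(mapping, goal, start);
            // decode the result back into the bounded space
            return new RealPointValuePair(mapping.unboundedToBounded(p.getPoint()),
                                          p.getValue());
        }
    }

That way the user would never see the unbounded variables at all.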

> 
> One minor point: I'd put these adapters in a new package "optimization.util"
> (instead of in "optimization.direct").
> 
> Second, maybe less minor, point (and somewhat related to the above
> suggestion) is that your adapters are "MultivariateRealFunction"s. What if
> one wants to try this approach with an optimizer that would need the
> derivatives ("DifferentiableMultivariateRealFunction")?
> Admittedly, there are no such optimizers currently but the "Abstract..."
> base class is already available; so, either we think that for some reason,
> there won't be such optimizers (and we should probably remove the unused
> stuff) or we should foresee this option and the adapters should be
> genericized somehow (like the "BaseAbstractScalarOptimizer<...>" in package
> "optimiszation.direct).
> 
> Does this make sense?

It would only work for the mapping adapter. The penalty adapter cannot be
differentiated, since the penalty breaks smoothness at the bounds.
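
To make the difference concrete, here is a toy sketch (the sigmoid
below is just one possible mapping, not necessarily the committed one):

    /** Toy illustration of why the mapping adapter can be
     *  differentiated while the penalty adapter cannot. */
    class MappingVsPenalty {
        /** Smooth mapping of the whole real line onto (lo, hi). */
        static double map(double y, double lo, double hi) {
            return lo + (hi - lo) * (1 + Math.tanh(y)) / 2;
        }
        /** Its derivative: strictly positive and defined for all y,
         *  so the chain rule gives the gradient of f(map(y)). */
        static double mapDerivative(double y, double lo, double hi) {
            double sech = 1 / Math.cosh(y);
            return (hi - lo) * sech * sech / 2;
        }
        /** Penalty version: jumps from the true value f to
         *  offset + scale * overshoot as soon as x leaves [lo, hi],
         *  so no derivative exists at the bounds. */
        static double penalized(double x, double f, double lo, double hi,
                                double offset, double scale) {
            if (x >= lo && x <= hi) {
                return f; // inside the bounds: the true objective value
            }
            double overshoot = (x < lo) ? (lo - x) : (x - hi);
            return offset + scale * overshoot; // discontinuous at lo and hi
        }
    }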

Luc

> 
>> [...]
> 
> 
> Gilles
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
