Hi,

Thanks, I will make the changes. It will take some time, though...

Regarding the "bounds" in nmsmax I did it in a primitive way 
that is enough for my needs:

f(1) = dirn*feval(fun,x,varargin{:});

--> [f(1), param] = feval(fun,x,varargin{:});
    f(1) = dirn*f(1);

where the function fun renormalises the parameters to lie within the 
'bounds' and sends them back to nmsmax.

In my case these are not 'bounds' (b_min <= p <= b_max) in the strict 
sense; rather, I need \sum{some_parameters} = 1.
(I can probably try something like param(1) = 1 - \sum_{i>=2} param(i) 
with 0 <= param(i) <= 1 for i >= 2.)
I have to think a bit...
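
Concretely, the user function could do the renormalisation itself and 
return the adjusted parameters as a second output, matching the modified 
feval line above. A minimal sketch (the name sum_to_one_objective and the 
placeholder objective are only illustrative):

## Sketch of a renormalising objective for the modified nmsmax call above.
## The quadratic "objective" is only a placeholder to keep the sketch runnable.
function [f, p] = sum_to_one_objective (p)
  p(2:end) = min (max (p(2:end), 0), 1);   # keep the free parameters in [0, 1]
  p(1) = 1 - sum (p(2:end));               # p(1) absorbs the sum-to-one constraint
  f = -sum ((p - 1 / numel (p)) .^ 2);     # placeholder; nmsmax maximises f
endfunction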

Regards,

Bertrand




---

http://broessli.redbubble.com/

--- On Tue, 8/4/09, Olaf Till <olaf.t...@uni-jena.de> wrote:

From: Olaf Till <olaf.t...@uni-jena.de>
Subject: Re: [OctDev] package contribution
To: octave-dev@lists.sourceforge.net
Date: Tuesday, August 4, 2009, 1:02 PM

On Mon, Aug 03, 2009 at 01:49:45PM +0200, Olaf Till wrote:
> On Mon, Aug 03, 2009 at 01:07:23PM +0200, Olaf Till wrote:
> > I don't know if there are theoretical objections against bounds in
> > Levenberg-Marquardt optimization, but I also ended up introducing
> > bounds into leasqr for my use; I would think this is general
> > purpose.
> 
> Sorry, I remember now that there were some problems. Often, a bound is
> desirable because some parameters do not produce useful results (of
> the user function) outside a certain range (e.g. a division by zero
> might occur). In such a case, the bounds should already be respected
> during gradient determination, not only later in the 'step'. This
> would mean that if a parameter is already at its bound, only a
> _one_-sided gradient can be computed for this parameter, on the
> 'side' away from the bound. Also, with two-sided bounds, the possible
> parameter change for gradient estimation could become small (too
> small for some problems?). Since gradient estimation in leasqr is
> currently done by a different function (dfdp), which knows nothing
> about bounds, implementation is not so easy. One could pass the
> bounds argument to dfdp as well, but there might exist user
> replacements for dfdp (there is an argument to leasqr just to choose
> this function) which do not honour the 'bounds' argument ... so
> people cannot expect bounds to work without changing their
> dfdp-replacement.

Some new suggestions.


-- "bounds" in leasqr

Attached are new versions of leasqr.m and dfdp.m. They handle "bounds"
in steps and in gradient estimation. The "options" argument is used
for this and is now a structure of options. For backwards
compatibility, the old options matrix is still recognized. A lot of
cosmetic changes have been made to make leasqr.m easier to edit under
Emacs, so almost every line has changed and it's not much use making
diffs :-(. I have tested it a bit. Since leasqr does not seem to have
an "assigned" active maintainer, I would commit it and change the
version number of the optim package, but Soeren, please decide on that.
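
For illustration, a call could then look roughly like the following,
assuming the new options structure uses a field named "bounds" holding a
two-column matrix of lower and upper bounds (the exact field names are of
course defined in the attached files), and fitting a simple exponential:

## Hypothetical call sketch with bounds passed through the options structure.
x = (0:0.1:5)';
y = 2.5 * exp (-1.3 * x) + 0.05 * randn (size (x));
model = @ (x, p) p(1) * exp (-p(2) * x);
pin = [1; 1];                        # initial parameter guess
opts.bounds = [0, 10; 0, 5];         # 0 <= p(1) <= 10, 0 <= p(2) <= 5
[f, p] = leasqr (x, y, pin, model, 0.0001, 20, ones (size (y)), ...
                 0.001 * ones (size (pin)), "dfdp", opts);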

Roessli, if there is nothing wrong with the new leasqr version, I
suggest you rewrite your code to make use of it, and make it depend on
the new version of the optim package. The Bragg-related computations
you had put inside it can surely be done _after_ calling leasqr?

-- "bounds" in nmsmax

I am not familiar with the method in nmsmax, but at first glance I do
not see where you introduced "bounds" ...

-- keeping parameters "fixed"

The way you have done this in nmsmax.m could break existing code, since
the new argument is not the last. And it cannot be made the last
because of the already existing varargin argument. (BTW, why not use an
argument which is a logical vector, true for each parameter to be
fixed?)  To be on the safe side, I would suggest you solve the
"fixing" problem with a wrapper, which could be left in your package
and need not necessarily be put into the optim package. Something like
the following sketch (I may have overlooked some details):

function [outp1, outp2, ...] = optim_wrapper (fixed, varargin)
## according to index "fixed", remove fixed parameters and possibly
## other related vector-elements
...
## replace the name of (or handle to) the user function with an anonymous
## function like this (it calls the subfunction defined below)
func = @ (x, p) local_func (x, p, user_func, fixed, values_of_fixed_parameters);
...
## call optimization routine
[outp1, outp2, ...] = call_optimizer (varargin{:});
## reintroduce fixed parameters into output arguments if necessary
...
## define subfunction
function ret = local_func (x, p, user_func, fixed, values_fixed)
## reintroduce fixed parameters into "p" according to "fixed" and
## "values_fixed"
...
## call user function
ret = feval (user_func, x, p);
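
For nmsmax specifically, a filled-in version of this sketch could look
roughly as follows. The names fixed_wrapper and wrapped_func are only
illustrative; it assumes the nmsmax quoted above (which passes any extra
arguments on to the user function) and an objective that takes only the
parameter vector:

## Illustrative wrapper keeping some parameters fixed while nmsmax
## optimizes the rest; the extra-argument pass-through of nmsmax routes
## the bookkeeping to the subfunction.
function [p_full, fmax, nf] = fixed_wrapper (user_func, p0, fixed, stopit, savit)
  ## "fixed" is a logical vector, true for each parameter to keep fixed
  p_fixed = p0(fixed);
  [p_free, fmax, nf] = nmsmax (@wrapped_func, p0(! fixed), stopit, savit, ...
                               user_func, fixed, p_fixed);
  ## reintroduce the fixed parameters into the result
  p_full = zeros (size (fixed));
  p_full(fixed) = p_fixed;
  p_full(! fixed) = p_free;
endfunction

function f = wrapped_func (p_free, user_func, fixed, p_fixed)
  ## rebuild the full parameter vector and call the real objective function
  p = zeros (size (fixed));
  p(fixed) = p_fixed;
  p(! fixed) = p_free;
  f = feval (user_func, p);
endfunction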
