I have found it most convenient to make a branch on the code in .julia and 
work from there; then everything works as if it were the normal package.  
The package in the .julia directory is just a clone of the project on 
GitHub.  This is probably morally wrong somehow, but it is convenient.  
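
For example, a minimal sketch (the branch name is made up; Pkg.dir points 
at the installed clone of the package):

    cd(Pkg.dir("Optim"))             # e.g. ~/.julia/v0.3/Optim
    run(`git checkout -b my-fixes`)  # make a working branch in place
    # edit src/, commit as usual; "using Optim" now loads your branch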

You can also include the code instead of using it, if it lives somewhere else.
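
For example (a sketch, using the fork location you describe below):

    include(joinpath(homedir(), "github", "Optim.jl", "src", "Optim.jl"))
    using Optim  # refers to the module just defined by the include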

On Wednesday, June 4, 2014 2:15:32 PM UTC-4, Thomas Covert wrote:
>
> Thanks for the encouragement.  Is there a good Julia-specific reference 
> for how to do this?  So far I have:
>
> 1) Set up a GitHub account
> 2) Clicked the "fork" button on the Optim.jl page and cloned it to my 
> machine (right now it lives in ~/github/Optim.jl/)
> 3) Made my changes to my new version of Optim.jl/src/optimize.jl and 
> committed them to my repository
>
> How do I tell Julia to use my revised version of Optim.jl instead of the 
> one in ~/.julia/?
>
> Note: I also tried Pkg.clone("Optim"), but since I already have it on my 
> system, I had to type Pkg.clone("Optim", "TCOptim").  I figured that after 
> I did this step I could just type "using TCOptim" and Julia would use the 
> new code, but that also doesn't work.
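>
> My guess at why: the clone lands in ~/.julia/TCOptim, but "using TCOptim" 
> looks for a file src/TCOptim.jl there, and the module inside is still 
> named Optim.  A sketch of a workaround along those lines (untested):
>
>     include(joinpath(Pkg.dir("TCOptim"), "src", "Optim.jl"))
>     using Optim  # the cloned code, under its real module name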
>
> Thanks in advance for any help...
>
> -Thom
>
>
> On Wed, Jun 4, 2014 at 10:43 AM, John Myles White <[email protected]> wrote:
>
>> Exposing the option to control the initial approximate Hessian would be a 
>> good idea. If you don’t mind, I’d encourage you to put a little time into 
>> learning GitHub since it will make it a lot easier in the future for you to 
>> help fix problems you’re running into.
>>
>> Fixing our line search routine would also be a good thing, but is 
>> potentially harder. I find the line search code quite difficult to reason 
>> about, so it takes me a long time to convince myself that a proposed change 
>> is safe to make.
>>
>>  — John
>>
>> On Jun 4, 2014, at 8:38 AM, Thomas Covert <[email protected]> wrote:
>>
>> > When calling for a BFGS or L-BFGS optimization in Optim.jl, the initial 
>> > inverse Hessian is set to the identity matrix.  I am running into some 
>> > trouble with this design decision, as my objective function is numerically 
>> > unstable at the first function evaluation point x0 - (I \ g0), where x0 is 
>> > what I know to be a good starting value and g0 is the gradient at that 
>> > point.  If I were to compute a finite-difference Hessian H, this first 
>> > point would be x0 - (H \ g0), which works fine (as far as I can tell).
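>> >
>> > A toy illustration of the overshoot (made-up numbers):
>> >
>> >     x0 = [1.0, 2.0]
>> >     g0 = [4000.0, -3000.0]        # gradient entries dwarf x0
>> >     x0 - eye(2) \ g0              # identity step: [-3999.0, 3002.0]
>> >     H = [2000.0 0.0; 0.0 1000.0]  # stand-in finite-difference Hessian
>> >     x0 - H \ g0                   # Hessian-scaled step: [-1.0, 5.0]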
>> >
>> > The numerical stability issues come from two sources: (1) g0 contains 
>> > entries generally quite a bit larger in absolute value than those of x0, 
>> > and (2) my objective function calls for a Cholesky factorization of a 
>> > matrix that is partially defined by x, and this seems to fail for very 
>> > large absolute values of x.
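>> >
>> > One band-aid I can imagine (a sketch; build_matrix stands in for my real 
>> > code, and I am assuming the line search treats Inf as a "bad point"):
>> >
>> >     function objective(x)
>> >         A = build_matrix(x)    # the x-dependent matrix
>> >         C = try
>> >             cholfact(A)        # fails for extreme values of x
>> >         catch
>> >             return Inf         # let the optimizer back off
>> >         end
>> >         # ... finish computing the objective from the factorization C ...
>> >     end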
>> >
>> > When I peeked at the source for Optim.jl, I noticed that all the 
>> > underlying Newton-style solver routines allow for an explicitly defined 
>> > initial inverse Hessian, but this option does not seem to be exposed in 
>> > optimize().  Is it possible to change this?  I can see how the code would 
>> > change, but I'm not GitHub-proficient (yet), so I don't know how to make 
>> > these changes and offer them as a pull request...
>> >
>> > By the way, MATLAB's fminunc and fmincon don't seem to suffer from this 
>> > problem, since MATLAB's line search operation is able to recover from a 
>> > Cholesky error and just look for a teensy-tiny step size that works.
>> >
>> > Thanks in advance for any help or suggestions.
>> >
>> > -Thom
>>
>>
>
