Are you suggesting a clarification in the documentation or new 
functionality? The docs clearly explain when ForwardDiff is used and what 
the performance drawbacks are. If you know of a package in Julia which we 
could use to perform reverse mode AD on user-defined functions, we'll 
gladly accept pull requests.

On Tuesday, March 29, 2016 at 12:45:02 PM UTC-4, feza wrote:
>
> I suggest a clarification in the documentation regarding which mode of 
> automatic differentiation is used, since this can have a large impact on 
> computation time.
>
> It seems like the current policy ('ForwardDiff is only used for 
> user-defined functions with the autodiff=true option. ReverseDiffSparse is 
> used for all other derivative computations.') is not very well thought out. 
> If the input dimension is much larger than the output dimension, then 
> autodiff=true should default to reverse-mode differentiation, and otherwise 
> to forward-mode differentiation.
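>
> Just to make the scaling concrete, here is a standalone sketch using 
> ForwardDiff directly (outside of JuMP; the function here is made up):
>
>     using ForwardDiff
>
>     # f maps R^n to R. Forward mode does work roughly proportional to n
>     # (it pushes n directional derivatives through f in chunks), whereas
>     # a single reverse sweep would cost about one evaluation of f.
>     f(x) = sum(abs2, x)
>
>     ForwardDiff.gradient(f, rand(10))      # small input: cheap
>     ForwardDiff.gradient(f, rand(10_000))  # work grows with the input dimension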
>
>
>
> On Wednesday, March 9, 2016 at 8:27:06 AM UTC-5, Miles Lubin wrote:
>>
>> On Wednesday, March 9, 2016 at 12:52:38 AM UTC-5, Evan Fields wrote:
>>>
>>> Great to hear. Two minor questions which aren't clear (to me) from the 
>>> documentation:
>>> - Once a user-defined function has been defined and registered, can it 
>>> be incorporated into NL expressions via @defNLExpr?
>>>
>>
>> Yes.
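>>
>> Roughly like this (written from memory, so check the exact macro syntax 
>> against the docs for your JuMP version; the function and variable names 
>> are placeholders):
>>
>>     using JuMP
>>
>>     myf(a, b) = (a - 1)^2 + (b - 2)^2        # an ordinary Julia function
>>     JuMP.register(:myf, 2, myf, autodiff=true)
>>
>>     m = Model()
>>     @defVar(m, x >= 0)
>>     @defVar(m, y >= 0)
>>
>>     # the registered function can appear inside a nonlinear expression...
>>     @defNLExpr(m, myexpr, myf(x, y) + x*y)
>>
>>     # ...and that expression can then be used in the objective or in
>>     # nonlinear constraints
>>     @setNLObjective(m, Min, myexpr)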
>>  
>>
>>> - The documentation references both ForwardDiff.jl and 
>>> ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
>>> should be aware of?
>>>
>>
>> ForwardDiff is only used for user-defined functions with the 
>> autodiff=true option. ReverseDiffSparse is used for all other derivative 
>> computations.
>> Using ForwardDiff to compute the gradient of a user-defined function is 
>> not particularly efficient for functions with high-dimensional input.
>>  
>>
>>> Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve 
>>> what should have been a very simple (2 variable) nonlinear problem. When I 
>>> fed the optimal solution as the starting values for the variables, the 
>>> solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
>>> point by 0.0001 fixed that: solve returned a solution essentially 
>>> instantaneously. Am I doing something dumb?
>>>
>>
>> I've also observed hanging within NLopt but haven't had a chance to debug 
>> it (anyone is welcome to do so!). Hanging usually means that NLopt is 
>> iterating without converging, since NLopt has no output 
>> <https://github.com/JuliaOpt/NLopt.jl/issues/16>. Try setting an 
>> iteration limit.
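>> For example, something along these lines (I'm not certain every option 
>> passes straight through NLoptSolver, so check the NLopt.jl README; the 
>> algorithm choice is just an example):
>>
>>     using JuMP, NLopt
>>
>>     # maxeval caps the number of function evaluations so the solver
>>     # cannot iterate forever without returning
>>     m = Model(solver=NLoptSolver(algorithm=:LD_MMA, maxeval=1000))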
>>
>
