The function arguments are modified in place for efficiency, to avoid allocating temporary arrays on every call. In principle this isn't required.
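To make the convention concrete, here is a minimal sketch of the in-place style: the caller allocates the output vector once and the function overwrites it on each call. The names (`f!`, `fvec`) are illustrative only, not necessarily NLsolve's exact API.

```julia
# Hypothetical in-place residual function: writes results into fvec
# instead of allocating a new vector on every evaluation.
function f!(x, fvec)
    fvec[1] = x[1]^2 + x[2]^2 - 4.0
    fvec[2] = x[1] - x[2]
    return fvec
end

x = [1.0, 2.0]
fvec = similar(x)   # allocated once, reused across solver iterations
f!(x, fvec)         # fvec now holds [1.0, -1.0]
```

The `!` suffix is the usual Julia convention for functions that mutate their arguments.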
On Tuesday, February 11, 2014 11:19:09 PM UTC-5, Sam Urmy wrote:
>
> Wow, very cool. I learned something new today...
>
> Miles, in autodiff.jl
> <https://github.com/mlubin/NLsolve.jl/blob/4340c378ebb7b3864c88c1590f4385c677b37893/src/autodiff.jl>,
> are the function arguments all modified in place just to save memory in the
> case of large vectors, or is there some other reason?
>
> On Tue, Feb 11, 2014 at 9:34 PM, Miles Lubin <[email protected]> wrote:
>
>> We're still in the process of putting together a nice interface for this,
>> but automatic differentiation is a good option that isn't available in most
>> other languages. It will give you an *exact* numerical derivative, not
>> subject to approximation error from finite differences. As an example of
>> how to use the DualNumbers package to compute a Jacobian matrix, see
>> https://github.com/EconForge/NLsolve.jl/pull/6. If you have any
>> questions on this, I'm happy to help out.
>>
>> As a fallback, the Calculus package has routines for computing a Jacobian
>> using finite differences.
>>
>> On Tuesday, February 11, 2014 5:25:20 PM UTC-5, Mauro wrote:
>>
>>> You could try automatic differentiation. Have a look at the example in
>>> the readme of https://github.com/scidom/DualNumbers.jl
>>>
>>> On Tue, 2014-02-11 at 21:35, [email protected] wrote:
>>> > I imagine this exists somewhere already, but I haven't been able to
>>> > find it: is there a function that takes a vector-valued function and a
>>> > point in its domain, and returns the Jacobian matrix at that point?
>>> >
>>> > Thanks~
>>> >
>>> > Sam
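For anyone following along, the dual-number idea behind the DualNumbers approach can be sketched in a few lines without any packages: carry a derivative part alongside each value, overload arithmetic so the chain rule is applied automatically, and do one sweep per input component to build the Jacobian column by column. This is a self-contained illustration, not the DualNumbers.jl or NLsolve.jl API.

```julia
# Minimal forward-mode dual number: val is the value, der the derivative part.
struct Dual
    val::Float64
    der::Float64
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
Base.:-(a::Dual, b::Dual) = Dual(a.val - b.val, a.der - b.der)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.der + a.der * b.val)

# Jacobian of f: R^n -> R^m at x: seed der = 1 on one input at a time.
function jacobian(f, x::Vector{Float64})
    n = length(x)
    cols = Vector{Vector{Float64}}()
    for j in 1:n
        xd = [Dual(x[i], i == j ? 1.0 : 0.0) for i in 1:n]
        push!(cols, [y.der for y in f(xd)])
    end
    return hcat(cols...)
end

# Example: f(x) = [x1*x2, x1 - x2]; exact Jacobian at (3, 2) is [2 3; 1 -1].
J = jacobian(x -> [x[1] * x[2], x[1] - x[2]], [3.0, 2.0])
```

Unlike finite differences, the entries of `J` are exact up to floating-point roundoff, since each overloaded operation propagates the true derivative.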
