It seems like this discussion is heading down a rabbit hole, but let me 
address the original question. 

AMPL is certainly the right choice from a maturity perspective, but AMPL 
itself is closed source, and the AMPL solver library is written in 
seemingly intentionally obfuscated C, which leaves a lot to be desired 
with respect to extensibility. It's hard to hook in user-provided 
functions. It's hard to query Hessians of subblocks, which is essential 
for Schur complement decomposition of large-scale structured models. The 
need to communicate through NL files often causes I/O bottlenecks when 
running on high-performance clusters. The recent development of CasADi is 
another indicator that mature tools like AMPL and ADOL-C aren't 
satisfactory in all applications.
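
To make the Schur complement point concrete, consider a block-arrowhead 
KKT matrix with diagonal blocks A_i, border blocks B_i, and corner block 
C. Its Schur complement S = C - sum_i B_i' * inv(A_i) * B_i can be 
accumulated one term at a time, but only if each subblock Hessian A_i is 
available on its own. A toy sketch in Julia (my own illustration; the 
function and types here are hypothetical, not from AMPL or JuMP):

    # Toy sketch: accumulate the Schur complement of a block-arrowhead
    # system term by term. Each term touches exactly one subblock Hessian
    # A[i], which is why per-block Hessian access matters.
    function schur_complement(A::Vector{Matrix{Float64}},
                              B::Vector{Matrix{Float64}},
                              C::Matrix{Float64})
        S = copy(C)
        for i in 1:length(A)
            S -= B[i]' * (A[i] \ B[i])   # B_i' * inv(A_i) * B_i
        end
        return S
    end

Each term in the loop is independent, so the terms can be formed in 
parallel on a cluster, provided the AD tool can evaluate each A_i 
separately.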

Julia's tagline is "a fresh approach to technical computing", and in that 
vein I think it's time to take a fresh look at AD. As Tony mentioned, 
Julia has a number of new technical features that, I believe, can really 
change the way AD is implemented in practice. For example, forward-mode 
AD is already built into the base optimization library (Optim) and works 
when user functions are written generically; there's no need to modify 
the user code to use a particular "AD double" type. This has been under 
discussion in scipy since before Julia was publicly announced 
(https://github.com/scipy/scipy/issues/2035).
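
To illustrate why no special "AD double" type needs to appear in user 
code, here is a minimal dual-number sketch (my own toy example, not 
Optim's actual implementation):

    # A dual number carries a derivative alongside the value; multiple
    # dispatch lets unmodified generic functions accept it.
    struct Dual
        val::Float64   # function value
        der::Float64   # derivative value
    end

    import Base: +, *, sin
    +(a::Dual, b::Dual) = Dual(a.val + b.val, a.der + b.der)
    *(a::Dual, b::Dual) = Dual(a.val * b.val, a.der * b.val + a.val * b.der)
    sin(a::Dual) = Dual(sin(a.val), cos(a.val) * a.der)

    # A user function written generically, with no AD type in sight:
    f(x) = x * sin(x)

    # Seeding der = 1 evaluates f and f' together:
    d = f(Dual(2.0, 1.0))
    d.val   # f(2.0)
    d.der   # f'(2.0) == sin(2.0) + 2.0*cos(2.0)

This is the same dispatch mechanism that lets a generic user function run 
on an AD type with no modification at all.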

With easy-to-access expression manipulation together with JIT 
compilation, we can design and compile specialized functions that 
evaluate the derivatives of a particular model. This is certainly not a 
new idea, but now it's actually easy to implement. ReverseDiffSparse.jl 
does this in about 1,300 lines of code, 300 of which are graph coloring 
routines that exploit sparsity. I'm not claiming that it's 
production-ready at this point, or faster than AMPL (yet), but it does 
work. AD is indeed its own field, and we do intend to document and 
publish our approaches.
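
To sketch what I mean by expression manipulation plus JIT (a toy 
symbolic differentiator, nothing like ReverseDiffSparse.jl's actual 
internals): walk the expression tree, build the derivative expression, 
and let the compiler turn it into an ordinary specialized function:

    # differentiate ex with respect to the symbol x, returning a new
    # expression (or a numeric constant)
    function differentiate(ex, x::Symbol)
        if ex == x
            return 1
        elseif isa(ex, Number) || isa(ex, Symbol)
            return 0                      # constants and other variables
        elseif isa(ex, Expr) && ex.head == :call
            op = ex.args[1]
            if op == :+
                return Expr(:call, :+,
                            [differentiate(a, x) for a in ex.args[2:end]]...)
            elseif op == :* && length(ex.args) == 3
                u, v = ex.args[2], ex.args[3]   # product rule
                return :($(differentiate(u, x)) * $v + $u * $(differentiate(v, x)))
            elseif op == :sin
                return :(cos($(ex.args[2])) * $(differentiate(ex.args[2], x)))
            end
        end
        error("don't know how to differentiate $ex")
    end

    # Build the derivative of x*sin(x) and compile it into a function:
    dex = differentiate(:(x * sin(x)), :x)  # :(1 * sin(x) + x * (cos(x) * 1))
    @eval fprime(x) = $dex
    fprime(2.0)   # sin(2.0) + 2.0*cos(2.0)

Because @eval goes through the same JIT as handwritten code, the 
generated derivative is a compiled Julia function; the hard part in a 
production tool is generating good code (common subexpressions, 
sparsity), not making code generation possible in the first place.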

On Monday, April 14, 2014 7:14:00 PM UTC-4, Dominique Orban wrote:
>
> I realize I should probably have posted this on the Julia-Opt group, but I 
> didn't know about it. I have to look around JuMP more to understand its 
> design. I have 10+ years of successful research with the ampl.jl model (only 
> in Python) so I'm quite confident that it has most of what a continuous 
> optimizer needs, with the exception perhaps of cone optimization. Also I'd 
> like to ask why you would reimplement backward AD. Some people spend their 
> entire career writing reliable AD software. It's not a judgement on your 
> code; I would have interfaced ADOL-C, which already provides much of what 
> optimization needs, including Jacobian-vector products, Hessian-vector 
> products, and products with the Hessian of the Lagrangian.
>
> On Sunday, April 13, 2014 2:48:26 AM UTC-7, Miles Lubin wrote:
>>
>> Hi Dominique,
>>
>> This will definitely be very useful for accessing the large array of 
>> problem instances written in AMPL. 
>>
>> As for writing solvers in Julia around this format, I'm admittedly biased 
>> but I don't think it's an ideal approach. We already have a pure-Julia 
>> implementation of AD for exact sparse Hessians in JuMP. That said, solvers 
>> written in Julia shouldn't be tied to a particular AD implementation or 
>> modeling language; ideally they will just implement a nonlinear 
>> MathProgBase interface (which doesn't quite exist yet), on top of which 
>> they could be called from JuMP or AMPL. I agree with Tony that it could be 
>> very interesting to use this interface as an interchangeable backend for 
>> JuMP's AD.
>>
>> Also, I'm not sure if you've considered it, but by licensing this 
>> interface under the GPL, you require that all solvers using it also be 
>> released under the GPL, I believe. 
>>
>> Miles
>>
>> On Sunday, April 13, 2014 6:14:51 AM UTC+1, Dominique Orban wrote:
>>>
>>> I just put together a few C and Julia files that let users read models 
>>> in the AMPL modeling language for optimization.
>>>
>>> https://github.com/dpo/ampl.jl
>>>
>>> It's not quite a module or a package; please bear with me as I'm still 
>>> learning Julia. This gives access to a huge collection of problems already 
>>> written in AMPL, e.g.,
>>>
>>> http://orfe.princeton.edu/~rvdb/ampl/nlmodels/index.html
>>> https://github.com/mpf/Optimization-Test-Problems (many of the same 
>>> problems, without the solve command)
>>> http://netlib.org/ampl/models/
>>> etc.
>>>
>>> AMPL computes first and second derivatives for you, so it should be easy 
>>> to pass such problems to solvers written in Julia, and to write solvers 
>>> around this model format.
>>>
>>> Cheers.
>>>
>>
