In Python I either use models written in AMPL, or pure Python models 
with derivatives computed by ADOL-C or CppAD (the latter limited to 
dense derivatives for now because its Python interface hasn't been updated 
in a while). At least in Python, trips in and out of ADOL-C cost almost 
nothing, since they just pass pointers around. At least on OSX, both 
can be installed with a single brew command. And I don't mind dependencies 
if they provide state-of-the-art tools. We're probably coming from 
different angles. My goal is to assemble numerical methods rather than to 
provide a modeling environment. AMPL is by no measure a general-purpose 
programming language (Jon Bentley might have called it a "little 
language"), and it's commercial, but it's amazing at what it does. I'll 
look at the paper; thanks for the pointer.
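
To make the pure-Python side concrete, the workflow is roughly the following 
(a sketch using pyadolc; I'm writing the calls from memory, so treat the exact 
signatures as approximate): you tape the model once through operator 
overloading, and every subsequent derivative evaluation just replays the tape 
inside ADOL-C's C++ core, without calling back into Python.

    import numpy as np
    import adolc  # pyadolc, the Python bindings for ADOL-C

    def rosenbrock(x):
        # toy objective; any pure Python model is taped the same way
        f = 0.0
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i] * x[i]
            e = 1.0 - x[i]
            f = f + 100.0 * d * d + e * e
        return f

    n = 5
    x0 = np.zeros(n)

    adolc.trace_on(0)                # start recording on tape 0
    ax = adolc.adouble(x0)           # active variables
    adolc.independent(ax)
    adolc.dependent(rosenbrock(ax))  # record the objective
    adolc.trace_off()

    # From here on, ADOL-C replays the tape in C++; the Python function
    # above is never called again.
    g = adolc.gradient(0, x0)        # dense gradient
    H = adolc.hessian(0, x0)         # dense Hessian (the CppAD route is similar)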

On Monday, April 14, 2014 5:01:55 PM UTC-7, Tony Kelman wrote:
>
> Miles can speak to the design, but JuMP provides quite a bit more than 
> just AD. It gives a user-friendly modeling environment like Pyomo or 
> Yalmip, or the .mod layer of AMPL, which is neither open-source nor a modern 
> general-purpose programming language. Miles and Iain have a very 
> interesting paper from last year discussing more of the motivation here: 
> http://www.mit.edu/~mlubin/juliacomputing.pdf - although as I understand 
> it the nonlinear section from that paper differs in implementation from 
> what's under development in JuMP right now.
>
> In your Python workflow, were you using Pyomo, or custom generating AMPL 
> format input files in some other way?
>
> There are major performance and accessibility advantages to having the 
> user-facing modeling environment in pure Julia. If both the AD 
> implementation and the solver were using binary C/C++ libraries, that would 
> mean several trips in and out of various libraries and their individual data 
> structures at each solver iteration. That approach also introduces more 
> binary dependencies, which make maintenance and installation more challenging. 
> Julia has better tools for handling binary dependencies in a smooth, 
> cross-platform way than 
> any other environment I've seen, but it takes some investment to set up in 
> the first place. And the upstream library needs to be set up correctly - 
> the vast majority of upstream dependencies have much slower development 
> cycles than Julia does.
>
> Performance benchmarks will surely be forthcoming, comparing the pure-Julia 
> AD to AMPL as well as to established C++ libraries like ADOL-C and CppAD.
>
>
> On Monday, April 14, 2014 4:14:00 PM UTC-7, Dominique Orban wrote:
>>
>> I realize I should probably have posted this on the Julia-Opt group, but 
>> I didn't know about it. I have to look around JuMP some more to understand 
>> its design. I have 10+ years of successful research behind this ampl.jl-style 
>> model interface (in Python only), so I'm quite confident it provides most of 
>> what a continuous optimizer needs, with the possible exception of cone 
>> optimization. I'd also like to ask why you would reimplement reverse-mode AD. 
>> Some people spend their entire careers writing reliable AD software. It's not 
>> a judgement on your code; I would have interfaced ADOL-C, which already 
>> provides much of what optimization needs, including Jacobian-vector and 
>> Hessian-vector products (as well as products with the Hessian of the 
>> Lagrangian).
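>>
>> For example, with pyadolc those matrix-free products are one driver call 
>> each once the functions have been taped; a rough sketch (driver names as in 
>> ADOL-C, signatures from memory, so double-check against the pyadolc docs):
>>
>>     import numpy as np
>>     import adolc  # pyadolc
>>
>>     def cons(x):
>>         # toy constraint residuals, R^3 -> R^2
>>         return [x[0] * x[1] - 1.0, x[1] + x[2]]
>>
>>     x0 = np.ones(3)
>>     adolc.trace_on(1)          # record the constraints on tape 1
>>     ax = adolc.adouble(x0)
>>     adolc.independent(ax)
>>     for ci in cons(ax):
>>         adolc.dependent(ci)
>>     adolc.trace_off()
>>
>>     v = np.array([1.0, 0.0, 0.0])
>>     Jv = adolc.jac_vec(1, x0, v)  # Jacobian-vector product; Jacobian never formed
>>     # hess_vec and lagra_hess_vec give the Hessian(-of-the-Lagrangian)-vector
>>     # products the same way on the appropriate tapes.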
>>
>> On Sunday, April 13, 2014 2:48:26 AM UTC-7, Miles Lubin wrote:
>>>
>>> Hi Dominique,
>>>
>>> This will definitely be very useful for accessing the large array of 
>>> problem instances written in AMPL. 
>>>
>>> As for writing solvers in Julia around this format, I'm admittedly 
>>> biased, but I don't think it's an ideal approach. We already have a 
>>> pure-Julia implementation of AD for exact sparse Hessians in JuMP. That 
>>> said, solvers written in Julia shouldn't be tied to a particular AD 
>>> implementation or modeling language; ideally they will just implement a 
>>> nonlinear MathProgBase interface (which doesn't quite exist yet), on top of 
>>> which they could be called from JuMP or AMPL. I agree with Tony that it 
>>> could be very interesting to use this interface as an interchangeable 
>>> backend for JuMP's AD.
>>>
>>> Also, I'm not sure if you've considered it, but by licensing this 
>>> interface under the GPL, I believe all solvers that use it must also be 
>>> released under the GPL. 
>>>
>>> Miles
>>>
>>> On Sunday, April 13, 2014 6:14:51 AM UTC+1, Dominique Orban wrote:
>>>>
>>>> I just put together a few C and Julia files that let users read models 
>>>> in the AMPL modeling language for optimization.
>>>>
>>>> https://github.com/dpo/ampl.jl
>>>>
>>>> It's not quite a module or a package; please bear with me as I'm still 
>>>> learning Julia. This gives access to a huge collection of problems already 
>>>> written in AMPL, e.g.,
>>>>
>>>> http://orfe.princeton.edu/~rvdb/ampl/nlmodels/index.html
>>>> https://github.com/mpf/Optimization-Test-Problems (many of the same 
>>>> problems, without the solve command)
>>>> http://netlib.org/ampl/models/
>>>> etc.
>>>>
>>>> AMPL computes first and second derivatives for you, so it should be 
>>>> easy to pass such problems to solvers written in Julia, and to write 
>>>> solvers around this model format.
>>>>
>>>> Cheers.
>>>>
>>>
