On Monday, April 14, 2014 05:30:02 PM Dominique Orban wrote:
> At least in Python, trips in and out of ADOL-C almost count
> for nothing as they're just passing pointers around.
That still implies a function call; in Python you might not notice, because most operations trigger function calls anyway. In Julia, however, there are times when the operation can be inlined, and when it can be, that sometimes brings a noticeable performance benefit.

Back before we directly called LLVM's powi intrinsic, writing x^3 was about 3-fold slower than x*x*x, even though x^3 was implemented by calling a C-library function that executed x*x*x (i.e., it did not use the general-power algorithm, which was of course many times slower yet). People in the Julia community tend to notice things like factors of 3 in performance :).

That said, depending on the circumstances a mature implementation in C might be much faster than a less mature implementation in Julia. But overall I agree with others that a gradual migration towards Julia implementations makes a lot of sense, even if it transiently feels like reimplementing the wheel.

Want to run your algorithm with BigFloat precision? In Julia that might be a 1-line change, but unless the C library has been compiled with that as an option, you're stuck. Heck, some libraries assume double and don't even let you use floats.

--Tim
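
P.S. To make the x^3 vs. x*x*x comparison concrete, here's a minimal
micro-benchmark sketch (the function names are made up for illustration,
and now that x^3 goes through the powi intrinsic the gap should be much
smaller than the old 3-fold figure; the exact ratio depends on your Julia
version and hardware):

    # Sum x^3 many times using the ^ operator
    function cube_pow(x, n)
        s = zero(x)
        for i = 1:n
            s += x^3
        end
        s
    end

    # Same loop with explicit multiplications
    function cube_mul(x, n)
        s = zero(x)
        for i = 1:n
            s += x*x*x
        end
        s
    end

    cube_pow(1.0001, 10^7); cube_mul(1.0001, 10^7)   # warm up (force compilation)
    @time cube_pow(1.0001, 10^7)
    @time cube_mul(1.0001, 10^7)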
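
P.P.S. And to illustrate the BigFloat point: any generic Julia function you
write is automatically reusable at higher precision. A toy sketch (mynorm is
just a made-up example, not anything in Base):

    # Nothing here commits to Float64
    function mynorm(v)
        s = zero(eltype(v))
        for x in v
            s += x*x
        end
        sqrt(s)
    end

    v = rand(5)
    mynorm(v)                   # Float64 result
    mynorm(map(BigFloat, v))    # the "1-line change": same code, BigFloat precision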
