Yes, it's only first derivatives.

It would be possible to add second (and third) derivatives, but because the
main point is taking advantage of SSE/AVX registers and ops, the layout only
fits neatly when the components fill a power-of-two number of lanes: (val,
derivative) or (val, first, second, third).
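To make the lane-packing point concrete, here is a minimal sketch (a
hypothetical helper, not the library's actual internals) of two dual numbers
a + b*eps and c + d*eps packed into one AVX register as [a, b, c, d]:

    #include <immintrin.h>

    // Two dual<double> values share one 256-bit register: [a, b, c, d].
    // Addition is lane-wise, so a single vector add handles both duals:
    // (a+c) + (b+d)*eps, and likewise for the second pair.
    static inline __m256d dual2_add(__m256d x, __m256d y) {
      return _mm256_add_pd(x, y);
    }

A (val, first, second) triple would leave a lane dead or straddle registers,
which is the awkwardness mentioned above.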

It would be great to add higher orders and multivariate support, but I don't
anticipate my Copious Free Time enabling that anytime soon.  If someone
wanted to chip in, that would be fancy.  There are several similar libraries
that have those features; the unique things here are the explicit
vectorization and leveraging Eigen's cache-optimized matrix ops.



On Tue, Dec 3, 2019 at 4:20 PM Ian Bell <ian.h.b...@gmail.com> wrote:

> That's great, I'll check out your implementation.  This still only gives
> first derivatives, right?  Any plans for multivariate derivatives?
>
> On Tue, Dec 3, 2019, 10:16 AM Michael Tesch <tes...@gmail.com> wrote:
>
>> Yes, of course.  It's mentioned in the paper.  The dual approach is
>> faster (one multiplication fewer per op) and exact (it doesn't rely on a
>> finite step h).  The only place where the complex-step might be better is
>> when a) your function is real, b) you don't need much accuracy, and c)
>> you have BLAS complex-complex matrix operations that are faster than
>> Eigen's optimization of the dual-valued matrix ops.
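>>
>> To put the op-count claim in code, a scalar sketch (illustrative only,
>> not the library's implementation):
>>
>>     // Dual multiply: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
>>     // -- three real multiplies, and the eps part is the exact derivative.
>>     struct dual { double a, b; };
>>     dual mul(dual x, dual y) {
>>       return { x.a * y.a, x.a * y.b + x.b * y.a };
>>     }
>>
>>     // Complex multiply: (a + b*i)(c + d*i) = (ac - bd) + (ad + bc)*i
>>     // -- four real multiplies, and complex-step still carries the step h.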
>>
>> There is/was some performance problem with Eigen's optimization of
>> mat-mult of complex-valued matrices (and by extension, my dual-valued
>> Eigen matrices).  The real-valued operations are/were very close to
>> OpenBLAS on my machines, but the complex-valued ops are quite a bit
>> slower.  I mentioned it in the Eigen chat room once, but I didn't have
>> time to track it down and write a proper bug report.  (Sorry, it will
>> happen some day... in case it hasn't already been fixed.)
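>>
>> If anyone wants to poke at it, a minimal timing sketch along these lines
>> (not the paper's benchmark; size and setup are arbitrary) should show the
>> real-vs-complex gap:
>>
>>     #include <Eigen/Dense>
>>     #include <chrono>
>>     #include <complex>
>>     #include <iostream>
>>
>>     // Time one n x n matrix product for scalar type T.
>>     template <typename T> double time_gemm(int n) {
>>       using Mat = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>;
>>       Mat A = Mat::Random(n, n), B = Mat::Random(n, n);
>>       auto t0 = std::chrono::steady_clock::now();
>>       Mat C = A * B;
>>       auto t1 = std::chrono::steady_clock::now();
>>       std::cout << C(0, 0) << " ";  // keep the product from being optimized away
>>       return std::chrono::duration<double>(t1 - t0).count();
>>     }
>>
>>     int main() {
>>       std::cout << "double:  " << time_gemm<double>(1024) << " s\n";
>>       std::cout << "complex: " << time_gemm<std::complex<double>>(1024) << " s\n";
>>     }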
>>
>>
>>
>> On Tue, Dec 3, 2019 at 4:04 PM Ian Bell <ian.h.b...@gmail.com> wrote:
>>
>>> Have you compared against naive complex-step derivatives:
>>> https://sinews.siam.org/Details-Page/differentiation-without-a-difference
>>> ?  For first derivatives, CSD is trivial to apply and doesn't require
>>> any additional machinery.  I think that would serve as a nice benchmark.
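>>>
>>> For reference, the whole trick fits in a few lines (a generic sketch,
>>> not tied to either library):
>>>
>>>     #include <complex>
>>>     #include <iostream>
>>>
>>>     // Complex-step derivative: f'(x) ~= Im(f(x + i*h)) / h.  There is
>>>     // no subtractive cancellation, so h can be absurdly small.
>>>     int main() {
>>>       auto f = [](std::complex<double> x) { return std::sin(x) * std::exp(x); };
>>>       double x = 1.0, h = 1e-200;
>>>       double df = std::imag(f({x, h})) / h;
>>>       std::cout << df << "\n";  // ~ (cos(1) + sin(1)) * e
>>>     }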
>>>
>>> On Tue, Dec 3, 2019, 8:36 AM Michael Tesch <tes...@gmail.com> wrote:
>>>
>>>> Hello,
>>>>
>>>> I've written (yet another!) Dual Number implementation for automatic
>>>> differentiation.  It is meant to be used as the value-type in Eigen
>>>> matrices, and has templates for vectorization (shockingly) similar to (and
>>>> based on) Eigen's complex-type vectorizations.  It is quite fast for
>>>> first-order forward diff, and imho pretty easy to use.  There are also
>>>> SSE/SSE3/AVX vectorizations for std::complex<dual< float | double >> types.
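>>>>
>>>> Usage looks roughly like this (header and member names here are from
>>>> memory of the docs -- check the README; duals::duald and dpart() are
>>>> assumed):
>>>>
>>>>     #include <duals/dual>   // header name assumed, per the repo
>>>>     #include <Eigen/Dense>
>>>>     using duals::duald;     // dual<double>, name assumed
>>>>
>>>>     int main() {
>>>>       duald x(2.0, 1.0);          // seed x = 2 + 1*eps
>>>>       duald f = x * x + 3.0 * x;  // f.dpart() == 7 == f'(2)
>>>>       // ...or as an Eigen value-type (may need the library's Eigen
>>>>       // support header, name per the README):
>>>>       Eigen::Matrix<duald, 2, 2> A;
>>>>     }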
>>>>
>>>> The library is here: https://gitlab.com/tesch1/cppduals , and there's
>>>> a small paper in JOSS too: https://doi.org/10.21105/joss.01487
>>>>
>>>> I hope this could be useful for someone and would be glad for any
>>>> feedback, improvements, etc.
>>>>
>>>> It would be interesting to compare this approach to others; by
>>>> hand-wavy arguments, I believe it should ultimately be faster in
>>>> certain cases.
>>>>
>>>> Cheers,
>>>> Michael
>>>>
>>>>
