The current state of AD in Nim is as follows (unless something new is around that 
I'm not aware of). If you're expecting a fully developed solution, though, you'll 
unfortunately likely be disappointed:

  * Arraymancer has its own AD implementation based on Wengert lists (iirc), which 
is used for its neural network backpropagation. It's very purpose-built though and 
likely not of much use unless you intend to extend Arraymancer: 
<https://github.com/mratsim/Arraymancer/tree/master/src/arraymancer/autograd>
  * `exprgrad` has its own implementation for its JIT compiled ML DSL. Can't 
tell you much about it unfortunately: <https://github.com/can-lehmann/exprgrad>
  * `runtimeGrad` contains three different runtime-based autograd 
implementations. One is a port of Karpathy's micrograd (reverse mode autograd 
based on a `Value = ref object` type that accumulates the gradient), another is a 
dual number implementation (forward mode autograd) and the last uses the complex 
step derivative (which is pretty much a dual number impl in disguise). The dual 
number implementation is pretty robust and straightforward. If such an approach 
(and the performance overhead of doing all calculations on duals instead of 
`float`) is acceptable, it should be easy to extend to support whatever is 
missing; see the sketches after this list for the basic idea. For ML this is _not_ 
interesting of course: <https://github.com/SciNim/runtimeGrad>
  * not autograd, but compile-time symbolic differentiation can be found in 
`astGrad`. Very useful for certain operations, but it only supports 
differentiating math expressions (a toy illustration of the general approach is 
included after this list): <https://github.com/SciNim/astGrad>
  * there is a Zygote.jl-like library in development that @hugogranstrom and I 
started a while back, but it's not super useful yet (it cannot deal with 
conditional statements in procedures yet). I don't suppose you're interested in 
jumping head first into the hellscape of implementing 
<https://arxiv.org/abs/1810.07951> in Nim macros with us?
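For illustration, here is a minimal sketch of the dual number (forward mode) 
approach. This is just the general technique with made-up names, not 
runtimeGrad's actual API:

```nim
import std/math

type
  Dual = object
    val: float   # function value
    der: float   # derivative, carried along through every operation

proc dual(x: float, d = 0.0): Dual = Dual(val: x, der: d)

proc `+`(a, b: Dual): Dual = Dual(val: a.val + b.val, der: a.der + b.der)
proc `*`(a, b: Dual): Dual =
  # product rule
  Dual(val: a.val * b.val, der: a.der * b.val + a.val * b.der)
proc sin(a: Dual): Dual = Dual(val: sin(a.val), der: cos(a.val) * a.der)

proc f(x: Dual): Dual = x * x + sin(x)

# seed the input with derivative 1 to get df/dx at x = 2
let x = dual(2.0, 1.0)
echo f(x).val   # f(2)  = 4.0 + sin(2.0)
echo f(x).der   # f'(2) = 4.0 + cos(2.0)
```

The complex step derivative mentioned above is the same idea if you squint: for a 
tiny step `h` the imaginary part plays the role of the `der` field, 
f'(x) ≈ Im(f(x + ih)) / h, without the cancellation problems of ordinary finite 
differences:

```nim
import std/[complex, math]

proc f(z: Complex64): Complex64 = z*z + sin(z)

let h = 1e-20
echo f(complex64(2.0, h)).im / h   # again ≈ 4.0 + cos(2.0)
```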
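And since `astGrad` was mentioned: the compile-time symbolic approach boils down 
to rewriting the AST of a math expression before the compiler ever evaluates it. 
The toy macro below is emphatically not astGrad's API (which is much more 
complete); it only handles `+`, `*`, literals and a single variable, but it shows 
the principle:

```nim
import std/macros

# Toy compile-time symbolic differentiation with respect to `x`.
proc diff(e, x: NimNode): NimNode =
  if e.kind == nnkIdent and e.eqIdent(x):
    result = newLit(1.0)                          # d/dx x = 1
  elif e.kind in nnkLiterals:
    result = newLit(0.0)                          # d/dx c = 0
  elif e.kind == nnkInfix and e[0].eqIdent("+"):
    result = infix(diff(e[1], x), "+", diff(e[2], x))
  elif e.kind == nnkInfix and e[0].eqIdent("*"):
    # product rule: (u*v)' = u'*v + u*v'
    result = infix(infix(diff(e[1], x), "*", e[2]), "+",
                   infix(e[1], "*", diff(e[2], x)))
  else:
    error("unsupported expression: " & e.repr)

macro ddx(x, body: untyped): untyped =
  ## Expands to the derivative of `body` w.r.t. `x`, built at compile time.
  result = diff(body, x)

let x = 3.0
echo ddx(x, x*x + 2.0*x)   # expands to (1.0*x + x*1.0) + (0.0*x + 2.0*1.0) = 8.0
```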



There may be some other stuff that I'm missing though. Generally I would pick 
the right tool for the job; having a single library for ML and other autograd 
purposes is tricky. Forward mode autograd is often more applicable when you're 
not dealing with large neural networks, since its cost scales with the number of 
inputs rather than the number of outputs, which is exactly the wrong way around 
for a network with millions of parameters and a single scalar loss.
