I'm following deep learning compilers closely. I find the technology behind
S4TF very interesting, in particular their differentiable programming document:
[https://github.com/apple/swift/blob/master/docs/DifferentiableProgramming.md](https://github.com/apple/swift/blob/master/docs/DifferentiableProgramming.md)
however:
* Swift's main platform is the Mac, and Macs are very poor at machine learning
because:
* No OpenMP support in Apple's compiler; you need to build gcc/llvm from
source, which makes using libraries like XGBoost or LightGBM a huge chore
* No Nvidia GPU/CUDA support. For better or worse, most ML code is written for
Nvidia GPUs. AMD is catching up with ROCm, but I'm pretty sure you can't install
it on a Mac because Apple wants to push Metal down Mac users' throats, and it's
a very limited API.
* Swift's threading model is Grand Central Dispatch (libdispatch). Performance
is not the issue, but it requires kernel hooks, which means you need a custom
kernel module on Linux; that is very constraining for cloud computing.
* On Windows you have to use Windows fibers for threading; it feels like a
second-class citizen (Swift for Windows was last updated in 2018, and you have
to compile it yourself).
Also, Swift is seriously lacking a multidimensional array library, a dataframe
library, and a plotting library. Those are the three building blocks of a
machine learning ecosystem.
Lastly Google seems to be betting more on the following:
* JAX [https://github.com/google/jax](https://github.com/google/jax), a
compiler for Python with differentiable programming support
* MLIR
[https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLattner-MLIR.pdf](https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLattner-MLIR.pdf)
which got accepted as an LLVM project. AFAIK Chris Lattner has been working
more on MLIR than on Swift for TensorFlow.
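To make "differentiable programming support" concrete: the core idea in both the Swift document and JAX is that the compiler or runtime can take an ordinary function and mechanically produce its derivative. Here is a minimal pure-Python sketch of that idea using forward-mode automatic differentiation with dual numbers. This is illustrative only: the `Dual` class and the toy `grad` wrapper are my own constructions, not JAX's actual implementation (JAX traces Python functions and compiles them with XLA).

```python
# Toy forward-mode autodiff via dual numbers (illustrative sketch,
# not how JAX or Swift's autodiff actually work internally).
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value = value   # f(x)
        self.deriv = deriv   # f'(x), propagated alongside the value

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        # Sum rule: (f + g)' = f' + g'
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # Product rule: (f * g)' = f' * g + f * g'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def grad(f):
    """Return a function computing df/dx at x (scalar analogue of jax.grad)."""
    return lambda x: f(Dual(x, 1.0)).deriv


# f(x) = 3x^2 + 2x, so f'(x) = 6x + 2
f = lambda x: 3 * x * x + 2 * x
print(grad(f)(5.0))  # 32.0
```

The point of "differentiable programming" as a language feature is that you write `f` once as plain code, and derivatives fall out automatically; Swift's proposal bakes this into the type system with `@differentiable`, while JAX does it by tracing Python.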