On Tuesday, November 10, 2015 at 8:28:32 PM UTC-8, Alireza Nejati wrote:
> Randy: To answer your question, I'd reckon that the two major gaps in
> julia that TensorFlow could fill are:
>
> 1. Lack of automatic differentiation on arbitrary graph structures.
> 2. Lack of ability to map computations across cpus and clusters.
From reading through some of the TensorFlow docs, it seems to currently only run on one machine. This is where MXNet (and MXNet.jl) has an advantage, as it can run across multiple machines/GPUs (see https://mxnet.readthedocs.org/en/latest/distributed_training.html for an example).

> Funny enough, I was thinking about (1) for the past few weeks and I think
> I have an idea about how to accomplish it using existing JuliaDiff
> libraries. About (2), I have no idea, and that's probably going to be the
> most important aspect of TensorFlow moving forward (and also probably the
> hardest to implement). So for the time being, I think it's definitely
> worthwhile to just have an interface to TensorFlow. There are a few ways
> this could be done. Some ways that I can think of:
>
> 1. Just tell people to use PyCall directly. Not an elegant solution.
> 2. A more julia-integrated interface *a la* SymPy.
> 3. Using TensorFlow as the 'backend' of a novel julia-based machine
> learning library. In this scenario, everything would be in julia, and
> TensorFlow would only be used to map computations to hardware.
>
> I think 3 is the most attractive option, but also probably the hardest to
> do.

So if I understand correctly, we first need bindings to TensorFlow - the TensorFlow project uses SWIG to generate its Python bindings, but there is no Julia backend for SWIG. Then, following approach #3, we'd build something more general on top of those bindings. Julia's macros should allow for some features that would be difficult in C++ or Python. A rough sketch of what the PyCall route might look like is below.
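Just to make option 1 concrete, here's roughly what driving TensorFlow's Python API from Julia via PyCall could look like. This is a minimal, untested sketch: the TensorFlow calls (tf.placeholder, tf.matmul, tf.Session) are from its Python docs, and I'm assuming the Python package is installed and importable.

```julia
# Sketch of option 1: call TensorFlow's Python API through PyCall.
using PyCall
@pyimport tensorflow as tf

# Build a tiny graph: y = W * x, with a constant W and a placeholder x.
x = tf.placeholder(tf.float32, shape=(2, 1))
W = tf.constant(Float32[1 2; 3 4])
y = tf.matmul(W, x)

# TensorFlow, not Julia, decides where the computation actually runs.
sess = tf.Session()
result = sess[:run](y, feed_dict=Dict(x => ones(Float32, 2, 1)))
println(result)  # expected: a 2x1 array, [3.0; 7.0]
```

That would already give Julia users TensorFlow's device placement for free, though a real package would want to wrap the raw PyObjects in Julia types rather than expose them directly.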

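On the macro point, here's a hypothetical sketch (the @graph name and the single rewrite rule are invented for illustration, not from any existing package) of how a Julia macro could turn plain Julia expressions into graph-construction calls at expansion time - something that would need an expression-template or operator-overloading layer in C++ or Python:

```julia
# Hypothetical: rewrite ordinary Julia syntax into TensorFlow
# graph-construction calls before the code ever runs.

rewrite(ex) = ex  # leave symbols and literals untouched
function rewrite(ex::Expr)
    # Rewrite binary `a * b` into `tf.matmul(a, b)`; recurse into
    # everything else unchanged.
    if ex.head == :call && ex.args[1] == :* && length(ex.args) == 3
        return Expr(:call, :(tf.matmul), rewrite(ex.args[2]), rewrite(ex.args[3]))
    end
    return Expr(ex.head, map(rewrite, ex.args)...)
end

macro graph(ex)
    esc(rewrite(ex))
end

# Usage: `@graph y = W * x` expands to `y = tf.matmul(W, x)`,
# so users write plain Julia and get graph ops out.
```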