Randy: To answer your question, I'd reckon that the two major gaps in Julia that TensorFlow could fill are:
1. Lack of automatic differentiation on arbitrary graph structures.
2. Lack of the ability to map computations across CPUs and clusters.

Funnily enough, I've been thinking about (1) for the past few weeks, and I think I have an idea of how to accomplish it using the existing JuliaDiff libraries. As for (2), I have no idea, and that's probably going to be the most important aspect of TensorFlow moving forward (and probably also the hardest to implement).

So for the time being, I think it's definitely worthwhile just to have an interface to TensorFlow. There are a few ways this could be done. Some that come to mind:

1. Just tell people to use PyCall directly (a rough sketch is below). Not an elegant solution.
2. A more Julia-integrated interface, *à la* SymPy.
3. Use TensorFlow as the 'backend' of a novel Julia-based machine learning library. In this scenario, everything would be written in Julia, and TensorFlow would only be used to map computations onto hardware.

I think 3 is the most attractive option, but also probably the hardest to do.
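For what it's worth, here is a minimal sketch of what option 1 looks like today. It assumes PyCall is pointed at a Python environment where the `tensorflow` package is installed and that the graph/session-style (1.x) API is available; the specific ops used are just for illustration.

```julia
using PyCall

# Minimal sketch of option 1: calling TensorFlow directly through PyCall.
# Assumes the Python `tensorflow` package is importable from PyCall's
# Python environment and exposes the graph/session (1.x-style) API.
tf = pyimport("tensorflow")

# Build a tiny graph: c = a + b
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)

# Run the graph in a session and pull the result back into Julia.
sess = tf.Session()
println(sess.run(c))   # 5.0
```

This works, but all the graph construction happens on Python objects, which is part of why I don't find it elegant; options 2 and 3 would wrap this kind of thing in proper Julia types and functions.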