I'm afraid it is not as easy as simply wrapping "existing" functionality, unless one is OK with a lot of wrapper packages for C backends. I do realize that a lot of people might be OK with this, but to some (me included) that would defeat the purpose of using Julia in the first place. I really love Julia, but I am not going to use Julia just for the sake of using Julia. I do agree, though, that wrapping the C backends might be a good first step. The thing is, one has to find someone interested in implementing that.
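
For concreteness, "wrapping a C backend" means writing shims like the one below for every function the C library exports; the library name and signature here are made up purely for illustration:

    # Hypothetical sketch: exposing one function of a C machine learning
    # backend ("libsvmtoy", a made-up library) through Julia's ccall.
    # A real wrapper package is hundreds of shims like this, plus type
    # conversion and memory management.
    function svm_train_error(X::Matrix{Float64}, y::Vector{Float64}, C::Float64)
        # assumes the C side exposes:
        #   double svm_train_error(double* X, double* y, int n, int d, double C)
        ccall((:svm_train_error, "libsvmtoy"), Cdouble,
              (Ptr{Float64}, Ptr{Float64}, Cint, Cint, Cdouble),
              X, y, size(X, 1), size(X, 2), C)
    end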

Nonetheless, there are a few people working towards the goal of a scikit-learn/caret-like interface, but some basic things have to be implemented in Julia first (such as a detailed treatment of SVMs). A couple of us are interested and actively gravitating towards a common code base (e.g. loss functions, class encodings), but this takes time to flesh out and get right.
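
To make the "common code base" point concrete, here is a minimal sketch (all names hypothetical) of the kind of shared loss-function layer several packages could dispatch on:

    # Hypothetical sketch of a shared loss-function abstraction; an SVM
    # package and a logistic regression package could both build on it.
    abstract type Loss end

    struct HingeLoss    <: Loss end
    struct LogisticLoss <: Loss end

    # loss value at margin a = y * f(x), with labels y in {-1, +1}
    value(::HingeLoss, a::Real)    = max(0, 1 - a)
    value(::LogisticLoss, a::Real) = log1p(exp(-a))

    # derivative with respect to the margin, for gradient-based solvers
    deriv(::HingeLoss, a::Real)    = a < 1 ? -1.0 : 0.0
    deriv(::LogisticLoss, a::Real) = -1 / (1 + exp(a))

    # any solver can now be written generically over Loss
    risk(l::Loss, margins) = sum(value(l, a) for a in margins) / length(margins)

An SVM package would plug in HingeLoss, a logistic regression package LogisticLoss, and both could share the same solver code.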

On 2015-11-11 15:29, Randy Zwitch wrote:
Sure. I'm not against anyone doing anything; it just seems like Julia suffers from an "expert/edge case" problem right now. For me, it'd be awesome if there were a scikit-learn (Python) or caret (R) style mega-interface that ties together the packages that have already been written. From my cursory reading, TensorFlow seems more like a low-level toolkit for expressing/solving equations, whereas what I see Julia lacking is an easy way to quickly evaluate 3-5 different algorithms on the same dataset.
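
To make "mega-interface" concrete: what I'd want is one fit/predict contract that every wrapped package implements, so that comparing models is just a loop. A rough sketch, with every name hypothetical:

    # Hypothetical sketch of a caret/scikit-learn-style interface for
    # Julia: one fit/predict contract that many backends implement.
    abstract type Learner end

    # a trivial baseline learner so the sketch actually runs; real
    # backends (SVMs, trees, ...) would each add their own methods
    struct MajorityClass <: Learner end
    struct MajorityModel{T}
        label::T
    end

    function fit(::MajorityClass, X, y)
        counts = Dict{eltype(y),Int}()
        for label in y
            counts[label] = get(counts, label, 0) + 1
        end
        best = first(y)
        for (k, v) in counts
            v > counts[best] && (best = k)
        end
        return MajorityModel(best)
    end

    predict(m::MajorityModel, X) = fill(m.label, size(X, 1))

    # with that contract, "try 3-5 algorithms on one dataset" is a loop:
    function compare(learners, X, y, Xtest, ytest)
        for l in learners
            model = fit(l, X, y)
            acc = sum(predict(model, Xtest) .== ytest) / length(ytest)
            println(typeof(l), ": accuracy = ", acc)
        end
    end

    X = randn(20, 3); y = rand(0:1, 20)
    compare([MajorityClass()], X, y, X, y)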

A tweet I just saw sums it up pretty succinctly: "TensorFlow already has more stars than scikit-learn, and probably more stars than people actually doing deep learning"


On Tuesday, November 10, 2015 at 11:28:32 PM UTC-5, Alireza Nejati wrote:

    Randy: To answer your question, I'd reckon that the two major gaps
    in Julia that TensorFlow could fill are:

    1. Lack of automatic differentiation on arbitrary graph structures.
    2. Lack of the ability to map computations across CPUs and clusters.

    Funnily enough, I've been thinking about (1) for the past few weeks,
    and I think I have an idea about how to accomplish it using existing
    JuliaDiff libraries (a sketch is at the end of this message). About
    (2), I have no idea, and that's probably going to be the most
    important aspect of TensorFlow moving forward (and also probably the
    hardest to implement). So for the time being, I think it's
    definitely worthwhile to just have an interface to TensorFlow. There
    are a few ways this could be done. Some ways that I can think of:

    1. Just tell people to use PyCall directly. Not an elegant solution
    (a sketch of what this looks like follows this list).
    2. A more Julia-integrated interface à la SymPy.
    3. Using TensorFlow as the 'backend' of a novel Julia-based machine
    learning library. In this scenario, everything would be in Julia,
    and TensorFlow would only be used to map computations to hardware.
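
    For reference, option 1 already works; a rough sketch, assuming the
    tensorflow Python package is installed and using the graph/session
    API it currently exposes:

        using PyCall
        @pyimport tensorflow as tf

        a = tf.constant(2.0)
        b = tf.constant(3.0)
        c = tf.add(a, b)         # builds a graph node; nothing runs yet

        sess = tf.Session()      # a session maps the graph onto hardware
        println(sess[:run](c))   # executes the graph: prints 5.0

    It works, but you end up writing Python-shaped code in Julia, which
    is why options 2 and 3 are more appealing.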

    I think 3 is the most attractive option, but also probably the
    hardest to do.
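
    To give a taste of the JuliaDiff idea in (1): ForwardDiff.jl can
    already differentiate plain Julia functions via dual numbers; the
    open part is doing the same over arbitrary graph structures. A
    minimal sketch:

        using ForwardDiff

        f(x) = sum(x .^ 2) + 3.0 * x[1] * x[2]

        g(x) = ForwardDiff.gradient(f, x)   # exact gradient, no symbolic setup

        println(g([1.0, 2.0]))   # [2*1 + 3*2, 2*2 + 3*1] = [8.0, 7.0]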

