Relevant to this conversation about Rust / Swift bindings is this
recent interview with Yann LeCun, which I think hits the nail on the
head:
https://venturebeat.com/2019/02/18/facebooks-chief-ai-scientist-deep-learning-may-need-a-new-programming-language/amp/

The questions are: do people and the science community really want
something different from Python?  If so, what language would it be?
Rust would be good for production, but for experimentation the most
likely candidates are languages like Swift or Julia.

Pedro.

On Tue, Feb 19, 2019 at 5:53 AM Sheng Zha <zhash...@apache.org> wrote:
>
> Hi,
>
> Thanks for sharing the results. A problem with the benchmark is that the 
> comparison does not take into account that MXNet is making a copy while 
> PyTorch is not.
>
> MXNet made the choice of not doing a zero-copy for numpy arrays, but instead 
> making a copy of the numpy data. This means that users are free to change the 
> numpy array after passing it into MXNet. On the other hand, PyTorch chose not 
> to make a copy: it keeps the array alive by incrementing the reference 
> count and then reuses the data pointer.
>
> This also explains why PyTorch fp16 is so much worse than fp32 in your 
> results (`.half()` has to make a copy).
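
The copy-versus-zero-copy distinction above can be sketched in plain NumPy. This is only an analogy (no MXNet or PyTorch calls): a NumPy copy stands in for the copying behavior described for `mx.nd.array`, and a NumPy view, which keeps its source alive via a reference, stands in for the zero-copy sharing described for PyTorch:

```python
import numpy as np

a = np.arange(4, dtype=np.float32)

# Copy semantics (as described for MXNet): mutating the source
# afterwards does not affect the framework's buffer.
copied = a.copy()
a[0] = 99.0
assert copied[0] == 0.0  # unchanged: the copy has its own data

# Zero-copy semantics (as described for PyTorch): the new array shares
# the source buffer, and holding a reference to the base keeps the
# underlying data alive.
shared = a.view()
assert shared.base is a  # the view keeps `a` alive
a[1] = -1.0
assert shared[1] == -1.0  # the mutation is visible through the view
```

The trade-off is the one discussed in the thread: the copy is slower but lets the caller freely mutate the source array; the view is free but aliases the caller's memory.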
>
> If you control for that factor, you will find MXNet to be 50%-100% faster on 
> your workload. I shared the results in your gist comments [1]. Feel free to 
> let me know if you have questions.
>
> -sz
>
> [1] 
> https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab#gistcomment-2841120
>
> On 2019/02/19 02:33:20, epsund...@gmail.com <epsund...@gmail.com> wrote:
> > I wrote some benchmark code, and here's the discussion:
> > https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3
> >
> > There's another discussion here:
> > https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6
> >
> > I slightly modified it:
> > https://gist.github.com/SunDoge/59a8ff336703b45be30b46dc3ee8b4ab
> >
> >
> > On 2019/02/18 19:26:27, Edison Gustavo Muenz <edisongust...@gmail.com> 
> > wrote:
> > > Hello!
> > >
> > > > MXNet is somehow slower than PyTorch, even with hybridize on, and that's
> > > > why I started writing bindings for PyTorch.
> > >
> > > I believe many people in this list will be very interested in why you say
> > > this.
> > >
> > > As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
> > > very fast, if not the fastest, DL framework, in terms of raw performance
> > > numbers.
> > >
> > > Would you mind expanding on what you mean? I'm genuinely interested.
> > >
> > > Best,
> > > Edison Gustavo Muenz
> > >
> > > On Mon 18. Feb 2019 at 17:28, epsund...@gmail.com <epsund...@gmail.com>
> > > wrote:
> > >
> > > > The Rust crate for TensorFlow supports only inference, which limits its
> > > > usage. If you really want to deploy your network, TensorRT and TVM may be
> > > > better choices.
> > > >
> > > > I really want to write a DL framework in Rust from scratch. However,
> > > > there's no mature GPU tensor library in Rust (rust-ndarray is a great
> > > > crate but it only supports CPU; arrayfire may support ND arrays in the
> > > > future, which makes it a good candidate). So I have to write bindings for
> > > > an existing project, which is much easier. The benefit is that I can
> > > > safely wrap those unsafe C pointers, and with the help of generics, I can
> > > > manipulate data with ndarray in a type-safe way.
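
The "safely wrap unsafe C pointers with generics" idea above could look roughly like the sketch below. It is a minimal, self-contained illustration: the hypothetical `Tensor<T>` type and its allocation are made up for the example (the buffer comes from Rust itself rather than a real C API), but the pattern — raw pointer hidden behind a typed struct, bounds checks around `unsafe`, and `Drop` guaranteeing the free — is the one being described:

```rust
// A generic wrapper that owns a raw pointer. In a real binding the
// pointer would come from an FFI call and Drop would call the C API's
// free function; here we allocate from Rust to stay self-contained.
struct Tensor<T> {
    ptr: *mut T,
    len: usize,
}

impl<T: Copy + Default> Tensor<T> {
    fn new(len: usize) -> Self {
        let boxed: Box<[T]> = vec![T::default(); len].into_boxed_slice();
        let ptr = Box::into_raw(boxed) as *mut T;
        Tensor { ptr, len }
    }

    fn set(&mut self, i: usize, v: T) {
        assert!(i < self.len); // bounds check before the unsafe write
        unsafe { *self.ptr.add(i) = v };
    }

    fn get(&self, i: usize) -> T {
        assert!(i < self.len); // bounds check before the unsafe read
        unsafe { *self.ptr.add(i) }
    }
}

impl<T> Drop for Tensor<T> {
    fn drop(&mut self) {
        // Rebuild the box so the allocation is freed exactly once.
        unsafe {
            let _ = Box::from_raw(std::ptr::slice_from_raw_parts_mut(self.ptr, self.len));
        }
    }
}

fn main() {
    let mut t: Tensor<f32> = Tensor::new(4);
    t.set(0, 1.5);
    println!("{}", t.get(0));
}
```

Callers only ever see the safe `set`/`get` API; the `unsafe` blocks are confined to the wrapper, which is the benefit claimed above.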
> > > >
> > > > The only difficulty is that I'm a postgraduate, and I'm pretty sure my
> > > > boss won't be happy to see me writing Rust code instead of doing research.
> > > > Besides, MXNet is somehow slower than PyTorch, even with hybridize on, and
> > > > that's why I started writing bindings for PyTorch.
> > > >
> > > > On 2019/02/09 01:35:04, Zach Boldyga <z...@scalabull.com> wrote:
> > > > > I did some homework and stumbled across something that changed my view
> > > > > of where machine learning libraries are headed:
> > > > >
> > > > > https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> > > > >
> > > > > Google & Apple are building first-class support for TensorFlow right
> > > > > into the Swift language. They chose Swift very carefully, and while they
> > > > > noted Rust is a great choice for lots of reasons, the learning curve of
> > > > > the language is too steep... It seems like Rust isn't going to get much
> > > > > love from the ML community in the places that matter.
> > > > >
> > > > > I also see that, as of writing this, the Rust crate for TensorFlow has
> > > > > only ~10,000 lifetime downloads, which is pretty low considering how
> > > > > much effort the client library required. So the existing set of
> > > > > practitioners in the language is very small, and it's unlikely to grow.
> > > > >
> > > > > Also, the benefits of Rust memory safety and ownership won't really be
> > > > > realized via a client library that uses FFI on a C API.
> > > > >
> > > > > I'm not going to move forward with this client lib. I'll check back
> > > > > here in the future and see if there's any activity... In the meantime,
> > > > > if someone stumbles across this in the future and wants to pick it up,
> > > > > don't let me stand in the way!
> > > > >
> > > > > - Zach
> > > > >
> > > > >
> > > > > On Wed, Jan 30, 2019 at 11:16 PM Zach Boldyga <z...@scalabull.com> wrote:
> > > > >
> > > > > > Rad, thanks for the input everyone!
> > > > > >
> > > > > > I'm anticipating some friction with using FFI with the C API since
> > > > > > it's considered unsafe in Rust; the difficulty of integrating will
> > > > > > depend on the nuances of the C API, as HY mentioned...
> > > > > >
> > > > > > Going to go ahead and dive in. Will be back eventually for feedback 
> > > > > > /
> > > > > > input!
> > > > > >
> > > > > > Zach Boldyga
> > > > > > Scalabull  |  Founder
> > > > > > 1 (866) 846-8771 x 101
> > > > > >
> > > > > >
> > > > > > On Wed, Jan 30, 2019 at 12:02 AM HY Chen <chenhy12...@gmail.com> wrote:
> > > > > >
> > > > > >> I have tried to create a module via existing Rust FFI generators but
> > > > > >> failed. It seems like you have to think a lot more than just
> > > > > >> translating the C API to make it work. It's better to understand the
> > > > > >> C API first and make sure it won't introduce new problems in Rust.
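
The hand-written binding pattern this suggests might look like the sketch below. `c_compute` is a locally defined stand-in for a real C API function so the example is self-contained; a real binding would instead declare the library's functions in an `extern "C"` block and link against it. The point is the idiom: a C-style integer error code at the boundary, converted into a Rust `Result` so callers never touch raw pointers or return codes:

```rust
// Stand-in for a C API function: writes its result through an out
// pointer and returns 0 on success, nonzero on error.
extern "C" fn c_compute(input: i32, out: *mut i32) -> i32 {
    if out.is_null() {
        return -1;
    }
    unsafe { *out = input * 2 };
    0
}

// Safe wrapper: the raw pointer and the error-code convention are
// hidden behind an ordinary Rust function returning Result.
fn compute(input: i32) -> Result<i32, String> {
    let mut out: i32 = 0;
    let rc = c_compute(input, &mut out);
    if rc != 0 {
        Err(format!("C API call failed with code {}", rc))
    } else {
        Ok(out)
    }
}

fn main() {
    println!("{:?}", compute(21));
}
```

Generators can emit the `extern "C"` declarations, but deciding which calls can fail, who owns each pointer, and what invariants the C side assumes — HY's point above — still has to be done by hand.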
> > > > > >>
> > > > > >> HY
> > > > > >>
> > > > > >> Pedro Larroy <pedro.larroy.li...@gmail.com> 于2019年1月30日周三 上午4:35写道:
> > > > > >>
> > > > > >> > I have been thinking about this, and I find it really exciting to
> > > > > >> > have Rust bindings and bring a powerful framework like MXNet to the
> > > > > >> > Rust community and to native applications in a convenient Rust
> > > > > >> > crate. I would love to see this happen. I think basically MXNet
> > > > > >> > needs to be wrapped in a Rust crate via FFI / C bindings.
> > > > > >> >
> > > > > >> > Pedro.
> > > > > >> >
> > > > > >> > On Tue, Jan 29, 2019 at 11:05 AM Zach Boldyga <z...@scalabull.com> wrote:
> > > > > >> > >
> > > > > >> > > Hey y'all!
> > > > > >> > >
> > > > > >> > > I'm thinking about spending this week working on a Rust client
> > > > > >> > > lib for MXNet. I saw a little bit of chatter about this in the
> > > > > >> > > GitHub issues and no strong existing crates at the moment. Any
> > > > > >> > > pointers on approaching this in a way that will lead to it being
> > > > > >> > > adopted as an officially supported client library? And overall,
> > > > > >> > > yay/nay on whether adding a Rust lib makes sense, and why / why
> > > > > >> > > not?
> > > > > >> > >
> > > > > >> > > Zach Boldyga
> > > > > >> > > Scalabull  |  Founder
> > > > > >> > > 1 (866) 846-8771 x 101
> > > > > >> >
> > > > > >>
> > > > > >
> > > > >
> > > >
> > >
> >
