Committer with CMake knowledge wanted to review PR improving OpenBLAS integration

2019-02-18 Thread Edison Gustavo Muenz
Hello dear MXNet community, I would really appreciate it if a committer with CMake knowledge could take a look at this PR: https://github.com/apache/incubator-mxnet/pull/14028 It is stated in the PR, but just mentioning it again: the objective of the PR is to "Ease the pain of linking with OpenBLAS using

Re: Rust Client Lib

2019-02-18 Thread epsundoge
The Rust crate for TensorFlow supports only inference, which limits its usage. If you really want to deploy your network, TensorRT and TVM may be better choices. I really want to write a DL framework in Rust from scratch. However, there's no mature GPU tensor library in Rust (rust-ndarray is a grea

Re: Rust Client Lib

2019-02-18 Thread Edison Gustavo Muenz
Hello! > mxnet is somehow slower than pytorch, even with hybridize on, and that's why I start writing binding for pytorch now. I believe many people on this list will be very interested in why you say this. As far as I know, and correct me if I'm wrong, MXNet is supposed to be a very fast, if no

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc3

2019-02-18 Thread Yuxi Hu
+1 Built from source (Ubuntu 16.04) successfully and verified that the training speed for ResNet50 is on par with the MXNet 1.3.1 release on a single p3.16xlarge instance. On Sun, Feb 17, 2019 at 12:13 PM Carin Meier wrote: > +1 Downloaded and verified the signature on the tar. Built and tested the > Sc

Apache MXNet (Incubating) User Group Berlin - cancelled on 02/19/19

2019-02-18 Thread Marco de Abreu
Hello, the recurring user group, hosted by Berlin contributors, will be cancelled this week due to an availability clash. Please excuse any inconvenience this may cause. Best regards, Marco

Re: [VOTE] Release Apache MXNet (incubating) version 1.4.0.rc3

2019-02-18 Thread Roshani Nagmote
+1 Downloaded and installed on Ubuntu 16.04. Verified signatures. Built from source with CUDA enabled. Ran train_mnist.py test successfully. Thanks, Roshani On Sun, Feb 17, 2019 at 12:13 PM Carin Meier wrote: > +1 Downloaded and verified the signature on the tar. Built and tested the > Scala/Cloju

Re: Rust Client Lib

2019-02-18 Thread epsundoge
I wrote some benchmark code, and here's the discussion: https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3 There's another discussion here: https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6 I slightly modified it: https://gist.github.com/Sun

Re: Rust Client Lib

2019-02-18 Thread Sheng Zha
Hi, Thanks for sharing the results. A problem in the benchmark is that the comparison does not take into account that MXNet is making a copy while PyTorch is not. MXNet made the choice of not doing a zero-copy for NumPy arrays, but instead making a copy of the NumPy data. This means that users
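[Editor's note: the copy-vs-zero-copy distinction Sheng describes can be illustrated with plain NumPy alone. The framework calls in question are `mx.nd.array(np_arr)`, which copies the NumPy buffer, and `torch.from_numpy(np_arr)`, which shares it; the sketch below uses `np.array` and `np.asarray` as stand-ins for those two behaviors so it runs without MXNet or PyTorch installed.]

```python
import numpy as np

src = np.arange(4, dtype=np.float32)

# Copy semantics (analogous to mx.nd.array): a new buffer is allocated,
# so later writes to the source are not visible in the result.
copied = np.array(src)
src[0] = 99.0
assert copied[0] == 0.0

# Zero-copy semantics (analogous to torch.from_numpy): the same buffer
# is shared, so writes to the source are visible through the view.
view = np.asarray(src)
src[1] = 42.0
assert view[1] == 42.0
```

A benchmark that feeds NumPy arrays into both frameworks therefore charges MXNet for an extra memcpy per iteration that PyTorch never pays, which can dominate timings on small tensors.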

[Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Sheng Zha
Hi, Please join me in welcoming Kan Wu (@wkcn), as a new committer! Kan has brought many valuable contributions to MXNet [1]. He also enriches the MXNet ecosystem with his operator toolkit MobulaOP. We are excited to have Kan join us as a committer. -sz [1] https://github.com/apache/incubator-

RE: [Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Lv, Tao A
Congratulations Kan! It's well deserved! -Original Message- From: Sheng Zha [mailto:szha@gmail.com] Sent: Tuesday, February 19, 2019 2:10 PM To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org Cc: Anirudh Subramanian ; Jackie Wu Subject: [Announcement] New Committer - Kan Wu

RE: [Announcement] New Committer - Kan Wu (@wkcn)

2019-02-18 Thread Zhao, Patric
Congratulation! We have the cooperation with Kan before and he is easy to communicate and very professional :) It's really deserved! > -Original Message- > From: Lv, Tao A [mailto:tao.a...@intel.com] > Sent: Tuesday, February 19, 2019 2:17 PM > To: dev@mxnet.incubator.apache.org; d...@