Hello dear MXNet community,
I would really appreciate it if a committer with CMake knowledge could take a
look at this PR: https://github.com/apache/incubator-mxnet/pull/14028
It is stated in the PR, but to mention it again:
the objective of the PR is to “*Ease the pain of linking with OpenBLAS
using
The Rust crate for TensorFlow supports only inference, which limits its usage. If
you really want to deploy your network, TensorRT and TVM may be better choices.
I really want to write a DL framework in Rust from scratch. However, there's no
mature GPU tensor library in Rust (rust-ndarray is a grea
Hello!
> MXNet is somehow slower than PyTorch, even with hybridize on, and that's
why I started writing bindings for PyTorch now.
I believe many people in this list will be very interested in why you say
this.
As far as I know, and correct me if I'm wrong, MXNet is supposed to be a
very fast, if no
+1
Built from source (Ubuntu 16.04) successfully and verified that the training
speed for ResNet50 is on par with the MXNet 1.3.1 release on a single
p3.16xlarge instance.
On Sun, Feb 17, 2019 at 12:13 PM Carin Meier wrote:
> +1 Downloaded and verified the signature on the tar. Built and tested the
> Sc
Hello,
the recurring user group, hosted by Berlin contributors, will be cancelled
for this week due to an availability clash.
Please excuse any inconvenience this may cause.
Best regards,
Marco
+1 Downloaded, installed on Ubuntu 16.04. Verified signatures.
Built from source with cuda enabled. Ran train_mnist.py test successfully.
Thanks,
Roshani
On Sun, Feb 17, 2019 at 12:13 PM Carin Meier wrote:
> +1 Downloaded and verified the signature on the tar. Built and tested the
> Scala/Cloju
I wrote some benchmark code, and here's the discussion:
https://discuss.mxnet.io/t/hybrid-training-speed-is-20-slower-than-pytorch/2731/3
There's another discussion here:
https://discuss.mxnet.io/t/performance-of-symbol-vs-ndarray-vs-pytorch/870/6
I slightly modified it:
https://gist.github.com/Sun
Hi,
Thanks for sharing the results. One problem with the benchmark is that the
comparison does not take into account that MXNet is making a copy while PyTorch
is not.
MXNet made the choice of not doing a zero-copy for numpy arrays, instead
making a copy of the numpy data. This means that users
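The copy vs zero-copy distinction above is what skews the timing. A minimal stdlib-only sketch (not MXNet code, and not MXNet's actual ingest path) of the same idea: a memoryview wraps the source buffer without copying, while bytes() duplicates it, so a fair benchmark has to account for the duplication cost on the side that copies.

```python
# Sketch of copy vs zero-copy semantics using only the Python stdlib.
src = bytearray(b"benchmark input data")

view = memoryview(src)   # zero-copy: shares src's buffer, like PyTorch's wrap
dup = bytes(src)         # copy: an independent buffer, like MXNet's numpy ingest

# Mutating the source shows which object shares memory with it.
src[0:9] = b"BENCHMARK"
print(bytes(view[0:9]))  # b'BENCHMARK' -> the view tracks the source buffer
print(dup[0:9])          # b'benchmark' -> the copy is unaffected
```

The copy costs O(n) time and memory up front, which is exactly the overhead the benchmark attributes to MXNet's compute instead of to data ingestion.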
Hi,
Please join me in welcoming Kan Wu (@wkcn), as a new committer!
Kan has made many valuable contributions to MXNet [1]. He also enriches
the MXNet ecosystem with his operator toolkit MobulaOP.
We are excited to have Kan join us as a committer.
-sz
[1]
https://github.com/apache/incubator-
Congratulations, Kan! Well deserved!
-Original Message-
From: Sheng Zha [mailto:szha@gmail.com]
Sent: Tuesday, February 19, 2019 2:10 PM
To: dev@mxnet.incubator.apache.org; d...@mxnet.apache.org
Cc: Anirudh Subramanian ; Jackie Wu
Subject: [Announcement] New Committer - Kan Wu
Congratulations!
We have cooperated with Kan before; he is easy to communicate with and very
professional :)
It's really well deserved!
> -Original Message-
> From: Lv, Tao A [mailto:tao.a...@intel.com]
> Sent: Tuesday, February 19, 2019 2:17 PM
> To: dev@mxnet.incubator.apache.org; d...@