+1

Built from the dist [1] on Ubuntu 16.04 DL AMI for CPU + MKLDNN.

Tested:
1. OpPerf (benchmark utility) - promising results (faster forward times
for certain ops compared to 1.4.0 and 1.5.1)
2. Large tensor support (built with the USE_INT64_TENSOR_SIZE=ON flag):
tests pass
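For anyone wanting to reproduce a similar verification, the build described above can be sketched roughly as follows. This is a sketch, not the exact commands used: the flag names follow the MXNet 1.x build documentation, but please verify them against the install instructions shipped with the release candidate.

```shell
# Sketch of a comparable CPU + MKL-DNN build of the 1.6.0.rc1 tag with
# large-tensor (int64 index) support enabled. Flag names are taken from
# the MXNet 1.x CMake build docs; double-check before relying on them.
git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
cd mxnet
git checkout 1.6.0.rc1
mkdir -p build && cd build
cmake -DUSE_CUDA=OFF -DUSE_MKLDNN=ON -DUSE_INT64_TENSOR_SIZE=ON ..
make -j"$(nproc)"
```

With USE_INT64_TENSOR_SIZE=ON, tensor indices are 64-bit, which is what allows the large tensor tests mentioned above to pass.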
Thanks Przemyslaw for leading 1.6.0! It has taken a while, but we're
close to the finish line. Awesome work!

Thanks,
Chai

[1] https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.6.0.rc1/

On Thu, 9 Jan 2020 at 22:05, Chen, Ciyong <ciyong.c...@intel.com> wrote:

> +1
>
> Built from source on CentOS 7.6 with GCC 4.8.5 with MKLDNN.
> Unit tests passed, and the ImageNet examples (with the MKL-DNN subgraph
> backend) looked good on performance and accuracy in both FP32 and INT8
> mode; RNN training worked.
>
> Thanks,
> -Ciyong
>
> -----Original Message-----
> From: Lai Wei <roywei...@gmail.com>
> Sent: Wednesday, January 8, 2020 8:56 AM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org
> Subject: Re: [VOTE] Release Apache MXNet (incubating) version 1.6.0.rc1
>
> +1
> Built from source on Ubuntu with CUDA/CUDNN/MKLDNN and tested with
> keras-mxnet.
> Unit tests passed and examples work on CPU/GPU.
>
> Best Regards
>
> Lai
>
> On Tue, Jan 7, 2020 at 11:49 AM Lin Yuan <apefor...@gmail.com> wrote:
>
> > Correction: it was built from source on Ubuntu 16.04
> >
> > On Tue, Jan 7, 2020 at 11:42 AM Lin Yuan <apefor...@gmail.com> wrote:
> >
> > > +1
> > >
> > > Built from source on Ubuntu 18 with CUDA/CUDNN/NCCL on, and verified
> > > that it works with Horovod 0.18.2
> > >
> > > On Tue, Jan 7, 2020 at 9:55 AM Przemysław Trędak
> > > <ptre...@apache.org> wrote:
> > >
> > >> Dear MXNet community,
> > >>
> > >> This is the vote to release Apache MXNet (incubating) version 1.6.0.
> > >> Voting starts today and will close on Friday 1/10/2020 23:59 PST.
> > >>
> > >> Link to release notes:
> > >> https://cwiki.apache.org/confluence/display/MXNET/1.6.0+Release+notes
> > >>
> > >> Link to release candidate:
> > >> https://github.com/apache/incubator-mxnet/releases/tag/1.6.0.rc1
> > >>
> > >> Link to source and signatures on apache dist server:
> > >> https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.6.0.rc1/
> > >>
> > >> The differences compared to the previous release candidate 1.6.0.rc0:
> > >> * Fix for RNN gradient calculation for MKLDNN ([v1.6.x] Cherry-pick
> > >> MKL-DNN Rnn operator enhancements to v1.6.x (#17225))
> > >> * Fix for Windows CMake build (Backport #16980 #17031 #17018 #17019
> > >> to 1.6 branch (#17213))
> > >> * CPU counterpart to contrib multihead attention operators
> > >> (Interleaved MHA for CPU path (#17138) (#17211))
> > >> * Fix for #16060 (fix norm sparse fallback (#17149))
> > >> * Fix for inconsistent names in estimator API (fix parameter names
> > >> in the estimator api (#17051) (#17162))
> > >> * Fixes for OpenMP (Backport 3rdparty/openmp fixes (#17193))
> > >> * Fix for pointwise fusion speed for large networks (which was the
> > >> reason for the -1 in the vote for rc0), as well as fixes for
> > >> nondeterminism in the sum of squares operator and trainer parameter
> > >> order (Backport #17002, #17068 and #17114 to 1.6 branch (#17137))
> > >>
> > >> Please remember to TEST first before voting accordingly:
> > >> +1 = approve
> > >> +0 = no opinion
> > >> -1 = disapprove (provide reason)
> > >>
> > >> Best regards,
> > >> Przemyslaw Tredak
> > >>
> > >
> >

--
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*