+1

Built from source on CentOS 7.6 with MKL/DNNL enabled and verified that it works
with the DNNL backend for FP32/INT8 inference.
Both functionality and performance are great 😊
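
For anyone who wants to reproduce the sanity check, here is a minimal sketch
(assuming a Python environment with the freshly built wheel installed; the tiny
Gluon model is purely illustrative, not what I actually benchmarked):

import mxnet as mx
from mxnet.runtime import Features

# Confirm the build was compiled with MKL-DNN (DNNL) support.
assert Features().is_enabled('MKLDNN'), 'this build lacks MKL-DNN'

# Tiny FP32 forward pass on CPU; MKL-DNN kernels are picked up automatically.
net = mx.gluon.nn.Dense(10)
net.initialize()
out = net(mx.nd.ones((1, 128)))
print(out.shape)  # (1, 10)

# INT8 inference goes through the calibration/quantization flow in
# mxnet.contrib.quantization (not shown here).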

Thanks
Xinyu

On 2020/1/8, 1:55 AM, "Przemysław Trędak" <ptre...@apache.org> wrote:

    Dear MXNet community,
    
    This is the vote to release Apache MXNet (incubating) version 1.6.0. Voting 
starts today and will close on Friday 1/10/2020 23:59 PST.
    
    Link to release notes:
    https://cwiki.apache.org/confluence/display/MXNET/1.6.0+Release+notes
    
    Link to release candidate:
    https://github.com/apache/incubator-mxnet/releases/tag/1.6.0.rc1
    
    Link to source and signatures on apache dist server:
    https://dist.apache.org/repos/dist/dev/incubator/mxnet/1.6.0.rc1/
    
    The differences compared to the previous release candidate 1.6.0.rc0:
    * Fix for RNN gradient calculation for MKLDNN ([v1.6.x] Cherry-pick MKL-DNN 
Rnn operator enhancements to v1.6.x (#17225))
    * Fix for Windows CMake build (Backport #16980 #17031 #17018 #17019 to 1.6 
branch (#17213))
    * CPU counterpart to contrib multihead attention operators (Interleaved MHA 
for CPU path (#17138) (#17211))
    * Fix for #16060 (fix norm sparse fallback (#17149))
    * Fix for inconsistent names in estimator API (fix parameter names in the 
estimator api (#17051) (#17162))
    * Fixes for OpenMP (Backport 3rdparty/openmp fixes (#17193))
    * Fix for pointwise fusion speed for large networks (the reason for the -1 
vote on rc0), as well as fixes for nondeterminism in the sum-of-squares 
operator and in trainer parameter order (Backport #17002, #17068 and 
#17114 to 1.6 branch (#17137))
    
    
    Please remember to TEST first, then vote accordingly:
    +1 = approve
    +0 = no opinion
    -1 = disapprove (provide reason)
    
    
    Best regards,
    Przemyslaw Tredak
    
