This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.2.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.2.0 by this push:
     new 3525cb9  mark MKLDNN experimantal. (#10661)
3525cb9 is described below

commit 3525cb9f3ec953a869e0803d1e71552565773314
Author: Da Zheng <[email protected]>
AuthorDate: Tue Apr 24 23:55:46 2018 -0700

    mark MKLDNN experimantal. (#10661)
---
 NEWS.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index fd537c4..2b984c2 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -11,7 +11,7 @@ MXNet Change Log
 - Implemented model quantization by adopting the [TensorFlow approach](https://www.tensorflow.org/performance/quantization) with calibration by borrowing the idea from Nvidia's [TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf). The focus of this work is on keeping quantized models (ConvNets for now) inference accuracy loss under control when compared to their corresponding FP32 models. Please see the [example](https://github.com/ap [...]
 
 ### New Features - MKL-DNN Integration
-- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN.
+- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN. Currently, the MKL-DNN integration is still experimental. Please use it with caution.
 
 ### New Features - Added Exception Handling Support for Operators
 - Implemented [Exception Handling Support for Operators](https://cwiki.apache.org/confluence/display/MXNET/Improved+exception+handling+in+MXNet) in MXNet. MXNet now transports backend C++ exceptions to the different language front-ends and prevents crashes when exceptions are thrown during operator execution (#9681).

-- 
To stop receiving notification emails like this one, please contact
[email protected].
