eric-haibin-lin closed pull request #10661: mark MKLDNN experimental.
URL: https://github.com/apache/incubator-mxnet/pull/10661
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/NEWS.md b/NEWS.md
index fd537c4055b..2b984c22bd4 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -11,7 +11,7 @@ MXNet Change Log
 - Implemented model quantization by adopting the [TensorFlow approach](https://www.tensorflow.org/performance/quantization) with calibration by borrowing the idea from Nvidia's [TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf). The focus of this work is on keeping quantized models (ConvNets for now) inference accuracy loss under control when compared to their corresponding FP32 models. Please see the [example](https://github.com/apache/incubator-mxnet/tree/master/example/quantization) on how to quantize a FP32 model with or without calibration (#9552).
 
 ### New Features - MKL-DNN Integration
-- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN.
+- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversion to get the maximal performance from MKL-DNN. Currently, the MKL-DNN integration is still experimental. Please use it with caution.
 
 ### New Features - Added Exception Handling Support for Operators
 - Implemented [Exception Handling Support for Operators](https://cwiki.apache.org/confluence/display/MXNET/Improved+exception+handling+in+MXNet) in MXNet. MXNet now transports backend C++ exceptions to the different language front-ends and prevents crashes when exceptions are thrown during operator execution (#9681).
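The quantization entry above refers to calibration-based 8-bit quantization. As a rough illustration only, the following is a minimal sketch of the min-max symmetric quantization idea in plain Python; it is not MXNet's implementation, and the function names are hypothetical.

```python
# Hypothetical sketch of calibration-based symmetric int8 quantization.
# The calibrated range (cal_min, cal_max) would come from observing
# activations on a calibration dataset; here it is supplied directly.

def quantize_int8(values, cal_min, cal_max):
    """Map float values into int8 range [-127, 127] using a calibrated range."""
    abs_max = max(abs(cal_min), abs(cal_max))
    scale = abs_max / 127.0 if abs_max else 1.0
    # Clip to the representable range, then round to the nearest integer.
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the int8 representation."""
    return [q * scale for q in quantized]

# Round-trip example: the reconstruction error is bounded by the scale.
vals = [0.5, -1.2, 3.4, -2.0]
q, s = quantize_int8(vals, cal_min=-2.0, cal_max=3.4)
approx = dequantize(q, s)
```

Choosing the calibration range well is the crux: a range that is too wide wastes int8 resolution, while one that is too narrow clips outliers, which is why the changelog emphasizes keeping accuracy loss "under control".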


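The MKL-DNN entry mentions reducing "data layout conversion". As an analogy only (MKL-DNN's real blocked layouts such as nChw8c are more involved), the cost being avoided is a buffer reorder like this NCHW-to-NHWC example; the function name is hypothetical.

```python
# Hypothetical illustration of a data layout conversion: reordering a flat
# NCHW buffer into NHWC order. Every such reorder touches the whole tensor,
# which is the overhead that keeping MKL-DNN layouts inside NDArray avoids.

def nchw_to_nhwc(flat, n, c, h, w):
    """Reorder a flat NCHW buffer into NHWC order."""
    out = [0] * (n * c * h * w)
    for ni in range(n):
        for ci in range(c):
            for hi in range(h):
                for wi in range(w):
                    src = ((ni * c + ci) * h + hi) * w + wi   # NCHW offset
                    dst = ((ni * h + hi) * w + wi) * c + ci   # NHWC offset
                    out[dst] = flat[src]
    return out

# A 1x2x2x2 tensor: channel 0 is [0, 1, 2, 3], channel 1 is [4, 5, 6, 7].
buf = list(range(8))
print(nchw_to_nhwc(buf, 1, 2, 2, 2))  # interleaves the two channels
```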
 

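The exception-handling entry describes capturing a backend C++ exception and re-raising it in the language front end instead of crashing. The following is only an analogy in pure Python (the `AsyncOp` class is hypothetical, not MXNet code): an exception raised in a worker thread is captured and re-raised at the caller's `wait()`, mirroring how an error from asynchronous operator execution surfaces at the synchronization point.

```python
# Hypothetical analogy for transporting a backend exception to the front end:
# capture the exception where it is raised (a worker thread standing in for
# the C++ backend) and re-raise it where the caller synchronizes.
import threading

class AsyncOp:
    def __init__(self, fn):
        self._exc = None
        self._result = None
        self._thread = threading.Thread(target=self._run, args=(fn,))
        self._thread.start()

    def _run(self, fn):
        try:
            self._result = fn()
        except Exception as exc:
            self._exc = exc          # capture instead of crashing the worker

    def wait(self):
        """Block until done; re-raise any captured exception to the caller."""
        self._thread.join()
        if self._exc is not None:
            raise self._exc
        return self._result

op = AsyncOp(lambda: 1 / 0)
try:
    op.wait()
except ZeroDivisionError:
    print("caught backend error in the front end")
```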
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
