This is an automated email from the ASF dual-hosted git repository.
bgawrych pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git
The following commit(s) were added to refs/heads/master by this push:
new 8d933fdcdb Add proper link to scripts in quantization with INC example (#21133)
8d933fdcdb is described below
commit 8d933fdcdbcc3b8b0ba04a0e584778e62e988f08
Author: Andrzej Kotłowski <[email protected]>
AuthorDate: Mon Aug 29 11:32:07 2022 +0200
Add proper link to scripts in quantization with INC example (#21133)
---
.../performance/backend/dnnl/dnnl_quantization_inc.md | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md b/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
index b2b7eed957..081792d318 100644
--- a/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
+++ b/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
@@ -158,7 +158,7 @@ Since this model already achieves good accuracy using native quantization (less
This example shows how to use INC to quantize ResNet50 v2. In this case, the
native MXNet quantization introduces a huge accuracy drop (70% using `naive`
calibration mode) and INC allows a better solution to be found automatically.
-This is the (TODO link to INC configuration file) for this example:
+This is the [INC configuration file](https://github.com/apache/incubator-mxnet/blob/master/example/quantization_inc/resnet50v2_mse.yaml) for this example:
```yaml
version: 1.0
@@ -182,7 +182,7 @@ tuning:
```
It could be used with the script below
-(TODO link to resnet_mse.py)
+([resnet_mse.py](https://github.com/apache/incubator-mxnet/blob/master/example/quantization_inc/resnet_mse.py))
to find the operator which caused the most significant accuracy drop and exclude
it from quantization.
You can find a description of the MSE strategy
[here](https://github.com/intel/neural-compressor/blob/master/docs/tuning_strategies.md#user-content-mse).
@@ -241,10 +241,10 @@ print(quantizer.strategy.best_qmodel.q_config['quant_cfg'])
#### Results:
The ResNet50 v2 model can be prepared to achieve better performance with various
calibration and tuning methods.
It is done by the
-(TODO link to resnet_tuning.py)
+[resnet_tuning.py](https://github.com/apache/incubator-mxnet/blob/master/example/quantization_inc/resnet_tuning.py)
script on a small part of the data set to reduce the time required for tuning (9
batches).
Later, the saved models are validated on the whole data set by the
-(TODO link to resnet_measurment.py)
+[resnet_measurment.py](https://github.com/apache/incubator-mxnet/blob/master/example/quantization_inc/resnet_measurment.py)
script.
Accuracy results on the whole validation dataset (782 batches) are shown below.
@@ -274,7 +274,7 @@ to find the optimized model and final model performance efficiency, different st
better results for specific models and tasks. You can notice that the most
important thing done by INC was to find the operator which had the most
significant impact on the loss of accuracy and exclude it from quantization if
needed.
You can see below which operator was excluded by the `mse` strategy in the last
print given by the
-(TODO link to resnet_mse.py)
+[resnet_mse.py](https://github.com/apache/incubator-mxnet/blob/master/example/quantization_inc/resnet_mse.py)
script:
{'excluded_symbols': ['**sg_onednn_conv_bn_act_0**'], 'quantized_dtype': 'auto', 'quantize_mode': 'smart', 'quantize_granularity': 'tensor-wise'}
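As a rough illustration of the idea behind the `mse` tuning strategy referenced in the patched tutorial (not part of this commit): operators are ranked by the mean squared error between their float32 and quantized outputs, and the worst offender is excluded from quantization. The sketch below is a minimal stand-alone analogy; the activation values are made up, and only the operator name `sg_onednn_conv_bn_act_0` comes from the printout above:

```python
# Sketch of an MSE-based ranking, loosely analogous to INC's `mse` strategy.
# Activation values are invented for illustration only.

def mse(fp32_out, int8_out):
    """Mean squared error between two equal-length activation lists."""
    return sum((a - b) ** 2 for a, b in zip(fp32_out, int8_out)) / len(fp32_out)

# Hypothetical per-operator outputs: (fp32 reference, quantized run).
outputs = {
    'sg_onednn_conv_bn_act_0': ([0.0, 1.0, 2.0], [0.5, 2.0, 3.5]),
    'sg_onednn_conv_1':        ([0.0, 1.0, 2.0], [0.1, 1.1, 2.1]),
}

# Rank operators by error and exclude the one hurting accuracy most.
errors = {op: mse(f, q) for op, (f, q) in outputs.items()}
worst = max(errors, key=errors.get)
quant_cfg = {'excluded_symbols': [worst]}
print(quant_cfg)  # {'excluded_symbols': ['sg_onednn_conv_bn_act_0']}
```

This mirrors, in miniature, why the real strategy ended up excluding the fused conv-bn-act operator in the printed `quant_cfg`.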