This is an automated email from the ASF dual-hosted git repository.

bgawrych pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
     new 7d602e3b23 [DOC] Fix the table in Improving accuracy with INC (#21140)
7d602e3b23 is described below

commit 7d602e3b2382eb501fdeb94c4d97e652a723af11
Author: Andrzej Kotłowski <[email protected]>
AuthorDate: Mon Sep 26 14:15:06 2022 +0200

    [DOC] Fix the table in Improving accuracy with INC (#21140)
    
    Fix formatting of the table in
    https://mxnet.apache.org/versions/master/api/python/docs/tutorials/performance/backend/dnnl/dnnl_quantization_inc.html
---
 .../backend/dnnl/dnnl_quantization_inc.md            | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md b/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
index 4841bce979..c1e85fc6c7 100644
--- a/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
+++ b/docs/python_docs/python/tutorials/performance/backend/dnnl/dnnl_quantization_inc.md
@@ -250,15 +250,15 @@ Accuracy results on the whole validation dataset (782 batches) are shown below.
 
 | Optimization method  | Top 1 accuracy | Top 5 accuracy | Top 1 relative accuracy loss [%] | Top 5 relative accuracy loss [%] | Cost = one-time optimization on 9 batches [s] | Validation time [s] | Speedup |
 |----------------------|-------:|-------:|------:|------:|-------:|--------:|------:|
-| fp32 no optimization 0.7699 | 0.9340 |  0.00 |  0.00 |   0.00 | 316.50 | 1.0 |
-| fp32 fused           0.7699 | 0.9340 |  0.00 |  0.00 |   0.03 | 147.77 | 2.1 |
-| int8 full naive      0.2207 | 0.3912 | 71.33 | 58.12 |  11.29 |  45.81 | **6.9** |
-| int8 full entropy    0.6933 | 0.8917 |  9.95 |  4.53 |  80.23 |  46.39 | 6.8 |
-| int8 smart naive     0.2210 | 0.3905 | 71.29 | 58.19 |  11.15 |  46.02 | 6.9 |
-| int8 smart entropy   0.6928 | 0.8910 | 10.01 |  4.60 |  79.75 |  45.98 | 6.9 |
-| int8 INC basic       0.7692 | 0.9331 | **0.09** |  0.10 | 266.50 |  48.32 | **6.6** |
-| int8 INC mse         0.7692 | 0.9337 | **0.09** |  0.03 | 106.50 |  49.76 | **6.4** |
-| int8 INC mycustom    0.7699 | 0.9338 | **0.00** |  0.02 | 370.29 |  70.07 | **4.5** |
+| fp32 no optimization | 0.7699 | 0.9340 |  0.00 |  0.00 |   0.00 | 316.50 | 1.0 |
+| fp32 fused           | 0.7699 | 0.9340 |  0.00 |  0.00 |   0.03 | 147.77 | 2.1 |
+| int8 full naive      | 0.2207 | 0.3912 | 71.33 | 58.12 |  11.29 |  45.81 | **6.9** |
+| int8 full entropy    | 0.6933 | 0.8917 |  9.95 |  4.53 |  80.23 |  46.39 | 6.8 |
+| int8 smart naive     | 0.2210 | 0.3905 | 71.29 | 58.19 |  11.15 |  46.02 | 6.9 |
+| int8 smart entropy   | 0.6928 | 0.8910 | 10.01 |  4.60 |  79.75 |  45.98 | 6.9 |
+| int8 INC basic       | 0.7692 | 0.9331 | **0.09** |  0.10 | 266.50 |  48.32 | **6.6** |
+| int8 INC mse         | 0.7692 | 0.9337 | **0.09** |  0.03 | 106.50 |  49.76 | **6.4** |
+| int8 INC mycustom    | 0.7699 | 0.9338 | **0.00** |  0.02 | 370.29 |  70.07 | **4.5** |
 
 
 Environment:  
@@ -293,4 +293,4 @@ script:
   from neural_compressor.utils.utility import recover
 
  quantized_model = recover(f32_model, 'nc_workspace/<tuning date>/history.snapshot', configuration_idx).model
-  ```
\ No newline at end of file
+  ```