jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-397143625
@marcoabreu @zheng-da Thanks for the approval! Regarding how to check
different backend contexts: if we are not in an agre…
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-396109115
@marcoabreu @zheng-da A kind reminder. Would you help to review the latest
diff? Thanks.
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-395630010
@marcoabreu I used my proposed approach 1 to test all cases, skipping and
printing the cases that are expected to be skipped; pleas…
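The "run everything, skip and print known-unsupported cases" pattern described above can be sketched as follows. The combination lists, the `KNOWN_UNSUPPORTED` set, and the `run_case` stub are hypothetical stand-ins for illustration, not the actual MXNet test code:

```python
# Sketch of "approach 1": run every (dtype, context) combination and
# explicitly skip -- with a printed message -- the ones known to be
# unsupported. All names here are illustrative stand-ins.

# Hypothetical set of combinations a backend cannot handle.
KNOWN_UNSUPPORTED = {
    ("int8", "native_cpu"),   # e.g. native CPU lacking an int8 kernel
}

def run_case(dtype, context):
    """Stand-in for the real quantization test body."""
    return "ok ({} on {})".format(dtype, context)

def run_all():
    results = []
    for dtype in ["int8", "uint8"]:
        for context in ["native_cpu", "mkldnn_cpu", "gpu"]:
            if (dtype, context) in KNOWN_UNSUPPORTED:
                print("SKIP: {} not supported on {}".format(dtype, context))
                continue
            results.append(run_case(dtype, context))
    return results
```

The advantage of this variant is that every skip is deliberate and visible in the test log, so a silently missing kernel cannot masquerade as a pass.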
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-395478506
@marcoabreu I tried to use the following "try .. catch .." statement to test
native CPU, MKLDNN, and GPU:
```
for dtype in ['i
```
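A minimal sketch of that try/catch idea (in Python it becomes try/except): attempt every combination and treat a failure raised by the operator as "unsupported, skip". The `quantize_op` stub and the dtype/context lists below are assumptions for illustration, not the real MXNet API (real code would catch `mxnet.base.MXNetError`):

```python
# Sketch of the try/except variant: instead of maintaining a skip list,
# attempt each combination and catch the error an unsupported backend
# raises. quantize_op is a hypothetical stand-in for the real operator.

def quantize_op(dtype, context):
    if (dtype, context) == ("int8", "native_cpu"):
        # Real code would raise mxnet.base.MXNetError here.
        raise NotImplementedError("int8 not supported on native CPU")
    return "quantized as {} on {}".format(dtype, context)

def test_all():
    passed, skipped = [], []
    for dtype in ["int8", "uint8"]:
        for context in ["native_cpu", "mkldnn_cpu", "gpu"]:
            try:
                passed.append(quantize_op(dtype, context))
            except NotImplementedError as err:
                print("SKIP:", err)
                skipped.append((dtype, context))
    return passed, skipped
```

The trade-off versus the explicit skip list is that a genuine bug raising the same error type would be skipped rather than reported, which is why a curated skip list was proposed as approach 1.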
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-395122305
@marcoabreu Please let me know if I misunderstood your points, and please
also let us know if there is a better suggested …
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-395100685
@marcoabreu Sorry, maybe I didn't explain it very clearly. For the current
quantization test, besides the dtype difference betwe…
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-393739496
@marcoabreu This PR has been approved by @reminisce, and @zheng-da will
finish the review very soon; would you help to tak…
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-393196924
@reminisce @zheng-da We have resolved all the comments; would you help check
whether you have further comments on the cha…
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-390582611
@zheng-da Currently the int8 performance is not as good as FP32; we are
planning to add several enhancements to improve perf…
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-389866702
@reminisce @zheng-da We have resolved all the comments; would you check
whether you have further comments on the current change?
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-384533399
@zheng-da To correct my earlier statement: for the mkldnn_OIhw4i16o4i change
in mkldnn_base.cc, this format is needed by int8, otherwise it will …
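For readers unfamiliar with the tag, mkldnn_OIhw4i16o4i names a blocked weight layout read outer-to-inner: plain O, I, h, w block dimensions followed by inner blocks of 4 input channels, 16 output channels, and 4 input channels (so O is tiled by 16 and I by 4*4 = 16 overall). The offset function below is my reading of that tag name, a hedged illustration rather than MKL-DNN's actual implementation:

```python
# Illustrative flat-offset computation for the OIhw4i16o4i blocked
# layout (weights of shape O x I x H x W). This follows the tag name
# read outer-to-inner; it is a sketch, not MKL-DNN source code.

def oihw4i16o4i_offset(o, i, h, w, O, I, H, W):
    assert O % 16 == 0 and I % 16 == 0, "dims assumed padded to the blocks"
    outer = (o // 16) * (I // 16) + (i // 16)   # O/16 and I/16 block indices
    outer = (outer * H + h) * W + w             # spatial dims
    inner = ((i % 16) // 4) * (16 * 4) \
            + (o % 16) * 4 \
            + (i % 4)                           # inner order: 4i, 16o, 4i
    return outer * (4 * 16 * 4) + inner         # 256 elements per block
```

Keeping 4 consecutive input channels adjacent in the innermost dimension is what lets the int8 convolution kernels accumulate 4 int8 products per instruction, which is presumably why this format is required for the int8 path.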
jinhuang415 commented on issue #10433: [MXNET-290] MKLDNN support for model
quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#issuecomment-383627507
@marcoabreu Sure, will do.