xinyu-intel commented on a change in pull request #15910: [Quantization]support exclude operators while quantization
URL: https://github.com/apache/incubator-mxnet/pull/15910#discussion_r315043467
 
 

 ##########
 File path: include/mxnet/c_api.h
 ##########
 @@ -1909,16 +1909,22 @@ MXNET_DLL int MXSymbolInferTypePartial(SymbolHandle sym,
  * \brief Convert a symbol into a quantized symbol where FP32 operators are replaced with INT8
  * \param sym_handle symbol to be converted
  * \param ret_sym_handle quantized symbol result
 - * \param num_excluded_symbols number of layers excluded from being quantized in the input symbol
 - * \param excluded_symbols op names to be excluded from being quantized
 + * \param dev_type device type
 + * \param num_excluded_sym_names number of layers excluded from being quantized in the input symbol
 + * \param excluded_sym_names node names to be excluded from being quantized
 + * \param num_excluded_op_names number of operators excluded from being quantized in the input symbol
 + * \param excluded_op_names operator names to be excluded from being quantized
 
 Review comment:
   Some models may define layer names with a model-specific style, so it's not easy to group these two functions. :(
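
   The distinction between the two exclusion lists can be sketched as follows. This is an illustrative sketch only, not code from the PR: the helper name, node names, and operator names below are all hypothetical, but the intended semantics is that a node is skipped if either its *node name* appears in `excluded_sym_names` or its *operator type* appears in `excluded_op_names`.

   ```python
   def should_quantize(node_name, op_name, excluded_sym_names, excluded_op_names):
       """Return True if a node may be replaced by its INT8 counterpart."""
       if node_name in excluded_sym_names:  # exclude by layer/node name
           return False
       if op_name in excluded_op_names:     # exclude by operator type
           return False
       return True

   # Hypothetical graph: (node name, operator type) pairs.
   nodes = [("conv0", "Convolution"), ("fc0", "FullyConnected"), ("relu0", "Activation")]
   excluded_sym = {"fc0"}          # skip this specific layer by name
   excluded_ops = {"Activation"}   # skip every node of this operator type

   quantized = [name for name, op in nodes
                if should_quantize(name, op, excluded_sym, excluded_ops)]
   # quantized == ["conv0"]
   ```

   Because layer names are free-form while operator names are fixed by the framework, the two lists filter on different keys and cannot be collapsed into one without losing one of the two behaviors.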
