d-smirnov commented on a change in pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#discussion_r460439414
##########
File path: tests/python/frontend/tflite/test_forward.py
##########
@@ -1850,7 +1850,7 @@ def _test_quantize_dequantize(data):
# First TFLite quantize op converts float32 tensor to int8 tensor - Qnn quantize.
# Second TFLite quantize op converts int8 tensor to int8 tensor - Qnn requantize.
data_in = tf.keras.layers.Input(shape=data.shape[1:])
- relu = tf.keras.layers.ReLU()(data_in)
+ relu = tf.keras.layers.ReLU()(data)
Review comment:
The idea is to use `data` as a constant parameter for the quantize operation,
which will be inserted at the quantization step.
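Because `data` is a constant rather than a placeholder, the resulting TFLite quantize op receives a constant float tensor as input. As a minimal sketch (hypothetical helper, not TVM or TFLite code), the affine int8 quantization such an op performs looks like this:

```python
def quantize_to_int8(values, scale, zero_point):
    """Map float values to int8 via q = round(v / scale) + zero_point, clamped.

    Illustrative only: the names and signature are assumptions, not the
    actual TFLite/TVM API.
    """
    quantized = []
    for v in values:
        q = round(v / scale) + zero_point
        q = max(-128, min(127, q))  # clamp to the int8 range
        quantized.append(q)
    return quantized
```

With `scale=0.0078125` and `zero_point=-128` (a common post-ReLU mapping covering roughly [0, 2)), the float constant `[0.0, 0.5, 1.0]` maps to `[-128, -64, 0]`.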
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -2726,7 +2726,13 @@ def convert_quantize(self, op):
assert len(input_tensors) == 1, "input tensors length should be 1"
input_tensor = input_tensors[0]
input_tensor_type_str = self.get_tensor_type_str(input_tensor.tensor.Type())
- in_expr = self.get_expr(input_tensor.tensor_idx)
+
+ if self.has_expr(input_tensor.tensor_idx):
+ in_expr = self.get_expr(input_tensor.tensor_idx)
+ else:
+ in_value = self.get_tensor_value(input_tensor)
Review comment:
DeepSpeech itself is here:
https://github.com/mozilla/DeepSpeech/releases/tag/v0.7.4
However, the model that was used was a quantised version from an internal
model zoo. I am not sure that I can share it fully.
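The diff above falls back to reading the tensor's value when no expression was produced for it by a preceding op (i.e. the quantize input is a graph constant such as weights). The pattern can be sketched with a toy converter; the class and tuple encodings here are illustrative assumptions, not the actual TVM frontend:

```python
class MiniConverter:
    """Toy stand-in for the TFLite frontend's expression cache."""

    def __init__(self, const_tensors):
        self.exprs = {}               # tensor_idx -> expression from an earlier op
        self.consts = const_tensors   # tensor_idx -> raw constant value

    def has_expr(self, idx):
        return idx in self.exprs

    def get_expr(self, idx):
        return self.exprs[idx]

    def get_tensor_value(self, idx):
        return self.consts[idx]

    def convert_quantize(self, idx):
        # Prefer an expression produced by a preceding op; otherwise the
        # input is a constant, so materialize it as a constant node.
        if self.has_expr(idx):
            in_expr = self.get_expr(idx)
        else:
            in_expr = ("const", self.get_tensor_value(idx))
        return ("qnn.quantize", in_expr)
```

This is why the quantised DeepSpeech model exercised the new branch: some quantize ops there take constants directly, which the old unconditional `get_expr` call could not handle.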
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]