FrozenGene commented on a change in pull request #4652: 
[Relay][Frontend][TFLite] Add parser support for squared difference
URL: https://github.com/apache/incubator-tvm/pull/4652#discussion_r367298709
 
 

 ##########
 File path: python/tvm/relay/frontend/tflite.py
 ##########
 @@ -726,6 +727,15 @@ def convert_greater(self, op):
                 'TFlite quantized greater operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
+    def convert_squared_difference(self, op):
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized squared difference operator is not supported yet.')
+        difference = self._convert_elemwise(_op.subtract, op)
+        out = _op.power(difference, relay.const(2, 'float32'))
 
 Review comment:
  It is better not to hard-code `relay.const(2, 'float32')`. Imagine the op's 
data is fp16 (in the future); then `2` shouldn't be `float32`. So we could do 
it like this: call 
`get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())` to get the 
type, then call `_op.power` with a constant of that type. Or, if you think 
that is too complex, you could simply call `_op.multiply(difference, 
difference)`; I think that is OK. I previously suggested `_op.power` because 
we could use a power intrinsic to handle it. However, after thinking twice, I 
don't think it would be a bottleneck.
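  The dtype concern can be illustrated outside of TVM with a small NumPy 
sketch (NumPy is used here only as a stand-in; the actual fix would use the 
TVM helpers named above). Self-multiplication keeps the input dtype by 
construction, while `power` stays dtype-safe only if the exponent constant 
matches the tensor's type:

```python
import numpy as np

# Pretend `d16` is the "difference" tensor, but in fp16 rather than fp32.
d16 = np.array([1.5, -2.0], dtype=np.float16)

# Option 1: multiply(difference, difference) -- never changes dtype,
# no constant needed, so it is safe for any input type.
sq_mul = d16 * d16
assert sq_mul.dtype == np.float16

# Option 2: power with a constant -- safe only because the exponent's
# dtype is taken from the tensor itself, mirroring the suggestion to
# read the type via get_tensor_type_str(...) before building the const.
sq_pow = np.power(d16, np.array(2, dtype=d16.dtype))
assert sq_pow.dtype == np.float16
```

A hard-coded `float32` exponent, by contrast, could silently promote an fp16 
tensor to fp32, which is exactly the situation the review asks to avoid.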

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services