[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r372449323
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,87 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
+        """Convert TFLite POW"""
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
 
 Review comment:
  This part shouldn't appear in your PR if you rebase onto master correctly. It 
seems the rebase wasn't handled correctly and this implementation was moved over 
manually. Please rebase your PR onto master following this guide: 
https://docs.tvm.ai/contribute/git_howto.html
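The rebase flow from the linked guide can be sketched end to end in a throwaway repository (branch names and paths here are placeholders; a real PR would rebase against the `upstream` remote's master and then `git push --force-with-lease` to the fork):

```shell
set -e
# Throwaway repo standing in for a fork of the project.
repo="$(mktemp -d)/demo"
git init -q "$repo" && cd "$repo"
git config user.email "dev@example.com" && git config user.name "dev"

echo base > file.txt && git add file.txt && git commit -qm "initial commit"
main_branch="$(git symbolic-ref --short HEAD)"

# Feature branch carrying the PR's work.
git checkout -qb relational-ops
echo feature > ops.txt && git add ops.txt && git commit -qm "add relational ops"

# Meanwhile, "upstream" master moves forward.
git checkout -q "$main_branch"
echo upstream > other.txt && git add other.txt && git commit -qm "upstream change"

# Rebase the PR branch so it replays cleanly on top of master and
# contains no stale, manually copied code.
git checkout -q relational-ops
git rebase -q "$main_branch"
git log --oneline | head -n 3
```

After the rebase, the feature branch contains both the upstream change and the feature commit, with no duplicated hunks in the PR diff.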


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-29 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r372450103
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -843,6 +843,13 @@ def _test_pow(data):
     """ One iteration of power """
     return _test_elemwise(math_ops.pow, data)
 ###
+# Squared_difference
+# --
+
+def _test_squared_difference(data):
 
 Review comment:
  Same problem as the previous comment. Please correct it.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r371276409
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,77 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized SQUARED_DIFFERENCE operator is not supported yet.')
+        difference = self._convert_elemwise(_op.subtract, op)
+        # _convert_elemwise guarantees there is only one output tensor
+        exp_type = self.get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())
+        out = _op.power(difference, relay.const(2, exp_type))
+        return out
+
     def convert_maximum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized maximum operator is not supported yet.')
+                'TFlite quantized MAXIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.maximum, op)
 
     def convert_minimum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized minimum operator is not supported yet.')
+                'TFlite quantized MINIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.minimum, op)
 
     def convert_greater(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized greater operator is not supported yet.')
+                'TFlite quantized GREATER operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
-    def convert_squared_difference(self, op):
-        # Check if the input tensor is quantized, call QNN op
+    def convert_greater_equal(self, op):
 
 Review comment:
  Add such a docstring to every elemwise function.
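The SQUARED_DIFFERENCE lowering in the diff above decomposes the op into an elementwise subtract followed by raising to the power of two. A minimal plain-Python sketch of that decomposition (illustrative only, not the TVM converter):

```python
def squared_difference(lhs, rhs):
    # Mirror the converter's lowering: elementwise subtract,
    # then raise each difference to the power 2.
    return [(x - y) ** 2 for x, y in zip(lhs, rhs)]

print(squared_difference([1.0, 2.0, 3.0], [3.0, 1.0, 3.0]))  # -> [4.0, 1.0, 0.0]
```

In the Relay version, the exponent constant must carry the output tensor's dtype, which is why the converter queries `get_tensor_type_str` before building `relay.const(2, exp_type)`.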




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-21 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r368906216
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,77 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized SQUARED_DIFFERENCE operator is not supported yet.')
+        difference = self._convert_elemwise(_op.subtract, op)
+        # _convert_elemwise guarantees there is only one output tensor
+        exp_type = self.get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())
+        out = _op.power(difference, relay.const(2, exp_type))
+        return out
+
     def convert_maximum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized maximum operator is not supported yet.')
+                'TFlite quantized MAXIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.maximum, op)
 
     def convert_minimum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized minimum operator is not supported yet.')
+                'TFlite quantized MINIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.minimum, op)
 
     def convert_greater(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized greater operator is not supported yet.')
+                'TFlite quantized GREATER operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
-    def convert_squared_difference(self, op):
-        # Check if the input tensor is quantized, call QNN op
+    def convert_greater_equal(self, op):
 
 Review comment:
  One last nitpicky comment :-) The function's docstring has been omitted. For 
example, `""" Convert TFLite greater equal op """`, like the other functions.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-18 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r368262724
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -616,6 +621,36 @@ def convert_greater(self, op):
                 'TFlite quantized greater operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
+    def convert_greater_equal(self, op):
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized GREATER_EQUAL operator is not supported yet.')
 
 Review comment:
  It seems we have two styles here: some code uses `lowercase` in the error 
message, like `convert_greater`, and some uses `uppercase`, like `convert_neg`. 
I think we should unify them to `uppercase`, for two reasons: 1. it corresponds 
to our `convert_map` / TFLite operator string representation; 2. it lets us spot 
the operator more quickly in one sentence, as @inadob said. So @inadob, could 
you help update the other code to `uppercase` too? Thanks.
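One way to keep the messages consistent is to build them from a single template keyed by the uppercase operator name (a sketch with an assumed helper function, not TVM's actual code):

```python
def quantized_not_supported_msg(op_name):
    # Uppercase the operator name so the message matches the TFLite
    # operator string representation used as keys in convert_map.
    return "TFlite quantized %s operator is not supported yet." % op_name.upper()

print(quantized_not_supported_msg("greater_equal"))
# -> TFlite quantized GREATER_EQUAL operator is not supported yet.
```

With such a helper, every `convert_*` method raises the same message shape and the lowercase/uppercase drift cannot recur.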




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-18 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r368262762
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -722,6 +722,42 @@ def _test_minimum(data):
 def _test_greater(data):
     """ One iteration of greater """
     return _test_elemwise(math_ops.greater, data)
+###
+# Greater_equal
+# -
+
+def _test_greater_equal(data):
+    """ One iteration of greater_equal """
+    return _test_elemwise(math_ops.greater_equal, data)
+###
+# Less
+# 
+
+def _test_less(data):
+    """ One iteration of less """
+    return _test_elemwise(math_ops.less, data)
+###
+# Less_equal
+# --
+
+def _test_less_equal(data):
+    """ One iteration of less_equal """
+    return _test_elemwise(math_ops.less_equal, data)
+###
+# Equal
+# -
+
+def _test_equal(data):
+    """ One iteration of equal """
+    return _test_elemwise(math_ops.equal, data)
+###
+# Not_equal
+# -
+
+def _test_not_equal(data):
+    """ One iteration of not_equal """
+    return _test_elemwise(math_ops.not_equal, data)
+###
 
 Review comment:
  Yes, one redundant line. Please remove it.

