ekalda commented on code in PR #16266:
URL: https://github.com/apache/tvm/pull/16266#discussion_r1433783966


##########
python/tvm/relay/backend/contrib/ethosu/legalize.py:
##########
@@ -176,27 +222,39 @@ def callback(self, pre: tvm.relay.Expr, post: tvm.relay.Expr, node_map: tvm.ir.c
         output_scale = float(params.ofm.q_params.scale_f32)
         output_zp = int(params.ofm.q_params.zero_point)
 
-        lut_values = get_lut_from_func(
-            input_scale,
-            input_zp,
-            output_scale,
-            output_zp,
-            self.calc_func,
-        )
-        lut = relay.const(lut_values, dtype=params.ifm.dtype)
+        if params.ifm.dtype == "int8":
+            lut_values = get_lut_from_func(
+                input_scale, input_zp, output_scale, output_zp, self.calc_func, np.int8
+            )
+            lut = relay.const(lut_values, dtype=params.ifm.dtype)
 
-        # We baked the requantization into the LUT, so we don't requantize the identity operator
-        identity = ethosu_ops.ethosu_identity(
-            ifm=params.ifm.tensor,
-            lut=lut,
-            ifm_scale=input_scale,
-            ifm_zero_point=input_zp,
-            ofm_scale=input_scale,
-            ofm_zero_point=input_zp,
-            activation=self.activation_type,
-        )
+            # We baked the requantization into the LUT, so we don't requantize the identity operator
+            identity = ethosu_ops.ethosu_identity(
+                ifm=params.ifm.tensor,
+                lut=lut,
+                ifm_scale=input_scale,
+                ifm_zero_point=input_zp,
+                ofm_scale=input_scale,
+                ofm_zero_point=input_zp,
+                activation=self.activation_type,
+            )
 
-        return identity
+            return identity
+        elif params.ifm.dtype == "int16":
+            lut_tanh = relay.const([], "int16")

Review Comment:
   Seems like this is adding an empty lookup table to the identity operator? 



##########
python/tvm/relay/backend/contrib/ethosu/tir_to_cs_translator.py:
##########
@@ -877,7 +877,7 @@ def _create_npu_activation(serial_activation: spec.SerialActivation) -> vapi.Npu
         return None
     op_map = {
         "CLIP": vapi.NpuActivationOp.NONE_OR_RELU,
-        "TANH": vapi.NpuActivationOp.TABLE_LOOKUP,
+        "TANH": vapi.NpuActivationOp.TANH,

Review Comment:
   This change would make it always use the NPU's built-in tanh function instead of the calculated lookup table values, so the results would no longer match the TFLite reference kernels, and all of the tanh LUT value calculation would become dead code.
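   For context, the int8 path computes the table on the host along these lines. This is a minimal sketch of the dequantize → apply → requantize idea behind `get_lut_from_func`; the function name `build_lut_sketch` and the exact rounding/clipping details are assumptions for illustration, not the actual TVM implementation:

   ```python
   import numpy as np

   def build_lut_sketch(input_scale, input_zp, output_scale, output_zp,
                        calc_func, dtype=np.int8):
       # Hypothetical sketch: for every representable input value,
       # dequantize it, apply the activation function on the host,
       # and requantize the result into the output quantization space.
       info = np.iinfo(dtype)
       values = []
       for x in range(info.min, info.max + 1):
           real = input_scale * (x - input_zp)             # dequantize
           out = calc_func(real)                           # e.g. np.tanh
           q = int(round(out / output_scale)) + output_zp  # requantize
           values.append(max(info.min, min(info.max, q)))  # clip to dtype range
       return np.array(values, dtype=dtype)
   ```

   Matching the TFLite reference kernels depends on these host-computed table values being loaded via `TABLE_LOOKUP`; mapping `"TANH"` to the NPU built-in bypasses them entirely.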



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
