mbrookhart commented on a change in pull request #7613:
URL: https://github.com/apache/tvm/pull/7613#discussion_r590691319



##########
File path: python/tvm/topi/nn/qnn.py
##########
@@ -0,0 +1,186 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Quantized Neural Network (QNN) Operators"""
+import tvm
+from tvm import te, tir, topi
+
+SQNN_FP32 = 0
+SQNN_INT8 = 1
+SQNN_UINT8 = 2
+SQNN_INT32 = 3
+
+SQNN_DTYPE_TO_CODE = {
+    "float32": SQNN_FP32,
+    "int8": SQNN_INT8,
+    "uint8": SQNN_UINT8,
+    "int32": SQNN_INT32,
+}
+
+SQNN_CODE_TO_DTYPE = {v: k for k, v in SQNN_DTYPE_TO_CODE.items()}
+
+
+@tvm.te.tag_scope(tag=topi.tag.ELEMWISE)
+def simulated_quantize(data, out_dtype, output_scale=None, output_zero_point=None, axis=-1):
+    """Simulated QNN quantize operator that mimics QNN outputs in floating point. The benefit
+    of this operator over true QNN quantize is that this operator allows dynamic datatype
+    selection and can operate on both per-channel and scalar scales and zero points while
+    QNN quantize requires both of these to be fixed at compile time.
+
+    Parameters
+    ----------
+    data: tvm.te.Tensor
+        An N-D input tensor to the operator.
+
+    out_dtype: tvm.te.Tensor
+        A scalar variable that indicates which datatype to simulate quantization with. Use
+        SQNN_DTYPE_TO_CODE to convert a dtype string into the corresponding variable value.
+
+    output_scale: tvm.te.Tensor, optional
+        A scalar tensor representing the scale to use when quantizing to integer datatypes.
+        When it contains more than a single value, N must match the number of channels in data.
+
+    output_zero_point: tvm.te.Tensor, optional
+        A 1-D tensor representing the zero point to use when quantizing to integer datatypes.
+        When it contains more than a single value, N must match the number of channels in data.
+
+    axis: int, optional
+        The channel axis for quantization. Default value is -1 which corresponds to the last axis.
+
+    """
+    # Since all simulated outputs are in float32, we can just return the input tensor for fp32.
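As context for the comment below, the quantization this operator simulates can be sketched in NumPy, assuming the standard affine formula `clip(round(x / scale) + zero_point, qmin, qmax)`; the function name and defaults here are illustrative, not part of the TVM API:

```python
import numpy as np

# Reference sketch of simulated int8 quantization, assuming the standard
# affine formula clip(round(x / scale) + zero_point, qmin, qmax).
# The output stays float32, which is the point of the "simulated" op.
def simulated_quantize_ref(data, scale, zero_point, qmin=-128, qmax=127):
    q = np.round(data / scale) + zero_point
    return np.clip(q, qmin, qmax).astype("float32")

x = np.array([0.0, 0.5, 1.0, -2.0], dtype="float32")
y = simulated_quantize_ref(x, scale=0.5, zero_point=0)
# quantized values: 0, 1, 2, -4 (still stored as float32)
```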

Review comment:
       I don't think we should be doing this based on dtype. As you mentioned in the type relation function, we might want to pass in something that isn't float32 and run this against its own dtype. What's wrong with allowing the user to pass in scale=1, zp=0, dtype=data.dtype if they want a passthrough?
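The passthrough suggested here can be sketched as follows. This is a hypothetical NumPy stand-in for the op, not the TVM implementation; it shows that with scale=1, zero point 0, and saturation bounds wide enough for the input's own dtype, integral data passes through unchanged, so no dtype-based branch is required:

```python
import numpy as np

# Hypothetical stand-in for the op (not the TVM implementation), assuming
# the affine formula clip(round(x / scale) + zero_point, qmin, qmax).
def simulated_quantize_ref(data, scale, zero_point, qmin, qmax):
    q = np.round(data / scale) + zero_point
    return np.clip(q, qmin, qmax).astype("float32")

# With scale=1, zp=0, and bounds covering the full float32 range,
# already-integral values are an identity passthrough.
x = np.array([1.0, 2.0, -3.0], dtype="float32")
out = simulated_quantize_ref(x, scale=1.0, zero_point=0.0,
                             qmin=np.finfo("float32").min,
                             qmax=np.finfo("float32").max)
assert np.array_equal(out, x)
```

Note that `round` still snaps non-integral inputs, so a true float passthrough in the real op would skip rounding entirely; the sketch only demonstrates the identity for integral data.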




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

