anijain2305 commented on a change in pull request #5848:
URL: https://github.com/apache/incubator-tvm/pull/5848#discussion_r445326022
##########
File path: python/tvm/relay/frontend/tflite.py
##########
@@ -262,21 +298,25 @@ def get_tensor_value(self, tensor_wrapper):
except ImportError:
raise ImportError("The tflite package must be installed")
+ data = tensor_wrapper.buffer.DataAsNumpy()
+ shape = tensor_wrapper.tensor.ShapeAsNumpy()
+
+ # Set shape to 1 if the data is a scalar type
+ if data.shape == (1,) and isinstance(shape, int) and shape == 0:
+ shape = (1,)
+
+ if tensor_wrapper.tensor.Type() == TensorType.INT8:
+ return np.frombuffer(data, dtype=np.int8).reshape(shape)
if tensor_wrapper.tensor.Type() == TensorType.UINT8:
- return np.frombuffer(tensor_wrapper.buffer.DataAsNumpy(), dtype=np.uint8).reshape(
- tensor_wrapper.tensor.ShapeAsNumpy())
- if tensor_wrapper.tensor.Type() == TensorType.FLOAT32:
- return np.frombuffer(tensor_wrapper.buffer.DataAsNumpy(), dtype=np.float32).reshape(
- tensor_wrapper.tensor.ShapeAsNumpy())
+ return np.frombuffer(data, dtype=np.uint8).reshape(shape)
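(A minimal standalone sketch of the scalar handling the diff adds: TFLite's flatbuffer metadata reports a scalar tensor's shape as the integer 0 while its buffer still holds one element, so the code normalizes the shape to `(1,)` before reshaping. The `raw_buffer`/`reported_shape` names below are illustrative, not from the PR.)

```python
import numpy as np

# Illustrative stand-ins for tensor_wrapper.buffer.DataAsNumpy() and
# tensor_wrapper.tensor.ShapeAsNumpy() on a scalar int8 tensor.
raw_buffer = np.array([7], dtype=np.int8).tobytes()
reported_shape = 0  # ShapeAsNumpy() returns the int 0 for scalars

# Normalize the scalar shape, mirroring the check in the diff.
if isinstance(reported_shape, int) and reported_shape == 0:
    reported_shape = (1,)

value = np.frombuffer(raw_buffer, dtype=np.int8).reshape(reported_shape)
print(value)  # -> [7]
```

Without the normalization, `reshape(0)` would request a zero-element array and raise a `ValueError` for any non-empty buffer.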
Review comment:
Samuel, that's a good point. I looked into int16 support; it is
experimental in TFLite for now.
For float16, I think we will need a good number of tests, so we can
handle float16 in a separate PR.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]