yeandy opened a new issue, #23756:
URL: https://github.com/apache/beam/issues/23756

   ### What happened?
   
   I am trying to use the TensorFlow MobileNet v2 320x320 
[model](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md)
 in RunInference via the [API from 
tfx_bsl](https://github.com/tensorflow/tfx-bsl/blob/master/tfx_bsl/public/beam/run_inference.py).
   
   However, I'm running into the error below (see this 
[PR](https://github.com/apache/beam/pull/23754) for context). It looks like a 
signature dtype mismatch: the handler expects string input (dtype enum value 
`7`), but the input `dtype` of the MobileNet model is `uint8` (dtype enum 
value `4`).
   
   ```
   2022-10-20 11:40:58.577845: I 
tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is 
optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following 
CPU instructions in performance-critical operations:  AVX2 FMA
   To enable them in other operations, rebuild TensorFlow with the appropriate 
compiler flags.
   Traceback (most recent call last):
     File "/Users/yeandy/.pyenv/versions/3.8.9/lib/python3.8/runpy.py", line 
194, in _run_module_as_main
       return _run_code(code, main_globals, None,
     File "/Users/yeandy/.pyenv/versions/3.8.9/lib/python3.8/runpy.py", line 
87, in _run_code
       exec(code, run_globals)
     File 
"/Users/yeandy/projects/beam/sdks/python/apache_beam/examples/inference/tensorflow_object_detection.py",
 line 260, in <module>
       run()
     File 
"/Users/yeandy/projects/beam/sdks/python/apache_beam/examples/inference/tensorflow_object_detection.py",
 line 233, in run
       tf_model_handler = CreateModelHandler(inference_spec_type)
     File 
"/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/public/beam/run_inference.py",
 line 281, in CreateModelHandler
       return run_inference.create_model_handler(inference_spec_type, None, 
None)
     File 
"/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py",
 line 125, in create_model_handler
       return _get_saved_model_handler(inference_spec_type, load_override_fn)
     File 
"/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py",
 line 243, in _get_saved_model_handler
       return _PredictModelHandler(inference_spec_type, load_override_fn)
     File 
"/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py",
 line 586, in __init__
       self._io_tensor_spec = self._make_io_tensor_spec()
     File 
"/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py",
 line 626, in _make_io_tensor_spec
       raise ValueError(
   ValueError: Input dtype is expected to be 7, got 4
   ```
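   For anyone hitting the same error, the numeric codes in `Input dtype is expected to be 7, got 4` come from TensorFlow's `DataType` enum (defined in `tensorflow/core/framework/types.proto`). A minimal sketch that decodes them, with the mapping hard-coded from that proto (the `decode_dtype` helper is hypothetical, not part of tfx_bsl):

   ```python
   # DataType enum values copied from tensorflow/core/framework/types.proto.
   DT_NAMES = {
       1: "DT_FLOAT",
       2: "DT_DOUBLE",
       3: "DT_INT32",
       4: "DT_UINT8",
       5: "DT_INT16",
       6: "DT_INT8",
       7: "DT_STRING",
       8: "DT_COMPLEX64",
       9: "DT_INT64",
       10: "DT_BOOL",
   }

   def decode_dtype(enum_value: int) -> str:
       """Map a DataType enum value to its proto name."""
       return DT_NAMES.get(enum_value, f"unknown ({enum_value})")

   # The two values from the ValueError above:
   print(decode_dtype(7), decode_dtype(4))  # DT_STRING DT_UINT8
   ```

   If TensorFlow is installed, `tf.dtypes.as_dtype(4).name` should give the same answer without the hand-written table. So the handler appears to assume serialized (string) inputs, e.g. `tf.Example` protos, while this SavedModel's serving signature takes raw `uint8` image tensors.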
   
   Original command:
   ```
   python -m apache_beam.examples.inference.tensorflow_object_detection \
   --input gs://apache-beam-ml/testing/inputs/tensorrt_image_file_names.txt \
   --output tf_predictions_val2017.txt \
   --model_path 
gs://apache-beam-testing-yeandy/tfx-inference/model/ssd_mobilenet_v2_320x320_coco17_tpu-8/
   ```
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: run-inference

