yeandy commented on code in PR #22131:
URL: https://github.com/apache/beam/pull/22131#discussion_r934942380


##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -154,6 +169,52 @@ This writes the output to the `predictions.csv` with contents like:
 ```
 Each line has data separated by a semicolon ";". The first item is the file name. The second item is a list of predicted instances.
 
+---
+## Object Detection
+
+[`tensorrt_object_detection.py`](./tensorrt_object_detection.py) contains an implementation of a RunInference pipeline that performs object detection using [TensorFlow Object Detection's](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) SSD MobileNet v2 320x320 architecture.
+
+The pipeline reads the images, performs basic preprocessing, passes them to the TensorRT implementation of RunInference, and then writes the predictions to a text file.
+
+### Dataset and model for object detection
+
+You will need to create or download images, and place them into your `IMAGES_DIR` directory. A popular dataset for this task is the [COCO dataset](https://cocodataset.org/#home). The COCO validation dataset can be obtained [here](http://images.cocodataset.org/zips/val2017.zip).
+- **Required**: A path to a file called `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` on which you want to run object detection. Paths can be different types of URIs, such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
+```
+/absolute/path/to/000000000139.jpg
+/absolute/path/to/000000289594.jpg
+```
+- **Required**: A path to a file called `TRT_ENGINE` that contains the pre-built TensorRT engine for the SSD MobileNet v2 320x320 model. You will need to [follow the instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download and convert this SSD model into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. Once you are done with the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input. In addition, make sure that the environment you use to build the TensorRT engine is the same environment you use to run TensorRT inference; this depends not only on the TensorRT version, but also on the specific GPU used. Read more about it [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
+
+- **Required**: A path to a file called `OUTPUT`, to which the pipeline will write the predictions.
+- **Optional**: `IMAGES_DIR`, which is the path to the directory where images are stored. Not required if the image names in the input file `IMAGE_FILE_NAMES` have absolute paths.
+
+### Running `tensorrt_object_detection.py`
+
+To run the object detection pipeline locally, use the following command:

Review Comment:
   Update:
   
   I'm able to kick off a Dataflow job for the `tensorrt_object_detection` pipeline from my local machine. It runs successfully, reading an input txt file and a TRT engine file from GCS and writing predictions to GCS, with no issues.
   
   There may be cleaner ways to do this, so please let me know if you have any ideas. Here are the changes (a rough sketch of changes 1-4 follows the list).
   
   1. Move any `tensorrt` or `cuda` imports inside helper functions
   2. Wrap `TRT_LOGGER = trt.Logger(trt.Logger.INFO)` inside a try/except block so that job submission works.
   3. Move `trt.init_libnvinfer_plugins(TRT_LOGGER, namespace="")` from the `__init__` function of `TensorRTEngineHandlerNumPy` to `load_model`. `trt.init_libnvinfer_plugins(TRT_LOGGER, namespace="")` needs to be called once, but we can't leave it in `__init__` because the logic there runs at job submission time, and `tensorrt` isn't guaranteed to be installed locally.
   4. Add `from __future__ import annotations` to allow type checking to pass. Otherwise, we see `NameError: name 'trt' is not defined` on this line: `def __init__(self, engine: trt.ICudaEngine):`
   5. Slightly modified the Dockerfile to reintroduce `ENTRYPOINT [ "/opt/apache/beam/boot" ]` so that the pipeline could start.
   
   In addition, one thing we need to do is to be able to rebuild the TRT engine from the ONNX file (already in GCS) every time. For this job, I did the engine building on a separate VM. The next thing to do is to create a job -- probably a Dataflow one -- to build the engine before running the actual test (a rough sketch of such a build step is below).
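   
   For reference, a standalone engine-build step could look roughly like this. It is an illustrative sketch using the TensorRT Python API; the file names are placeholders, the builder settings are assumptions, and the exact calls vary between TensorRT versions, so the NVIDIA sample linked in the README remains the authoritative recipe:
   
   ```python
   # Illustrative sketch: build a TensorRT engine from an exported ONNX graph.
   import tensorrt as trt

   TRT_LOGGER = trt.Logger(trt.Logger.INFO)

   def build_engine(onnx_path: str, engine_path: str) -> None:
     builder = trt.Builder(TRT_LOGGER)
     network = builder.create_network(
         1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
     parser = trt.OnnxParser(network, TRT_LOGGER)
     with open(onnx_path, 'rb') as f:
       if not parser.parse(f.read()):
         errors = [str(parser.get_error(i)) for i in range(parser.num_errors)]
         raise RuntimeError('Failed to parse ONNX graph: %s' % errors)
     config = builder.create_builder_config()
     serialized_engine = builder.build_serialized_network(network, config)
     with open(engine_path, 'wb') as f:
       f.write(serialized_engine)

   # Placeholder file names; in practice these would point at the GCS artifacts.
   build_engine('ssd_mobilenet_v2.onnx', 'ssd_mobilenet_v2.trt')
   ```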
   
   Please see this [PR](https://github.com/azhurkevich/beam/pull/1) with my 
changes, and let me know what you think.
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
