azhurkevich commented on code in PR #22131:
URL: https://github.com/apache/beam/pull/22131#discussion_r935062001


##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -154,6 +169,52 @@ This writes the output to the `predictions.csv` with 
contents like:
 ```
 Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of predicted instances.
 
+---
+## Object Detection
+
+[`tensorrt_object_detection.py`](./tensorrt_object_detection.py) contains an implementation of a RunInference pipeline that performs object detection using [TensorFlow Object Detection's](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) SSD MobileNet v2 320x320 architecture.
+
+The pipeline reads the images, performs basic preprocessing, passes them to 
the TensorRT implementation of RunInference, and then writes the predictions to 
a text file.
+
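+A minimal sketch of such a pipeline is shown below. It is not the exact code of `tensorrt_object_detection.py`: the handler name `TensorRTEngineHandlerNumPy`, its `engine_path` argument, and the `read_and_preprocess_image` helper are assumptions based on this PR and may differ from the final API.
+
+```
+import io
+
+import apache_beam as beam
+import numpy as np
+from apache_beam.io.filesystems import FileSystems
+from apache_beam.ml.inference.base import RunInference
+from apache_beam.ml.inference.tensorrt_inference import TensorRTEngineHandlerNumPy
+from PIL import Image
+
+
+def read_and_preprocess_image(image_path):
+  # Hypothetical helper: load the image and resize it to the 320x320
+  # input that the SSD MobileNet v2 320x320 engine expects.
+  with FileSystems.open(image_path) as f:
+    image = Image.open(io.BytesIO(f.read())).convert('RGB').resize((320, 320))
+  return np.asarray(image, dtype=np.uint8)
+
+
+# Handler name and arguments are assumptions based on this PR.
+engine_handler = TensorRTEngineHandlerNumPy(
+    min_batch_size=1,
+    max_batch_size=1,  # the engine was built with batch size 1
+    engine_path='path/to/trt.engine')  # placeholder path
+
+with beam.Pipeline() as pipeline:
+  _ = (
+      pipeline
+      | 'ReadImagePaths' >> beam.io.ReadFromText('path/to/image_file_names.txt')
+      | 'Preprocess' >> beam.Map(read_and_preprocess_image)
+      | 'TensorRTRunInference' >> RunInference(engine_handler)
+      | 'FormatPredictions' >> beam.Map(str)
+      | 'WritePredictions' >> beam.io.WriteToText('path/to/output.txt'))
+```
+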
+### Dataset and model for object detection
+
+You will need to create or download images and place them into your `IMAGES_DIR` directory. A popular dataset for this task is the [COCO dataset](https://cocodataset.org/#home). The COCO validation dataset can be obtained [here](http://images.cocodataset.org/zips/val2017.zip).
+- **Required**: A path to a file called `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` on which you want to run object detection. Paths can be different types of URIs, such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
+```
+/absolute/path/to/000000000139.jpg
+/absolute/path/to/000000289594.jpg
+```
+- **Required**: A path to a file called `TRT_ENGINE` that contains the pre-built TensorRT engine for the SSD MobileNet v2 320x320 model. You will need to [follow the instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download this SSD model and convert it into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. Once you have finished the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input; a sketch of these steps follows this list. In addition, make sure that the environment you use to create the TensorRT engine is the same environment you use to run TensorRT inference. This applies not only to the TensorRT version, but also to the specific GPU used. Read more about it [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
+
+- **Required**: A path to a file called `OUTPUT`, to which the pipeline will 
write the predictions.
+- **Optional**: `IMAGES_DIR`, which is the path to the directory where images are stored. Not required if the image names in the input file `IMAGE_FILE_NAMES` are absolute paths.
+
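+The download-and-convert flow in the NVIDIA sample boils down to two scripts. The sketch below is only an outline; the flag names are taken from that sample's documentation as of this writing and should be verified against it:
+
+```
+git clone https://github.com/NVIDIA/TensorRT.git
+cd TensorRT/samples/python/tensorflow_object_detection_api
+
+# Create the ONNX graph; keep the batch size at 1.
+python create_onnx.py \
+    --pipeline_config path/to/pipeline.config \
+    --saved_model path/to/saved_model \
+    --batch_size 1 \
+    --onnx path/to/model.onnx
+
+# Build the TensorRT engine; the result is your TRT_ENGINE input.
+python build_engine.py \
+    --onnx path/to/model.onnx \
+    --engine path/to/trt.engine
+```
+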
+### Running `tensorrt_object_detection.py`
+
+To run the object detection pipeline locally, use the following command (the flag names below are an illustrative sketch; see `tensorrt_object_detection.py` for the exact arguments):
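+```
+# Illustrative sketch: the module path matches this PR, but the flag
+# names are assumptions; check the example's argument parser.
+python -m apache_beam.examples.inference.tensorrt_object_detection \
+  --input path/to/IMAGE_FILE_NAMES \
+  --trt_engine path/to/TRT_ENGINE \
+  --output path/to/OUTPUT
+```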

Review Comment:
    @yeandy Thanks for this crystal clear explanation! In the end, I just want to make sure that whatever I do is compliant with your vision. From a high level it looks good to me; [I added a small suggestion](https://github.com/azhurkevich/beam/pull/1#issuecomment-1201944306). We can merge after it is addressed.
   I have diving deeper into GCP and figuring it out planned for this week. I am a complete rookie, so if you make any Dataflow suggestions, please be specific so that I can understand.


