yeandy commented on code in PR #22131:
URL: https://github.com/apache/beam/pull/22131#discussion_r958819406


##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -160,6 +175,52 @@ This writes the output to the `predictions.csv` with 
contents like:
 ```
 Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of predicted instances.
 
+---
+## Object Detection
+
+[`tensorrt_object_detection.py`](./tensorrt_object_detection.py) contains an implementation of a RunInference pipeline that performs object detection using [TensorFlow Object Detection's](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) SSD MobileNet v2 320x320 architecture.
+
+The pipeline reads the images, performs basic preprocessing, passes them to 
the TensorRT implementation of RunInference, and then writes the predictions to 
a text file.
+
+### Dataset and model for object detection
+
+You will need to create or download images, and place them into your `IMAGES_DIR` directory. A popular dataset for this task is the [COCO dataset](https://cocodataset.org/#home). The COCO validation dataset can be obtained [here](http://images.cocodataset.org/zips/val2017.zip).
+- **Required**: A path to a file called `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` on which you want to run object detection. Paths can be different types of URIs such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
+```
+/absolute/path/to/000000000139.jpg
+/absolute/path/to/000000289594.jpg
+```
+- **Required**: A path to a file called `TRT_ENGINE` that contains the pre-built TensorRT engine from the SSD MobileNet v2 320x320 model. You will need to [follow the instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download and convert this SSD model into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. Once you are done with the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input. In addition, make sure that the environment you use to create the TensorRT engine is the same environment you use to run TensorRT inference. This applies not only to the TensorRT version but also to the specific GPU used. Read more [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
+
+- **Required**: A path to a file called `OUTPUT`, to which the pipeline will 
write the predictions.
+- **Optional**: `IMAGES_DIR`, which is the path to the directory where images 
are stored. Not required if image names in the input file `IMAGE_FILE_NAMES` 
have absolute paths.
+
+### Running `tensorrt_object_detection.py`
+
+To run the object detection pipeline locally, use the following command:
+```sh
+python -m apache_beam.examples.inference.tensorrt_object_detection \
+  --input IMAGE_FILE_NAMES \
+  --images_dir IMAGES_DIR \
+  --output OUTPUT \
+  --engine_path TRT_ENGINE
+```
+For example:
+```sh
+python -m apache_beam.examples.inference.tensorrt_object_detection \
+  --input image_file_names.txt \
+  --output predictions.csv \
+  --model_state_dict_path ssd_mobilenet_v2_320x320_coco17_tpu-8.trt
+```
+This writes the output to the `predictions.csv` with contents like:
+```
+/absolute/path/to/000000000139.jpg;[{'ymin': '217.31875205039978'       
'xmin': '295.93122482299805'    'ymax': '315.90323209762573'    'xmax': 
'357.8959655761719'     'score': '0.72342616'   'class': 'chair'}       
{'ymin': '166.81788557767868'.....
+
+/absolute/path/to/000000289594.jpg;[{'ymin': '227.25109100341797'       
'xmin': '331.7402381300926'     'ymax': '476.88533782958984'    'xmax': 
'402.2928895354271'     'score': '0.77217317'   'class': 'person'}      
{'ymin': '231.8712615966797'    'xmin': '292.8590789437294'.....
+...
+```
+Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of dictionaries, each dictionary corresponds 
with a single detection. As a result providing: box coordinates (ymin, xmin, 
ymax, xmax); score and class on a per detection basis.

Review Comment:
   ```suggestion
   Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of dictionaries, where each dictionary 
corresponds with a single detection. A detection contains: box coordinates 
(ymin, xmin, ymax, xmax); score; and class.
   ```
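   For anyone consuming this file downstream, here is a rough sketch (assuming the format shown above; the line content is truncated sample data) of splitting one output line into the file name and per-detection fields. The detections are not valid JSON, so a regex over the `'key': 'value'` pairs is used:
   ```python
   import re

   # One line of predictions.csv in the format shown above (sample data).
   line = ("/absolute/path/to/000000000139.jpg;"
           "[{'ymin': '217.31875205039978' 'xmin': '295.93122482299805' "
           "'ymax': '315.90323209762573' 'xmax': '357.8959655761719' "
           "'score': '0.72342616' 'class': 'chair'}]")

   file_name, _, raw_detections = line.partition(';')
   # Pull out the quoted 'key': 'value' pairs and regroup them six at a
   # time (ymin, xmin, ymax, xmax, score, class) into one dict per detection.
   pairs = re.findall(r"'(\w+)':\s*'([^']+)'", raw_detections)
   detections = [dict(pairs[i:i + 6]) for i in range(0, len(pairs), 6)]
   print(file_name, detections[0]['class'], detections[0]['score'])
   ```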



##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -160,6 +175,52 @@ This writes the output to the `predictions.csv` with 
contents like:
 ```
 Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of predicted instances.
 
+---
+## Object Detection
+
+[`tensorrt_object_detection.py`](./tensorrt_object_detection.py) contains an implementation of a RunInference pipeline that performs object detection using [TensorFlow Object Detection's](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) SSD MobileNet v2 320x320 architecture.
+
+The pipeline reads the images, performs basic preprocessing, passes them to 
the TensorRT implementation of RunInference, and then writes the predictions to 
a text file.
+
+### Dataset and model for object detection
+
+You will need to create or download images, and place them into your `IMAGES_DIR` directory. A popular dataset for this task is the [COCO dataset](https://cocodataset.org/#home). The COCO validation dataset can be obtained [here](http://images.cocodataset.org/zips/val2017.zip).
+- **Required**: A path to a file called `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` on which you want to run object detection. Paths can be different types of URIs such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
+```
+/absolute/path/to/000000000139.jpg
+/absolute/path/to/000000289594.jpg
+```
+- **Required**: A path to a file called `TRT_ENGINE` that contains the pre-built TensorRT engine from the SSD MobileNet v2 320x320 model. You will need to [follow the instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download and convert this SSD model into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. Once you are done with the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input. In addition, make sure that the environment you use to create the TensorRT engine is the same environment you use to run TensorRT inference. This applies not only to the TensorRT version but also to the specific GPU used. Read more [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
+
+- **Required**: A path to a file called `OUTPUT`, to which the pipeline will 
write the predictions.
+- **Optional**: `IMAGES_DIR`, which is the path to the directory where images 
are stored. Not required if image names in the input file `IMAGE_FILE_NAMES` 
have absolute paths.

Review Comment:
   The style of instructions on how to run the examples has been modified since you first made this PR. So under the `Dataset and model for object detection` section, I've made some changes, including some rewording and typo fixes. PTAL, and if it looks OK, let's replace the old wording.

   Changes start below:
   
   ### Dataset and model for object detection
   To use this transform, you need a dataset and model for object detection.
   
   1. Create a directory named `IMAGES_DIR`. Create or download images and put 
them in this directory. The directory is not required if image names in the 
input file `IMAGE_FILE_NAMES` have absolute paths. One popular dataset is from 
[COCO](https://cocodataset.org/#home). The COCO validation dataset can be 
obtained [here](http://images.cocodataset.org/zips/val2017.zip). Follow their 
instructions to download the images.
   2. Create a file named `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` that you want to use to run object detection. The path to the file can be different types of URIs such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
   ```
   /absolute/path/to/000000000139.jpg
   /absolute/path/to/000000289594.jpg
   ```
   3. Follow the [instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download and convert an SSD MobileNet v2 320x320 model into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. As soon as you are done with the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input (see step 4).
       - Note: Make sure that the environment you use for TensorRT engine creation is the same environment you use to run TensorRT inference. This applies not only to the TensorRT version but also to the specific GPU used. Read more [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
   4. Note the path to the `TRT_ENGINE` file that contains the pre-built TensorRT engine from the SSD MobileNet v2 320x320 model.
   5. Note the path to the `OUTPUT` file. This file is used by the pipeline to 
write the predictions.
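   For reference, a minimal sketch of how these inputs fit together in a RunInference pipeline. The handler comes from `apache_beam.ml.inference.tensorrt_inference`; `read_and_preprocess` is a hypothetical stand-in for the preprocessing in `tensorrt_object_detection.py`, and the file names are placeholders:
   ```python
   import apache_beam as beam
   import numpy as np
   from apache_beam.ml.inference.base import RunInference
   from apache_beam.ml.inference.tensorrt_inference import TensorRTEngineHandlerNumPy
   from PIL import Image


   def read_and_preprocess(path):
     # Hypothetical helper: load the image and resize it to the model's
     # 320x320 input; the example's actual preprocessing may differ.
     image = Image.open(path).convert('RGB').resize((320, 320))
     return np.asarray(image, dtype=np.uint8)


   engine_handler = TensorRTEngineHandlerNumPy(
       min_batch_size=1,
       max_batch_size=1,  # Matches the batch size used when building the engine.
       engine_path='ssd_mobilenet_v2_320x320_coco17_tpu-8.trt')

   with beam.Pipeline() as pipeline:
     _ = (
         pipeline
         | 'ReadFileNames' >> beam.io.ReadFromText('image_file_names.txt')
         | 'Preprocess' >> beam.Map(read_and_preprocess)
         | 'RunInference' >> RunInference(engine_handler)
         | 'ToText' >> beam.Map(str)
         | 'Write' >> beam.io.WriteToText('predictions.csv'))
   ```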



##########
sdks/python/apache_beam/examples/inference/README.md:
##########
@@ -160,6 +175,52 @@ This writes the output to the `predictions.csv` with 
contents like:
 ```
 Each line has data separated by a semicolon ";". The first item is the file 
name. The second item is a list of predicted instances.
 
+---
+## Object Detection
+
+[`tensorrt_object_detection.py`](./tensorrt_object_detection.py) contains an implementation of a RunInference pipeline that performs object detection using [TensorFlow Object Detection's](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md) SSD MobileNet v2 320x320 architecture.
+
+The pipeline reads the images, performs basic preprocessing, passes them to 
the TensorRT implementation of RunInference, and then writes the predictions to 
a text file.
+
+### Dataset and model for object detection
+
+You will need to create or download images, and place them into your `IMAGES_DIR` directory. A popular dataset for this task is the [COCO dataset](https://cocodataset.org/#home). The COCO validation dataset can be obtained [here](http://images.cocodataset.org/zips/val2017.zip).
+- **Required**: A path to a file called `IMAGE_FILE_NAMES` that contains the absolute paths of each of the images in `IMAGES_DIR` on which you want to run object detection. Paths can be different types of URIs such as your local file system, an AWS S3 bucket, or a GCP Cloud Storage bucket. For example:
+```
+/absolute/path/to/000000000139.jpg
+/absolute/path/to/000000289594.jpg
+```
+- **Required**: A path to a file called `TRT_ENGINE` that contains the pre-built TensorRT engine from the SSD MobileNet v2 320x320 model. You will need to [follow the instructions](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api) on how to download and convert this SSD model into a TensorRT engine. At the [Create ONNX Graph](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#create-onnx-graph) step, keep the batch size at 1. Once you are done with the [Build TensorRT Engine](https://github.com/NVIDIA/TensorRT/tree/main/samples/python/tensorflow_object_detection_api#build-tensorrt-engine) step, you can use the resulting engine as the `TRT_ENGINE` input. In addition, make sure that the environment you use to create the TensorRT engine is the same environment you use to run TensorRT inference. This applies not only to the TensorRT version but also to the specific GPU used. Read more [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#compatibility-serialized-engines).
+
+- **Required**: A path to a file called `OUTPUT`, to which the pipeline will 
write the predictions.
+- **Optional**: `IMAGES_DIR`, which is the path to the directory where images 
are stored. Not required if image names in the input file `IMAGE_FILE_NAMES` 
have absolute paths.
+
+### Running `tensorrt_object_detection.py`
+
+To run the object detection pipeline locally, use the following command:
+```sh
+python -m apache_beam.examples.inference.tensorrt_object_detection \
+  --input IMAGE_FILE_NAMES \
+  --images_dir IMAGES_DIR \
+  --output OUTPUT \
+  --engine_path TRT_ENGINE
+```
+For example:
+```sh
+python -m apache_beam.examples.inference.tensorrt_object_detection \
+  --input image_file_names.txt \
+  --output predictions.csv \
+  --model_state_dict_path ssd_mobilenet_v2_320x320_coco17_tpu-8.trt

Review Comment:
   ```suggestion
     --engine_path ssd_mobilenet_v2_320x320_coco17_tpu-8.trt
   ```
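   One more thought on the engine/environment compatibility note above: a quick sanity check (a sketch, assuming the `tensorrt` Python bindings the example already requires and a placeholder engine file name) that the serialized engine deserializes on the machine that will run inference:
   ```python
   import tensorrt as trt

   # Try to deserialize the engine in the current environment; a None result
   # typically means a TensorRT-version or GPU mismatch with the environment
   # where the engine was built.
   logger = trt.Logger(trt.Logger.WARNING)
   runtime = trt.Runtime(logger)
   with open('ssd_mobilenet_v2_320x320_coco17_tpu-8.trt', 'rb') as f:
     engine = runtime.deserialize_cuda_engine(f.read())
   assert engine is not None, 'Rebuild the engine for this environment.'
   ```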



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
