damccorm commented on code in PR #23215:
URL: https://github.com/apache/beam/pull/23215#discussion_r970803854


##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -109,6 +109,28 @@ with pipeline as p:
 
 Where `model_handler_A` and `model_handler_B` are the model handler setup code.
 
+#### Use Resource Hints for Different Model Requirements
+
+When using multiple models in a single pipeline, different models may have different memory or worker SKU requirements.
+Resource hints allow you to provide information to a runner about the compute resource requirements for each step in your
+pipeline.
+
+For example, the following snippet extends the previous ensemble pattern with hints for each RunInference call to specify RAM and hardware accelerator requirements:
+
+```
+with pipeline as p:
+   data = p | 'Read' >> beam.ReadFromSource('a_source')
+   model_a_predictions = data | RunInference(<model_handler_A>).with_resource_hints(min_ram="20GB")
+   model_b_predictions = (model_a_predictions
+      | beam.Map(some_post_processing)
+      | RunInference(<model_handler_B>).with_resource_hints(
+         min_ram="4GB",
+         accelerator="type:nvidia-tesla-k80;count:1;install-nvidia-driver"))
+```
+
+For more information on resource hints, see [Resource hints](../runtime/resource-hints.md).
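
Outside the diff itself, a minimal runnable sketch of the pattern added above might look like the following; the scikit-learn handlers, model URIs, and in-memory inputs are illustrative assumptions and are not part of the PR:

```
# Sketch only: handler choice, model URIs, and inputs are assumptions for illustration.
import numpy as np

import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Hypothetical pickled scikit-learn models; any ModelHandler works the same way.
model_handler_a = SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model_a.pkl')
model_handler_b = SklearnModelHandlerNumpy(model_uri='gs://my-bucket/model_b.pkl')

with beam.Pipeline() as p:
    data = p | 'Create' >> beam.Create([np.array([1.0, 2.0]), np.array([3.0, 4.0])])

    # Model A runs on a high-memory worker.
    predictions_a = data | 'RunInferenceA' >> RunInference(
        model_handler_a).with_resource_hints(min_ram='20GB')

    # Model B needs less RAM but asks for a GPU-equipped worker.
    predictions_b = (
        predictions_a
        | 'ExtractExamples' >> beam.Map(lambda result: result.example)
        | 'RunInferenceB' >> RunInference(model_handler_b).with_resource_hints(
            min_ram='4GB',
            accelerator='type:nvidia-tesla-k80;count:1;install-nvidia-driver'))
```

Resource hints set this way apply per transform, so a runner that supports them (for example Dataflow) can schedule each RunInference step onto workers that match its hints.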

Review Comment:
   Oh good catch - I had thought the website build process would convert these 
relative paths to links, but I guess I got my wires crossed with different 
tooling
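
If the fix follows the site's usual pattern of linking to rendered page paths rather than source `.md` files, the line might end up as something like the following (the exact path is an assumption, not taken from the PR):

```
For more information on resource hints, see [Resource hints](/documentation/runtime/resource-hints/).
```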



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
