damccorm commented on code in PR #29233:
URL: https://github.com/apache/beam/pull/29233#discussion_r1378962385


##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -47,9 +47,16 @@ For more information about machine learning with Apache Beam, 
see:
 * [About Beam ML](/documentation/ml/about-ml)
 * [RunInference 
notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml)
 
+## Support and limitations
+
+- The RunInference API is supported in Apache Beam 2.40.0 and later versions.
+- PyTorch and Scikit-learn frameworks are supported. Tensorflow models are 
supported through tfx-bsl.
+- The RunInference API supports batch and streaming pipelines.
+- The RunInference API supports local and remote inference.
+
 ## Why use the RunInference API?
 
-RunInference takes advantage of existing Apache Beam concepts, such as the 
`BatchElements` transform and the `Shared` class, to enable you to use models 
in your pipelines to create transforms optimized for machine learning 
inferences. The ability to create arbitrarily complex workflow graphs also 
allows you to build multi-model pipelines.
+RunInference takes advantage of existing Apache Beam concepts, such as the 
`BatchElements` transform and the `Shared` class, to enable you to use models 
in your pipelines optimized for machine learning inferences. The ability to 
create arbitrarily complex workflow graphs also allows you to build multi-model 
pipelines.

Review Comment:
   +1 on redirecting to that page. We'd probably need to move most of the 
content from this page over there (and deduplicate it). We need to make sure 
the link is still valid, but we can do that with 
[aliases](https://github.com/apache/beam/blob/3d3a7afe0df985f3b1ff7632edbf435bcf86c80c/website/www/site/content/en/community/logos.md?plain=1#L4).
   
   Regardless, I think we should keep a section like this one near the top with 
high level feature highlights. On top of adding new things, I think it would be 
good to focus more on the value that this delivers for users (as opposed to 
leading with "RunInference takes advantage of existing Apache Beam concepts"). 
So maybe something like:
   
   ```
   RunInference allows you to efficiently use models in your pipelines, optimized for machine learning inference. It includes the following out-of-the-box features:
   
   - Dynamic batching of inputs based on pipeline throughput to efficiently feed your model.
   - A central model manager that determines the optimal number of models to load in order to balance memory and throughput usage.
   - Automatic model refresh to ensure that your pipeline always uses the most recently deployed version of your model.
   - Support for many frameworks and model hubs, including TensorFlow, PyTorch, scikit-learn, XGBoost, Hugging Face, TensorFlow Hub, Vertex AI, TensorRT, and ONNX.
   - Support for arbitrary frameworks through a custom model handler.
   - Support for multi-model pipelines.
   ```
   
   with links to the sections where we talk about each of these.
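
As a side note on the "dynamic batching" bullet, here is a minimal sketch of the idea. This is not Beam's actual `BatchElements` implementation (the function name and the doubling ramp-up policy are illustrative assumptions; the real transform also adapts to measured throughput):

```python
# Illustrative sketch of the dynamic-batching idea: start with small
# batches and ramp the batch size up toward a cap, so the model is fed
# efficiently once the pipeline is warmed up. NOT Beam's real code.
def dynamic_batches(elements, min_batch_size=1, max_batch_size=4):
    batches = []
    size = min_batch_size
    i = 0
    while i < len(elements):
        batches.append(elements[i:i + size])
        i += size
        size = min(size * 2, max_batch_size)  # double until the cap
    return batches

print(dynamic_batches(list(range(10))))
# -> [[0], [1, 2], [3, 4, 5, 6], [7, 8, 9]]
```

In a real pipeline, users get this tuning for free from `BatchElements` inside RunInference rather than writing it themselves.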



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
