rezarokni commented on code in PR #29233:
URL: https://github.com/apache/beam/pull/29233#discussion_r1378259253


##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -47,9 +47,16 @@ For more infomation about machine learning with Apache Beam, 
see:
 * [About Beam ML](/documentation/ml/about-ml)
 * [RunInference 
notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml)
 
+## Support and limitations
+
+- The RunInference API is supported in Apache Beam 2.40.0 and later versions.
+- PyTorch and Scikit-learn frameworks are supported. Tensorflow models are 
supported through tfx-bsl.

Review Comment:
   This is outdated; we no longer need tfx-bsl. Let's just change this to "Tensorflow, Pytorch, and other frameworks are supported"; for a full list, look 
...



##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -47,9 +47,16 @@ For more infomation about machine learning with Apache Beam, 
see:
 * [About Beam ML](/documentation/ml/about-ml)
 * [RunInference 
notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml)
 
+## Support and limitations
+
+- The RunInference API is supported in Apache Beam 2.40.0 and later versions.
+- PyTorch and Scikit-learn frameworks are supported. Tensorflow models are 
supported through tfx-bsl.
+- The RunInference API supports batch and streaming pipelines.
+- The RunInference API supports local and remote inference.

Review Comment:
   local to the runner worker 



##########
website/www/site/content/en/documentation/sdks/python-machine-learning.md:
##########
@@ -47,9 +47,16 @@ For more infomation about machine learning with Apache Beam, 
see:
 * [About Beam ML](/documentation/ml/about-ml)
 * [RunInference 
notebooks](https://github.com/apache/beam/tree/master/examples/notebooks/beam-ml)
 
+## Support and limitations
+
+- The RunInference API is supported in Apache Beam 2.40.0 and later versions.
+- PyTorch and Scikit-learn frameworks are supported. Tensorflow models are 
supported through tfx-bsl.
+- The RunInference API supports batch and streaming pipelines.
+- The RunInference API supports local and remote inference.
+
 ## Why use the RunInference API?
 
-RunInference takes advantage of existing Apache Beam concepts, such as the 
`BatchElements` transform and the `Shared` class, to enable you to use models 
in your pipelines to create transforms optimized for machine learning 
inferences. The ability to create arbitrarily complex workflow graphs also 
allows you to build multi-model pipelines.
+RunInference takes advantage of existing Apache Beam concepts, such as the 
`BatchElements` transform and the `Shared` class, to enable you to use models 
in your pipelines optimized for machine learning inferences. The ability to 
create arbitrarily complex workflow graphs also allows you to build multi-model 
pipelines.

Review Comment:
   @damccorm Can we add the full list here, since we have done much more since the 
early days :-) The model manager would be nice to discuss. 
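
The paragraph under review credits RunInference's design to the `BatchElements` transform and the `Shared` class. A minimal stdlib sketch of those two ideas, batching inputs before each model call plus a once-per-process model load, may help; this is illustrative only, not Beam's actual API, and the names `load_model`, `batch_elements`, and `run_inference` are made up for the sketch:

```python
# Illustrative sketch (not the Beam API) of the two ideas RunInference builds on:
# 1) group elements into batches before calling the model (what BatchElements does);
# 2) load the model once and reuse it across calls (what the Shared class enables).
# The "model" here is a stand-in that simply doubles its inputs.

from functools import lru_cache
from typing import Iterable, Iterator, List

@lru_cache(maxsize=1)
def load_model():
    """Stand-in for an expensive model load; cached so it runs once per process."""
    return lambda batch: [x * 2 for x in batch]

def batch_elements(elements: Iterable[int], batch_size: int) -> Iterator[List[int]]:
    """Group a stream of elements into batches of at most batch_size."""
    batch: List[int] = []
    for el in elements:
        batch.append(el)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def run_inference(elements: Iterable[int], batch_size: int = 4) -> List[int]:
    """Batch the input and run the shared model over each batch."""
    model = load_model()  # cached: the model is built only on the first call
    results: List[int] = []
    for batch in batch_elements(elements, batch_size):
        results.extend(model(batch))
    return results
```

In Beam itself, batching and model sharing are handled for you by the `RunInference` transform together with a model handler; this sketch only shows why those pieces exist.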



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
