rszper commented on code in PR #24125:
URL: https://github.com/apache/beam/pull/24125#discussion_r1022130309


##########
examples/notebooks/beam-ml/custom_remote_inference.ipynb:
##########
@@ -34,13 +34,17 @@
         "id": "0UGzzndTBPWQ"
       },
       "source": [
-        "# Remote inference in Beam\n",
+        "# Remote inference in Apache Beam\n",
         "\n",
-        "The prefered way of running inference in Beam is by using the [RunInference API](https://beam.apache.org/documentation/sdks/python-machine-learning/). The RunInference API enables you to run your models as part of your pipeline in a way that is optimized for machine learning inference. It supports features such as batching, so that you do not need to take care of it yourself. For more info on the RunInference API you can check out the [RunInference notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb), which demonstrates how you can implement model inference in pytorch, scikit-learn and tensorflow.\n",
+        "The preferred way to run inference in Apache Beam is by using the [RunInference API](https://beam.apache.org/documentation/sdks/python-machine-learning/). \n",
+        "The RunInference API enables you to run your models as part of your pipeline in a way that is optimized for machine learning inference. \n",
+        "To reduce the number of steps that you need to take, RunInference supports features like batching. For more information about the RunInference API, review the [RunInference notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb), \n",

Review Comment:
   I also find this notebook confusing. I agree about the link. Will fix.
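As an aside on the batching behavior the new text describes: RunInference groups elements into batches before calling the model so the user does not have to. A minimal plain-Python sketch of that batching idea (no Beam dependency; `batch_elements` and `run_remote_inference` are hypothetical stand-ins, not Beam APIs):

```python
from typing import Iterable, Iterator, List

def batch_elements(elements: Iterable[int], batch_size: int) -> Iterator[List[int]]:
    """Group elements into fixed-size batches, similar in spirit to what
    RunInference does internally before each model call."""
    batch: List[int] = []
    for element in elements:
        batch.append(element)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # emit the final, possibly smaller, batch

def run_remote_inference(batch: List[int]) -> List[int]:
    """Hypothetical stand-in for a remote model endpoint; doubles each input."""
    return [x * 2 for x in batch]

# One "remote" call per batch instead of one per element.
predictions = [p for b in batch_elements(range(5), 2) for p in run_remote_inference(b)]
print(predictions)  # [0, 2, 4, 6, 8]
```

Batching like this is what makes remote inference practical: the per-request overhead of the endpoint is amortized across the whole batch rather than paid per element.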



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
