damccorm commented on code in PR #23887:
URL: https://github.com/apache/beam/pull/23887#discussion_r1010760917


##########
website/www/site/content/en/documentation/ml/overview.md:
##########
@@ -54,7 +54,7 @@ The recommended way to implement inference is by using the [RunInference API](ht
 
 You can easily integrate your model in your pipeline by using the corresponding model handlers. A `ModelHandler` is an object that wraps the underlying model and allows you to configure its parameters. Model handlers are available for PyTorch, Scikit-learn and TensorFlow. Examples of how to use RunInference for PyTorch, Scikit-learn and TensorFlow are shown in this [notebook](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_pytorch_tensorflow_sklearn.ipynb).
 
-GPUs are optimized for training artificial intelligence and deep learning models as they can process multiple computations simultaneously. RunInference also allows you to use GPUs for significant inference speedup. An example of how to use RunInference with GPUs is demonstrated[here](/documentation/ml/runinference-metrics).
+GPUs are optimized for training artificial intelligence and deep learning models as they can process multiple computations simultaneously. RunInference also allows you to use GPUs for significant inference speedup. RunInference also allows you to use GPUs for significant inference speedup. An example of how to use RunInference with GPUs is demonstrated[here](/documentation/ml/runinference-metrics).

Review Comment:
   From your last commit: `RunInference also allows you to use GPUs for significant inference speedup. RunInference also allows you to use GPUs for significant inference speedup. An example of how to use RunInference with GPUs is demonstrated[here](/documentation/ml/runinference-metrics).` The same sentence appears twice, and the space suggested in my last comment is still missing.
   
   This should be `RunInference also allows you to use GPUs for significant inference speedup. An example of how to use RunInference with GPUs is demonstrated [here](/documentation/ml/runinference-metrics).`
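   
   As an aside, for readers of this overview page, the `ModelHandler` wiring described in the paragraph above looks roughly like the sketch below. This is a minimal, hypothetical example rather than the docs' own: the `gs://` model path and the input array are made up, and it assumes a pickled scikit-learn model.
   
   ```python
   import numpy as np
   
   import apache_beam as beam
   from apache_beam.ml.inference.base import RunInference
   from apache_beam.ml.inference.sklearn_inference import (
       ModelFileType,
       SklearnModelHandlerNumpy,
   )
   
   # Hypothetical model URI; the handler wraps the underlying model and lets
   # you configure how it is loaded.
   model_handler = SklearnModelHandlerNumpy(
       model_uri='gs://my-bucket/sklearn_model.pkl',
       model_file_type=ModelFileType.PICKLE,
   )
   
   with beam.Pipeline() as pipeline:
       _ = (
           pipeline
           # Made-up input; SklearnModelHandlerNumpy expects numpy arrays.
           | 'CreateExamples' >> beam.Create([np.array([0.0, 1.0])])
           | 'RunInference' >> RunInference(model_handler)
           | 'PrintPredictions' >> beam.Map(print)
       )
   ```
   
   For the GPU case mentioned in the changed line, the PyTorch handlers accept a `device` argument (for example, `device='GPU'` on `PytorchModelHandlerTensor`).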


