damccorm commented on code in PR #25607:
URL: https://github.com/apache/beam/pull/25607#discussion_r1115671215


##########
website/www/site/content/en/documentation/ml/overview.md:
##########
@@ -78,20 +80,28 @@ The RunInference API doesn't currently support making remote inference calls usi
 
 * Consider monitoring and measuring the performance of a pipeline when deploying, because monitoring can provide insight into the status and health of the application.
 
+## Model validation
+
+Model validation allows you to benchmark your model’s performance against an unseen dataset. You can extract chosen metrics, create visualizations, log metadata, and compare the performance of different models with the end goal of validating whether your model is ready to deploy. Beam provides support for running model evaluation on a TensorFlow model directly inside your pipeline.

Review Comment:
   ```suggestion
   Model validation allows you to benchmark your model’s performance against a previously unseen dataset. You can extract chosen metrics, create visualizations, log metadata, and compare the performance of different models with the end goal of validating whether your model is ready to deploy. Beam provides support for running model evaluation on a TensorFlow model directly inside your pipeline.
   ```
   
   Small wording nit
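   As context for the paragraph under review: it describes model validation in general terms (benchmark against a held-out dataset, extract chosen metrics, compare models). A framework-agnostic sketch of that idea might look like the following — all names here are hypothetical illustrations, not the Beam or TensorFlow Model Analysis API:

   ```python
   # Hypothetical sketch of the model-validation workflow described above:
   # benchmark candidate models against a previously unseen dataset,
   # extract a chosen metric, and compare models before deploying.

   def accuracy(model, dataset):
       """Fraction of (features, label) examples the model labels correctly."""
       correct = sum(1 for features, label in dataset if model(features) == label)
       return correct / len(dataset)

   # Two toy "models": classify a number as 1 if positive, else 0.
   model_a = lambda x: 1 if x > 0 else 0
   model_b = lambda x: 1  # degenerate baseline: always predicts 1

   # Previously unseen evaluation data: (features, label) pairs.
   holdout = [(-2, 0), (-1, 0), (1, 1), (3, 1)]

   metrics = {name: accuracy(m, holdout)
              for name, m in [("model_a", model_a), ("model_b", model_b)]}
   best = max(metrics, key=metrics.get)
   print(metrics)  # {'model_a': 1.0, 'model_b': 0.5}
   print(best)     # model_a
   ```

   In a real Beam pipeline this comparison would run on full evaluation datasets inside the pipeline rather than in-memory as shown here.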



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
