[ 
https://issues.apache.org/jira/browse/BEAM-13985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anand Inguva updated BEAM-13985:
--------------------------------
    Description: RunInference benchmarks will evaluate the performance of 
pipelines that represent common use cases of Beam + Dataflow with PyTorch, 
scikit-learn, and possibly TFX. These benchmarks will serve as integration 
tests that exercise several software components across Beam, PyTorch, 
scikit-learn, and TensorFlow Extended.  (was: The goal of the end-to-end 
integration test is to check that the code changes in RunInference work as 
intended.

Make calls to the RunInference classes for TFX, PyTorch, and scikit-learn.
 * For TFX, its proto format must be used

Process
 * Read data from a GCS bucket
 * Use a pre-trained model
 * Run inference to produce output predictions
 * Assert that the output predictions match the expected values

Add task for using GPU container images)
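The read → predict → assert process in the earlier description can be sketched in plain Python. This is a minimal sketch of the test shape only: every name below (read_examples, pretrained_model, run_inference) is a hypothetical stand-in, not Beam API; a real test would read from GCS, load an actual PyTorch/sklearn/TFX model, and apply Beam's RunInference transform to a PCollection.

```python
def read_examples():
    # Stand-in for reading input data from a GCS bucket.
    return [1.0, 2.0, 3.0]

def pretrained_model(x):
    # Stand-in for a pre-trained model's prediction function.
    return 2.0 * x

def run_inference(examples, model):
    # Stand-in for applying Beam's RunInference transform to a PCollection.
    return [model(x) for x in examples]

# The assertion step: output predictions must match the expected values.
expected_predictions = [2.0, 4.0, 6.0]
predictions = run_inference(read_examples(), pretrained_model)
assert predictions == expected_predictions
```

The same assert-on-pipeline-output pattern is what the end-to-end test performs, just with real models and a real Beam pipeline in place of the stubs.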

> Implement end-to-end tests for RunInference classes
> ---------------------------------------------------
>
>                 Key: BEAM-13985
>                 URL: https://issues.apache.org/jira/browse/BEAM-13985
>             Project: Beam
>          Issue Type: Sub-task
>          Components: sdk-py-core
>            Reporter: Andy Ye
>            Assignee: Anand Inguva
>            Priority: P2
>              Labels: run-inference
>
> RunInference benchmarks will evaluate the performance of pipelines that 
> represent common use cases of Beam + Dataflow with PyTorch, scikit-learn, 
> and possibly TFX. These benchmarks will serve as integration tests that 
> exercise several software components across Beam, PyTorch, scikit-learn, 
> and TensorFlow Extended.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
