[
https://issues.apache.org/jira/browse/BEAM-14362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Andy Ye updated BEAM-14362:
---------------------------
Description:
RunInference currently returns both the example/input and prediction.
{code}
PredictionResult:
example: torch(...)
inference: torch(...)
{code}
Users may want the ability to return only the inference, to minimize potential
memory/serialization issues later in the pipeline. This could be controlled by
a flag, in which case the return value might look like
{code}
PredictionResult:
example: None
inference: torch(...)
{code}
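A minimal sketch of what this could look like in the Python SDK, using a simplified stand-in for {{PredictionResult}} (the real one lives in apache_beam.ml.inference.base). The {{drop_example}} helper and the idea of applying it as a post-processing step are illustrative assumptions, not Beam's actual API; the eventual flag name and mechanism would be decided in the implementation.
{code:python}
# Sketch only: a simplified PredictionResult and a helper that drops
# the example field, keeping just the inference. The real flag would
# likely do this inside RunInference rather than as a separate step.
from typing import Any, NamedTuple, Optional


class PredictionResult(NamedTuple):
    example: Optional[Any]
    inference: Any


def drop_example(result: PredictionResult) -> PredictionResult:
    # Replace the example with None so downstream stages only carry
    # (and serialize) the inference.
    return result._replace(example=None)


r = PredictionResult(example=[1.0, 2.0], inference=0.9)
print(drop_example(r))  # PredictionResult(example=None, inference=0.9)
{code}
In a pipeline, the same effect could be had today with a {{beam.Map(drop_example)}} after RunInference; the flag would simply build this behavior in.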
was:
RunInference currently returns both the example/input and prediction.
{code:java}
PredictionResult:
example: torch(...)
inference: torch(...)
{code}
Users may want the ability to only return the inference to minimize potential
memory/serialization issues later in the pipeline. They can do this with a
flag, and the return value may look like
{code:java}
PredictionResult:
example: None
inference: torch(...)
{code}
> Add flag to drop example from PredictionResult
> ----------------------------------------------
>
> Key: BEAM-14362
> URL: https://issues.apache.org/jira/browse/BEAM-14362
> Project: Beam
> Issue Type: Sub-task
> Components: sdk-py-core
> Reporter: Andy Ye
> Priority: P2
> Labels: run-inference
>
> RunInference currently returns both the example/input and prediction.
> {code:java}
> PredictionResult:
> example: torch(...)
> inference: torch(...)
> {code}
> Users may want the ability to return only the inference, to minimize potential
> memory/serialization issues later in the pipeline. This could be controlled by
> a flag, in which case the return value might look like
> {code:java}
> PredictionResult:
> example: None
> inference: torch(...)
> {code}
>
--
This message was sent by Atlassian Jira
(v8.20.7#820007)