damccorm opened a new issue, #21444:
URL: https://github.com/apache/beam/issues/21444

   RunInference currently returns both the example (input) and the prediction.
   ```
   PredictionResult:
     example: torch(...)
     inference: torch(...)
   ```
   
   Users may want the ability to return only the inference, to minimize 
potential memory/serialization issues later in the pipeline. This could be 
controlled with a flag, and the return value might look like
   ```
   PredictionResult:
     example: None
     inference: torch(...)
   ```
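
   The requested behavior can be sketched in plain Python. Note this is an 
illustration only: `strip_example` and the `return_example` flag mentioned in 
the comment are hypothetical names, and the `PredictionResult` class below is a 
stand-in mirroring the `example`/`inference` fields shown above, not Beam's 
actual class.
   ```python
   from typing import Any, NamedTuple, Optional

   # Stand-in for Beam's PredictionResult, for illustration only.
   class PredictionResult(NamedTuple):
       example: Optional[Any]
       inference: Any

   def strip_example(result: PredictionResult) -> PredictionResult:
       """Drop the input example, keeping only the inference.

       This is what a hypothetical return_example=False flag could
       apply to each result, cutting memory/serialization cost for
       downstream pipeline stages.
       """
       return result._replace(example=None)

   full = PredictionResult(example=[1.0, 2.0], inference=[0.9])
   slim = strip_example(full)
   # slim.example is None; slim.inference is unchanged
   ```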
   
    
   
   Imported from Jira 
[BEAM-14362](https://issues.apache.org/jira/browse/BEAM-14362). Original Jira 
may contain additional context.
   Reported by: yeandy.
   Subtask of issue #21435
