yeandy opened a new issue, #22979:
URL: https://github.com/apache/beam/issues/22979

   ### What happened?
   
   Originally documented 
[here](https://beam.apache.org/documentation/sdks/python-machine-learning/#incorrect-inferences-in-the-predictionresult-object)
 in our RunInference Troubleshooting section:
   
   
   For some models, the `PredictionResult` output might not include the correct predictions in the `inference` field. This issue occurs when you use a model whose inference call returns a dictionary that maps keys to predictions and other metadata. An example return type is `Dict[str, Tensor]`.
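   
   For example, a model like the following returns one dictionary for the whole batch (a minimal sketch; the class, layer, and key names are illustrative, not from Beam or any real model):
   
   ```python
   import torch
   
   class DictOutputModel(torch.nn.Module):
       """Hypothetical model whose forward() returns Dict[str, Tensor]."""
   
       def __init__(self):
           super().__init__()
           self.linear = torch.nn.Linear(4, 2)
   
       def forward(self, batch: torch.Tensor):
           logits = self.linear(batch)
           # One dict for the whole batch: each value is a batched tensor,
           # not one prediction per input element.
           return {'logits': logits, 'probs': torch.softmax(logits, dim=-1)}
   ```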
   
   The RunInference API currently expects outputs to be an `Iterable[Any]`. Example return types are `Iterable[Tensor]` or `Iterable[Dict[str, Tensor]]`. When RunInference zips the inputs with the predictions, iterating over a dictionary yields its keys rather than the batch elements. As a result, each input is paired with a key name (a string) while the prediction tensors are discarded, as the toy example below illustrates.
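   
   A toy illustration of that zip behavior, with strings standing in for tensors:
   
   ```python
   batch = ['input_0', 'input_1']
   predictions = {'logits': '<batched logits tensor>', 'probs': '<batched probs tensor>'}
   
   # Iterating over a dict yields its keys, so each input is paired with a
   # key name instead of with its prediction.
   print(list(zip(batch, predictions)))
   # [('input_0', 'logits'), ('input_1', 'probs')]
   ```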
   
   To work with the current RunInference implementation, you can create a wrapper class that overrides the `model(input)` call. In PyTorch, for example, the wrapper would override the `forward()` function and return output in the appropriate format of `List[Dict[str, torch.Tensor]]`.
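   
   A minimal sketch of such a wrapper (the class name and the per-element splitting logic are assumptions for illustration, not Beam code):
   
   ```python
   from typing import Dict, List
   
   import torch
   
   class KeyedModelWrapper(torch.nn.Module):
       """Hypothetical wrapper: unbatches a Dict[str, Tensor] output into
       List[Dict[str, torch.Tensor]], one dict per input element, so that
       RunInference can zip predictions with inputs correctly."""
   
       def __init__(self, model: torch.nn.Module):
           super().__init__()
           self._model = model
   
       def forward(self, batch: torch.Tensor) -> List[Dict[str, torch.Tensor]]:
           predictions = self._model(batch)  # Dict[str, Tensor] over the batch
           # Slice each batched tensor so every input gets its own dict.
           return [
               {key: value[i] for key, value in predictions.items()}
               for i in range(batch.size(0))
           ]
   ```
   
   You would then point the model handler at the wrapper in place of the original model.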
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: sdk-py-core

