yeandy commented on code in PR #21868:
URL: https://github.com/apache/beam/pull/21868#discussion_r898253546
##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -190,6 +194,10 @@ def run_inference(
For the same key across all examples, this will stack all Tensors values
in a vectorized format to optimize the inference call.
+ Args:
+ batch: A sequence of Tensors.
+ model: A pytorch model.
Review Comment:
```suggestion
batch: A sequence of keyed Tensors. These Tensors should be batchable,
as this method will call `torch.stack()` and pass in batched Tensors with
dimensions (batch_size, n_features, etc.) into the model's forward() function.
model: A PyTorch model.
inference_args: Non-batchable arguments required as inputs to the
model's forward() function. Unlike Tensors in `batch`, these parameters will
not be dynamically batched.
```
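To illustrate the batching behavior the suggested wording describes, here is a minimal sketch of stacking same-keyed arrays into one batched array per key before the model call. NumPy's `stack` stands in for `torch.stack`; the `stack_keyed_batch` helper and the example shapes are hypothetical and not part of the Beam API.

```python
import numpy as np

def stack_keyed_batch(batch):
    """Hypothetical helper: stack same-keyed arrays into one batched array.

    `batch` is a sequence of dicts mapping key -> array, where arrays for
    the same key share a shape (i.e. they are batchable). This mirrors the
    idea of calling `torch.stack()` to build Tensors with dimensions
    (batch_size, n_features, ...) before the model's forward() call.
    """
    keys = batch[0].keys()
    return {k: np.stack([example[k] for example in batch]) for k in keys}

# Three examples, each with two keyed arrays of shape (n_features,) = (4,).
examples = [{"input_ids": np.ones(4), "mask": np.zeros(4)} for _ in range(3)]
batched = stack_keyed_batch(examples)
# Each value now has dimensions (batch_size, n_features) = (3, 4).
```

Non-batchable `inference_args`, by contrast, would be passed through to forward() unchanged rather than stacked this way.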
@AnandInguva Can you review my wording here?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]