yeandy commented on code in PR #21868:
URL: https://github.com/apache/beam/pull/21868#discussion_r898253546
##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -190,6 +194,10 @@ def run_inference(
For the same key across all examples, this will stack all Tensors values
in a vectorized format to optimize the inference call.
+ Args:
+ batch: A sequence of Tensors.
+ model: A pytorch model.
Review Comment:
```suggestion
batch: A sequence of keyed Tensors. These Tensors should be batchable,
as this method will call `torch.stack()` and pass in batched Tensors with
dimensions (batch_size, n_features, etc.) into the model's forward() function.
model: A PyTorch model. Must implement forward(X), where the parameter X
is a dictionary of torch Tensors.
inference_args: Extra arguments required as inputs to the model's
forward() function. These should not be batchable, as this method will not
call `torch.stack()` on them and will instead pass them directly into the
model's forward() function.
```
@AnandInguva Can you review my wording here?
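For context, a rough sketch of the stacking behavior this docstring describes. The `ToyModel` class, the `features` key, and the `scale` argument are made up for illustration; this is not the actual Beam handler code:
```python
import torch

class ToyModel(torch.nn.Module):
  def __init__(self):
    super().__init__()
    self.linear = torch.nn.Linear(2, 1)

  def forward(self, X, scale=1.0):
    # X is a dictionary of batched torch Tensors; `scale` stands in for an
    # extra, non-batchable inference argument passed through unstacked.
    return self.linear(X['features']) * scale

# Each keyed example is a dict of per-example Tensors.
examples = [{'features': torch.tensor([1.0, 2.0])},
            {'features': torch.tensor([3.0, 4.0])}]

# For the same key across all examples, stack the Tensors into one batched
# Tensor with dimensions (batch_size, n_features).
batched = {
    key: torch.stack([example[key] for example in examples])
    for key in examples[0]
}

predictions = ToyModel()(batched, scale=2.0)  # single vectorized forward() call
```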
##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -103,6 +106,13 @@ def run_inference(
This method stacks the list of Tensors in a vectorized format to optimize
the inference call.
+
+ Args:
+ batch: A sequence of Tensors.
+ model: A pytorch model.
Review Comment:
And add `inference_args` like my other comment.
##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -91,24 +88,43 @@ def run_inference(
batch: Sequence[ExampleT],
model: ModelT,
inference_args: Optional[Dict[str, Any]] = None) -> Iterable[PredictionT]:
- """Runs inferences on a batch of examples and
- returns an Iterable of Predictions."""
+ """Runs inferences on a batch of examples.
+
+ Args:
+ batch: A sequence of examples or features.
+ model: The model used to make inferences.
+
Review Comment:
Add
```
inference_args: Extra arguments for models whose inference call
requires extra parameters.
```
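For reference, a minimal sketch of how `inference_args` could be threaded through a `run_inference` implementation. `ToyHandler` is hypothetical, not Beam's actual base class:
```python
from typing import Any, Dict, Iterable, Optional, Sequence

class ToyHandler:
  """Hypothetical handler; not Beam's actual ModelHandler."""
  def run_inference(
      self,
      batch: Sequence[Any],
      model: Any,
      inference_args: Optional[Dict[str, Any]] = None) -> Iterable[Any]:
    # Extra parameters, if any, are unpacked into the model's call.
    inference_args = inference_args or {}
    return [model(example, **inference_args) for example in batch]
```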
##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -103,6 +106,13 @@ def run_inference(
This method stacks the list of Tensors in a vectorized format to optimize
the inference call.
+
+ Args:
+ batch: A sequence of Tensors.
+ model: A pytorch model.
Review Comment:
```suggestion
batch: A sequence of Tensors. These Tensors should be batchable, as
this method will call `torch.stack()` and pass in batched Tensors with
dimensions (batch_size, n_features, etc.) into the model's forward() function.
model: A PyTorch model. Must implement forward(X), where the parameter X
is a torch Tensor.
```
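A minimal sketch of the unkeyed case this docstring describes, with an illustrative model and shapes (not the actual Beam code):
```python
import torch

model = torch.nn.Linear(2, 1)  # forward(X) takes a single torch Tensor

batch = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])]

# Stack the sequence of Tensors into one batched Tensor with dimensions
# (batch_size, n_features) before a single vectorized forward() call.
stacked = torch.stack(batch)  # shape: (2, 2)
predictions = model(stacked)  # shape: (2, 1)
```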
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]