damccorm commented on code in PR #37428:
URL: https://github.com/apache/beam/pull/37428#discussion_r2737149813


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -301,6 +301,22 @@ def with_postprocess_fn(
     inference result in order from first applied to last applied."""
     return _PostProcessingModelHandler(self, fn)
 
+  def with_element_size_fn(

Review Comment:
  I think it is a little odd to have the element_size batching arg as a `with` 
function, while our other batching args exist as args to the model handlers 
themselves (e.g. 
https://github.com/apache/beam/blob/f7784d8b187d9807f5263c726584b95a354ade8a/sdks/python/apache_beam/ml/inference/pytorch_inference.py#L194).
  
  I think we would probably be better off adding this as a direct argument to 
our built-in model handlers (which also creates an easy pattern for others to 
follow when building their own model handlers). That way we're consistent, and 
there is just one way to add batching args. Thoughts?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.