TheNeuralBit commented on code in PR #17470:
URL: https://github.com/apache/beam/pull/17470#discussion_r877280669
##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -96,7 +107,9 @@ def expand(self, pcoll: beam.PCollection) -> beam.PCollection:
pcoll
# TODO(BEAM-14044): Hook into the batching DoFn APIs.
| beam.BatchElements()
- | beam.ParDo(_RunInferenceDoFn(self._model_loader, self._clock)))
+ | beam.ParDo(
+ _RunInferenceDoFn(
+ self._model_loader, self._prediction_params, self._clock)))
Review Comment:
The prediction params here are just a constant; couldn't we include them
in PyTorch's model object instead of modifying the base RunInference?
I think we'd only need to modify the base _RunInferenceDoFn if we want to
add support for passing these in as side inputs.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]