riteshghorse commented on code in PR #29223:
URL: https://github.com/apache/beam/pull/29223#discussion_r1379069274


##########
sdks/python/apache_beam/ml/inference/huggingface_inference.py:
##########
@@ -638,8 +640,36 @@ def __init__(
     if max_batch_size is not None:
       self._batching_kwargs['max_batch_size'] = max_batch_size
     self._large_model = large_model
+
+    # Check if the device is specified twice. If so, the device parameter
+    # of the model handler is overridden.
+    self._deduplicate_device_value(device)
     _validate_constructor_args_hf_pipeline(self._task, self._model)
 
+  def _deduplicate_device_value(self, device: str):

Review Comment:
   Sounds good! Not release blocking. The previous version already has a note about how to use GPU:
https://github.com/apache/beam/blob/27ccdc9acfc743ed89f38fd3db277cd9fe3a96f0/sdks/python/apache_beam/ml/inference/huggingface_inference.py#L586
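
   For illustration only, here is a minimal sketch of the kind of device deduplication check being discussed. The attribute names `_load_pipeline_args` and `_device` are assumptions made for this example, not necessarily what the actual Beam implementation uses:

   ```python
   import logging


   class _DeviceDedupSketch:
     """Illustrative sketch of the device check discussed above.

     Assumes (hypothetically) that pipeline kwargs live in
     self._load_pipeline_args and the handler-level device in self._device.
     """

     def __init__(self, device=None, load_pipeline_args=None):
       self._load_pipeline_args = dict(load_pipeline_args or {})
       self._device = device
       self._deduplicate_device_value(device)

     def _deduplicate_device_value(self, device):
       pipeline_device = self._load_pipeline_args.get('device')
       if device is not None and pipeline_device is not None:
         # Device specified twice: keep the load_pipeline_args value and
         # override the model handler argument, as the diff comment describes.
         logging.warning(
             'device was passed both to the model handler (%s) and in '
             'load_pipeline_args (%s); the load_pipeline_args value is used.',
             device, pipeline_device)
         self._device = pipeline_device
       elif device is not None:
         # Only the model handler argument was given; forward it to the pipeline.
         self._load_pipeline_args['device'] = device
   ```

   With this sketch, `_DeviceDedupSketch(device='GPU', load_pipeline_args={'device': 0})` would log a warning and keep the pipeline-level value, matching the override behavior described in the diff comment.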


