riteshghorse commented on code in PR #28474:
URL: https://github.com/apache/beam/pull/28474#discussion_r1330634354


##########
sdks/python/apache_beam/ml/inference/huggingface_inference.py:
##########
@@ -593,9 +594,18 @@ def __init__(
     Args:
       task (str or enum.Enum): task supported by HuggingFace Pipelines.
         Accepts a string task or an enum.Enum from PipelineTask.
-      model : path to pretrained model on Hugging Face Models Hub to use custom
-        model for the chosen task. If the model already defines the task then
-        no need to specify the task parameter.
+      model (str): path to the pretrained *model-id* on Hugging Face Models Hub

Review Comment:
   Well, the goal of keeping `model` is that it can accept both a model-id and 
an actual loaded model. The loaded model worked fine for the AutoClasses I 
tried, but bigger models like llama2 threw an error (while passing the 
model-id worked). 
   
   That was the whole point of updating the doc. 
   
   But I guess this will be temporary until we figure out why it doesn't 
accept the loaded model as a parameter. 
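
   To illustrate the intent behind accepting both forms (a minimal hypothetical sketch, not Beam's actual implementation — `resolve_model` and `LoadedModel` are made-up names for this example), the handler can dispatch on the argument's type: a string is treated as a model-id to fetch from the Hub, anything else as an already-loaded model instance.

   ```python
   from typing import Union


   class LoadedModel:
       """Stand-in for an already-loaded model object (e.g. a transformers model)."""

       def __init__(self, name: str):
           self.name = name


   def resolve_model(model: Union[str, LoadedModel]) -> str:
       # A plain string is interpreted as a model-id on the Hugging Face
       # Models Hub; any other object is assumed to be a loaded model
       # and is used as-is.
       if isinstance(model, str):
           return f"load-from-hub:{model}"
       return f"use-as-is:{model.name}"
   ```

   This is why the docstring shouldn't restrict `model` to `(str)` only: both call patterns are intended to work, even though the loaded-model path currently fails for some large models.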



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]