tvalentyn commented on code in PR #31052:
URL: https://github.com/apache/beam/pull/31052#discussion_r1575502012


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,6 +235,9 @@ def __init__(
         memory pressure if you load multiple copies. Given a model that
         consumes N memory and a machine with W cores and M memory, you should
         set this to True if N*W > M.
+      model_copies: The exact number of models that you would like loaded
+        onto your machine. This can be useful if you exactly know your CPU or

Review Comment:
   ah, my concern was not an incorrect configuration but the cognitive burden for 
users: would they wonder whether they should set only one param, or both, in 
their use case, when in the end it doesn't matter? But now it also seems that 
`large_model` becomes redundant, as it is equivalent to passing 
`model_copies = 1`, right?
   
   Possibly except for the fact that using `model_copies` is currently disallowed 
with KeyedMH, while `large_model` might still allow that. 
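   To make the redundancy concrete, here is a hypothetical sketch (not the 
actual Beam implementation; `resolve_model_copies` is an invented helper) of how 
the two parameters could be reconciled if `large_model=True` were treated as 
shorthand for `model_copies=1`:

   ```python
   def resolve_model_copies(large_model: bool, model_copies=None):
       # Hypothetical reconciliation logic, assuming explicit
       # model_copies takes precedence over the boolean flag.
       if model_copies is not None:
           return model_copies
       if large_model:
           # large_model=True means one shared copy per machine,
           # which is the same as requesting exactly one copy.
           return 1
       # Default behavior: let the runner decide (e.g. one copy
       # per worker process).
       return None

   # Under this sketch, the two spellings collapse to the same thing:
   assert resolve_model_copies(large_model=True) == \
       resolve_model_copies(large_model=False, model_copies=1)
   ```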



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
