damccorm commented on code in PR #31052:
URL: https://github.com/apache/beam/pull/31052#discussion_r1576181499


##########
sdks/python/apache_beam/ml/inference/pytorch_inference.py:
##########
@@ -234,6 +235,9 @@ def __init__(
         memory pressure if you load multiple copies. Given a model that
         consumes N memory and a machine with W cores and M memory, you should
         set this to True if N*W > M.
+      model_copies: The exact number of models that you would like loaded
+        onto your machine. This can be useful if you exactly know your CPU or

Review Comment:
   > but now it also seems that large_model becomes redundant as it is 
equivalent to passing model_copies = 1, right?
   
   That's right, though I think long term I would like us to do smart 
things here (e.g. `large_model` becomes "pack as many models as you can 
fit"). There's some discussion of this general idea in the design doc.


