damccorm commented on code in PR #31052:
URL: https://github.com/apache/beam/pull/31052#discussion_r1576843804


##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -1434,19 +1519,27 @@ def load():
     if isinstance(side_input_model_path, str) and side_input_model_path != '':
       model_tag = side_input_model_path
     if self._model_handler.share_model_across_processes():
-      model = multi_process_shared.MultiProcessShared(
-          load, tag=model_tag, always_proxy=True).acquire()
+      models = []
+      for i in range(self._model_handler.model_copies()):
+        models.append(
+            multi_process_shared.MultiProcessShared(

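(The hunk above is cut off by the review tool. A plausible reading of how the
new loop continues; the per-copy tag suffix `f'{model_tag}{i}'` is an
assumption, not something visible in the excerpt:)

```python
models = []
for i in range(self._model_handler.model_copies()):
  models.append(
      multi_process_shared.MultiProcessShared(
          load, tag=f'{model_tag}{i}',
          always_proxy=True).acquire())
```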
Review Comment:
   > do we need to explicitly release the multi-process shared handle for it
to be freed up? Guessing we currently don't do that, as per the TODO on
line 1516?
   
   We don't do that. I think it's fine for now, though, because we'd only
want to unload models in batch mode when we transition to a new stage (when
cleanup should happen anyway)
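   
   For reference, a minimal sketch of what an explicit release could look
like if we ever need it (illustrative only; it assumes we keep the
`MultiProcessShared` handle alongside the acquired proxy, and `load` /
`model_tag` are the names from the hunk above):
   
   ```python
   from apache_beam.utils import multi_process_shared
   
   # Build the shared handle as in the hunk above, then acquire a proxy
   # to the underlying model object.
   shared = multi_process_shared.MultiProcessShared(
       load, tag=model_tag, always_proxy=True)
   model = shared.acquire()
   try:
     ...  # run inference with `model`
   finally:
     # Hand the acquired object back so the shared instance can be torn down.
     shared.release(model)
   ```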
   
   > If we are using the model update functionality via side input, are we
reusing the same multiprocess shared handle during the update, or will each
side input update iteration result in creating a new multiprocess shared
handle without releasing the previous one? (trying to answer this myself atm)
   
   We'll create a new one, but with the same tag. This will result in the
current one being garbage collected.
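   
   Roughly, the update path looks like this sketch (illustrative only,
reusing `load` and `model_tag` from the hunk above):
   
   ```python
   # On a side input update we build a new handle under the same tag and
   # acquire the freshly loaded model; once nothing references the
   # previously acquired proxy, it can be garbage collected.
   model = multi_process_shared.MultiProcessShared(
       load, tag=model_tag, always_proxy=True).acquire()
   ```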


