tvalentyn commented on code in PR #31052:
URL: https://github.com/apache/beam/pull/31052#discussion_r1576760164
##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -1434,19 +1519,27 @@ def load():
if isinstance(side_input_model_path, str) and side_input_model_path != '':
model_tag = side_input_model_path
if self._model_handler.share_model_across_processes():
- model = multi_process_shared.MultiProcessShared(
- load, tag=model_tag, always_proxy=True).acquire()
+ models = []
+ for i in range(self._model_handler.model_copies()):
+ models.append(
+ multi_process_shared.MultiProcessShared(
Review Comment:
do we need to explicitly release the multi-process shared handle for it to
be freed up? Guessing we currently don't do that, per the TODO on line 1516?
If we are using the model-update functionality via side input, are we
reusing the same multi-process shared handle during the update, or will each
side-input update allocate more memory?
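To make the concern concrete, here is a minimal, single-process sketch of the tag-keyed acquire/release pattern the question is about. This is a hypothetical simplification, not Beam's actual `multi_process_shared` implementation: each distinct tag (e.g. a side-input model path) maps to one shared instance with a refcount, and the instance is only freed once every holder releases it. If nothing ever calls `release`, a side-input update that acquires under a new tag would leave the old model resident.

```python
import threading


class SharedModelCache:
    """Simplified illustration of tag-keyed acquire/release semantics.

    NOT Beam's multi_process_shared module; a sketch of the lifecycle
    being asked about in the review comment above.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}  # tag -> [instance, refcount]

    def acquire(self, tag, loader):
        with self._lock:
            if tag not in self._entries:
                # First acquirer under this tag pays the load cost.
                self._entries[tag] = [loader(), 0]
            entry = self._entries[tag]
            entry[1] += 1
            return entry[0]

    def release(self, tag):
        with self._lock:
            entry = self._entries[tag]
            entry[1] -= 1
            if entry[1] == 0:
                # Last holder released: drop the instance so an update
                # that switches to a new tag does not leak the old one.
                del self._entries[tag]


cache = SharedModelCache()
m1 = cache.acquire('gs://bucket/model/v1', lambda: object())
m2 = cache.acquire('gs://bucket/model/v1', lambda: object())
assert m1 is m2  # same tag reuses the same shared instance
cache.release('gs://bucket/model/v1')
cache.release('gs://bucket/model/v1')  # refcount reaches zero, freed
assert 'gs://bucket/model/v1' not in cache._entries
```

Under this model, acquiring under a fresh tag for every side-input update without a matching release of the old tag would accumulate one retained model per update, which is the memory-growth scenario the comment raises.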
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]