riteshghorse commented on code in PR #28263:
URL: https://github.com/apache/beam/pull/28263#discussion_r1311980103
##########
sdks/python/apache_beam/ml/inference/base.py:
##########
@@ -587,6 +588,14 @@ def run_inference(
keys,
self._unkeyed.run_inference(unkeyed_batch, model, inference_args))
+ # The first time a MultiProcessShared ModelManager is used for inference
+ # from this process, we should increment its max model count
+ if self._max_models_per_worker_hint is not None:
+ lock = threading.Lock()
+ if lock.acquire(blocking=False):
Review Comment:
Got it, thanks!
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]