CRZbulabula commented on code in PR #15956:
URL: https://github.com/apache/iotdb/pull/15956#discussion_r2211941760


##########
iotdb-core/ainode/ainode/core/script.py:
##########
@@ -86,6 +88,8 @@ def main():
     command = arguments[1]
     if command == "start":
         try:
+            mp.set_start_method("spawn", force=True)
+            logger.info(f"current_start_method: {mp.get_start_method()}")

Review Comment:
   ```suggestion
               logger.info(f"Current multiprocess start method: {mp.get_start_method()}")
   ```
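The diff above forces the `"spawn"` start method before launching worker processes. A minimal, self-contained sketch of that pattern (the `configure_start_method` helper is hypothetical, not part of the PR; the behavior of `mp.set_start_method` / `mp.get_start_method` is standard library):

```python
import multiprocessing as mp


def configure_start_method() -> str:
    """Force the 'spawn' start method and report what is in effect.

    force=True overrides any start method chosen earlier in the process.
    'spawn' launches a fresh interpreter per child instead of forking,
    which avoids inheriting state such as CUDA contexts or held locks.
    """
    mp.set_start_method("spawn", force=True)
    return mp.get_start_method()


if __name__ == "__main__":
    print(f"Current multiprocess start method: {configure_start_method()}")
```

`spawn` is already the default on Windows and macOS; forcing it matters mainly on Linux, where the default is `fork`.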



##########
iotdb-core/ainode/ainode/core/inference/inference_request_pool.py:
##########
@@ -123,6 +132,10 @@ def _requests_execute_loop(self):
             self._step()
 
     def run(self):
+        self._model_manager = ModelManager()
+        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+        self.model = self._model_manager.load_model(self.model_id, {}).to(self.device)

Review Comment:
   Is this the only way we can load a model in the request pool? There should be only one ModelManager shared across all AINode processes.
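Within a single process, sharing one manager is commonly done with a singleton. A minimal sketch, assuming a hypothetical `ModelManager` whose `load_model` caches by id (the real ainode `ModelManager` API may differ):

```python
import threading


class ModelManager:
    """Hypothetical process-wide singleton; illustrative only."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking so concurrent callers share one instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance._models = {}
        return cls._instance

    def load_model(self, model_id, config):
        # Cache loaded models so repeated loads return the same object.
        if model_id not in self._models:
            self._models[model_id] = f"model:{model_id}"  # placeholder load
        return self._models[model_id]
```

Note that a Python singleton is only unique per process: with the `"spawn"` start method each worker re-imports the module and builds its own instance, so a manager that is truly shared across all AINode processes would need a dedicated manager process or shared handles passed to the workers at creation time.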



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
