ashb commented on code in PR #58365:
URL: https://github.com/apache/airflow/pull/58365#discussion_r2538636398


##########
airflow-core/src/airflow/executors/local_executor.py:
##########
@@ -186,9 +193,14 @@ def _check_workers(self):
         # via `sync()` a few times before the spawned process actually starts picking up messages. Try not to
         # create too much
         if num_outstanding and len(self.workers) < self.parallelism:
-            # This only creates one worker, which is fine as we call this directly after putting a message on
-            # activity_queue in execute_async
-            self._spawn_worker()
+            if self.is_mp_using_fork:
+                # This creates the maximum number of worker processes at once

Review Comment:
   This used to be handled with a multiprocessing pool (or maybe it was a concurrent.futures pool) that spawned N procs outside of our control.

   There is no need for us to spawn these procs eagerly; it was just "how it was done before". (I think at one point I tried having them spawn on demand, and it was just a bit more of a pain to support and manage, and I didn't have the "itch" to go any further on that path.)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
