o-nikolas opened a new issue, #41162:
URL: https://github.com/apache/airflow/issues/41162

   ### Description
   
   See this [issue](https://github.com/apache/airflow/issues/41055) and the 
subsequent [PR](https://github.com/apache/airflow/pull/41107) for context. In 
short: there was an undocumented/untested/buggy feature that allowed setting 
parallelism to infinity (by setting it to zero), which was dropped during the 
implementation of Multiple Executor Config. This raised a discussion of whether 
this feature should really be supported. I propose we drop this feature in 
Airflow 3 because:
   
   1. It adds unnecessary complexity to the code
   2. This behaviour can be achieved by setting parallelism to a sufficiently 
high number
   3. (Most importantly) I think it's actually important for users to have to 
do 2); it forces them to think, "Hmm, how parallel should I _actually_ run 
tasks? Is infinity appropriate? Will infinity actually cause degraded 
performance?" Allowing 0 gives folks an easy way to set and forget without 
weighing the implications.
   4. `scheduler.max_tis_per_query` is a very important config for performance, 
and it depends on `core.parallelism`: if it is set to 0 (which means to track 
the value of parallelism), then we may end up with unbounded query sizes, which 
would drastically impact performance. This is an easy trap for users to fall 
into.
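   The trap in 4) can be illustrated with a minimal sketch. This is **not** 
Airflow's actual resolution logic, just a hypothetical `effective_query_size` 
helper showing how the two zero-means-special conventions compound:
   
   ```python
   # Hedged sketch (hypothetical helper, not Airflow code): max_tis_per_query = 0
   # means "track core.parallelism", and parallelism = 0 means "infinite".
   def effective_query_size(max_tis_per_query: int, parallelism: int) -> int:
       """Return the effective cap on task instances per scheduler query.

       A return value of 0 here stands for "no cap at all".
       """
       if max_tis_per_query == 0:
           # Falls through to parallelism -- if that is also 0 ("infinite"),
           # the query size cap silently disappears.
           return parallelism
       return max_tis_per_query

   assert effective_query_size(512, 32) == 512   # explicit cap wins
   assert effective_query_size(0, 32) == 32      # tracks parallelism
   assert effective_query_size(0, 0) == 0        # both zero: unbounded queries
   ```
   
   A user who sets `core.parallelism = 0` without realizing that 
`scheduler.max_tis_per_query` defaults to tracking it would hit the unbounded 
case without any warning.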
   
   
   ### Use case/motivation
   
   _No response_
   
   ### Related issues
   
   https://github.com/apache/airflow/issues/41055
   https://github.com/apache/airflow/pull/41107
   
   ### Are you willing to submit a PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   

