ephraimbuddy commented on pull request #21829:
URL: https://github.com/apache/airflow/pull/21829#issuecomment-1057445922


   > Ah, I see. Well, alternatively we could amend the query to filter on pool 
in pools or join to the pools table.
   
   In this case, the user won't know why the scheduler is not scheduling the 
task that references a non-existent pool.
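   
   For context, here is a rough SQLAlchemy sketch of that kind of filter (not 
the actual scheduler query): task instances pointing at a non-existent pool 
simply vanish from the result, which is exactly why the user would never learn 
what went wrong.
   
   ```python
   from airflow.models import Pool, TaskInstance
   
   def schedulable_tis(session):
       # Inner-joining TaskInstance.pool to Pool.pool silently drops any
       # task instance whose pool does not exist; no error ever surfaces.
       return session.query(TaskInstance).join(
           Pool, TaskInstance.pool == Pool.pool
       )
   ```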
   
   > I'm not sure we'd need to throw an error there. Couldn't we just not 
create the dag run? Or in any case, if we filter on "has existent pool" in the 
query above, it wouldn't even be considered for scheduling.
   
   The DAG would keep coming up for the scheduler to create a dagrun for, even 
when ignored, and would block other eligible DAGs from getting dagruns created, 
as the sketch below illustrates.
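   
   To illustrate the concern, a runnable toy sketch with plain dicts (not the 
actual scheduler code; the batch cap corresponds to the real `[scheduler] 
max_dagruns_to_create_per_loop` config, everything else is made up):
   
   ```python
   def create_dag_runs(dags_needing_runs, existing_pools, max_per_loop=10):
       """Illustrative only: silently skipping a dag starves the others."""
       # The scheduler only considers a capped batch of dags per loop.
       for dag in dags_needing_runs[:max_per_loop]:
           if any(pool not in existing_pools for pool in dag["pools"]):
               # Nothing marks the dag as handled, so it is selected again
               # on the next loop and keeps occupying a slot in the capped
               # batch, starving other eligible dags of dagrun creation.
               continue
           print(f"creating dagrun for {dag['dag_id']}")
   ```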
   
   > Perhaps alongside import_errors we could add an attribute 
configuration_errors or something to DagBag and then use this to bubble up a 
flash alert like 
[here](https://github.com/apache/airflow/blob/08575ddd8a72f96a3439f73e973ee9958188eb83/airflow/www/views.py#L784-L788).
   
   > If we had a configuration_errors thing, this would not be a hard error but 
something we'd want to warn the user about. In this scenario we could also warn 
if the pool has size 0. Thinking out loud a bit here.
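   
   If I follow, the idea is roughly this sketch (`configuration_errors` does 
not exist on `DagBag` today, and `dagbag` here stands for a DagBag instance in 
a view; the `flash` call mirrors how `import_errors` are surfaced in the linked 
`views.py` code):
   
   ```python
   from flask import flash
   
   # Hypothetical: DagBag grows a configuration_errors dict alongside its
   # existing import_errors, mapping a dag file location to a warning message.
   for filename, message in dagbag.configuration_errors.items():
       flash(f"DAG configuration issue in [{filename}]: {message}", "warning")
   ```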
   
   We currently raise an import error if pool slots are less than 1, see 
[here](https://github.com/apache/airflow/blob/5b45a78dca284e504280964951f079fca1866226/airflow/models/baseoperator.py#L783-L785).
 That's why I still think we should surface this as an import error. I could 
have validated the pool name right below that check, but because we still 
commit when we use `provide_session`, the scheduler would crash.
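   
   For reference, the existing check and a sketch of where the pool-name check 
would have gone (`validate_pool_exists` is illustrative, not committed code):
   
   ```python
   from airflow.models import Pool
   from airflow.utils.session import provide_session
   
   # Existing parse-time check, paraphrasing the linked baseoperator.py lines:
   #     if self.pool_slots < 1:
   #         raise ValueError(
   #             f"pool slots for {self.task_id} cannot be less than 1")
   
   @provide_session
   def validate_pool_exists(task, session=None):
       """Sketch of the pool-name check attempted below that validation."""
       if session.query(Pool).filter(Pool.pool == task.pool).one_or_none() is None:
           # Per the concern above: raising here during DAG parsing crashes
           # the scheduler, because the session is still committed when
           # wrapped in provide_session.
           raise ValueError(f"pool {task.pool!r} does not exist")
   ```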
   
   I may not be fully understanding the idea you are proposing. If you don't 
mind, could you create a PR so we can compare the two approaches?
   
   
   

