VladimirYushkevich opened a new issue, #39680:
URL: https://github.com/apache/airflow/issues/39680

   ### Apache Airflow version
   
   2.9.1
   
   ### If "Other Airflow 2 version" selected, which one?
   
   _No response_
   
   ### What happened?
   
   We are running Airflow on Kubernetes (GCP) with a Postgres database (Cloud 
SQL). We are using `pgbouncer` as a DB connection pool. We have a single DAG in 
a separate Airflow worker pool that runs every hour and creates 1000+ 
Dynamically Mapped Tasks. As mentioned in 
https://github.com/apache/airflow/issues/35267#issuecomment-2113027901 
upgrading to `2.9.1` helped to eliminate long-running transactions. However, it 
introduced another issue that we did not encounter in the previous version:
   * The Postgres instance started reporting many `could not obtain lock on row 
in relation "dag_run"` errors:
   ```
   2024-05-17 09:49:19.191 UTC [3586765]: [131-1] 
db=airflow,[email protected] ERROR:  could not obtain 
lock on row in relation "dag_run"
   ```
   * We also noticed a significant spike in CPU:
   ![Screenshot 2024-05-17 at 12 32 
12](https://github.com/apache/airflow/assets/6008151/0b061b4d-82ab-4dc5-81fa-cf9ec5188974)
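   For context (my assumption, not stated elsewhere in this report): Postgres only raises `could not obtain lock on row` when a row lock is requested with `NOWAIT`, and I believe the Airflow scheduler takes row-level locks on `dag_run` through SQLAlchemy's `with_for_update`. A minimal sketch of the kind of query involved, using a hypothetical trimmed-down stand-in for the `dag_run` model:
   ```
   # Illustrative only: a simplified stand-in for the dag_run table, showing
   # the FOR UPDATE NOWAIT pattern that makes Postgres fail fast with
   # "could not obtain lock on row" instead of waiting for the lock.
   from sqlalchemy import Column, Integer, String, select
   from sqlalchemy.dialects import postgresql
   from sqlalchemy.orm import declarative_base

   Base = declarative_base()


   class DagRun(Base):  # hypothetical, trimmed-down model
       __tablename__ = "dag_run"
       id = Column(Integer, primary_key=True)
       dag_id = Column(String)


   # nowait=True tells Postgres to error out immediately if another
   # transaction already holds the row lock, rather than queueing behind it.
   stmt = (
       select(DagRun)
       .where(DagRun.dag_id == "retryable_dag")
       .with_for_update(nowait=True)
   )
   print(stmt.compile(dialect=postgresql.dialect()))
   ```
   With many concurrently finishing mapped tasks all touching the same `dag_run` row, such `NOWAIT` lock attempts colliding would be consistent with both the error volume and the CPU spike.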
   
   ### What you think should happen instead?
   
   _No response_
   
   ### How to reproduce
   
   Create a DAG with the following tasks:
   ![Screenshot 2024-05-17 at 12 47 
25](https://github.com/apache/airflow/assets/6008151/78e0d0b0-9773-417f-84e0-e3ad02e4008f)
   ```
   # DAG definition
   import os

   import pendulum
   from airflow.decorators import dag, task, task_group
   from airflow.operators.python import PythonOperator
   from airflow.providers.google.cloud.hooks.gcs import GCSHook

   # send_dag_failure_message_to_slack, GCS_BUCKET, GCS_PREFIX and GCS_PATH
   # are defined elsewhere in our project.


   @dag(
       dag_id="retryable_dag",
       schedule="@hourly",
       start_date=pendulum.today("UTC").add(hours=-1),
       is_paused_upon_creation=False,
       max_active_runs=1,
       default_args={
           "on_failure_callback": send_dag_failure_message_to_slack,
           "pool": "retryable_pool",
           "max_active_tis_per_dagrun": 50,
       },
   )
   def retryable_dag() -> None:
       dag_configs = PythonOperator(
           task_id="load_dag_configs", python_callable=list_files
       )

       process_dag_config.expand(source=dag_configs.output)


   @task_group
   def process_dag_config(source: str) -> None:
       config_file = extract_dag_config(source=source)
       trigger_dag_run(config_file=config_file)
       delete_dag_config(config_file=config_file)


   def list_files() -> list[str]:
       gcs_hook = GCSHook(impersonation_chain="SA")

       return gcs_hook.list(
           bucket_name=os.getenv(GCS_BUCKET),
           prefix=f"{os.getenv(GCS_PREFIX, GCS_PATH)}/",
           match_glob="**/*.json",
       )


   @task
   def extract_dag_config...


   @task
   def trigger_dag_run...


   @task
   def delete_dag_config...


   retryable_dag()
   ```
   
   
   ### Operating System
   
   Debian GNU/Linux 12 (bookworm)
   
   ### Versions of Apache Airflow Providers
   
   apache-airflow-providers-celery==3.6.2
   apache-airflow-providers-common-io==1.3.1
   apache-airflow-providers-common-sql==1.12.0
   apache-airflow-providers-datadog==3.5.1
   apache-airflow-providers-fab==1.0.4
   apache-airflow-providers-ftp==3.8.0
   apache-airflow-providers-google==10.17.0
   apache-airflow-providers-http==4.10.1
   apache-airflow-providers-imap==3.5.0
   apache-airflow-providers-postgres==5.10.2
   apache-airflow-providers-slack==8.6.2
   apache-airflow-providers-smtp==1.6.1
   apache-airflow-providers-sqlite==3.7.1
   
   ### Deployment
   
   Official Apache Airflow Helm Chart
   
   ### Deployment details
   
   _No response_
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
