davidharkis commented on issue #47283: URL: https://github.com/apache/airflow/issues/47283#issuecomment-2771790082
This is actually easier to replicate than I first thought, and it does not require large numbers of tasks/pods:

1. Deploy Airflow onto a K8s cluster (official Helm chart) and configure two K8s connections: one for the same cluster and one for an external cluster. (Or two external clusters; the deployment location doesn't really matter.)
2. Add two DAGs that each create 8 or more `deferrable=True` KubernetesPodOperator tasks, with each DAG using a different `kubernetes_conn_id`. (I used a for loop to generate multiple tasks that run for 60 seconds before succeeding.)
3. Run both DAGs simultaneously until failures (404s) are observed.

It seems like there's a fundamental issue with connections being reused incorrectly.
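For reference, here is a minimal sketch of one of the two DAGs from step 2. The DAG ID, connection IDs, namespace, and image are placeholders; the second DAG is identical except that it points at the other `kubernetes_conn_id`:

```python
# Hypothetical reproduction DAG: 8 deferrable KubernetesPodOperator tasks,
# all using one K8s connection. A second, otherwise identical DAG uses the
# other connection ID so both clusters are hit concurrently.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id="kpo_deferrable_cluster_a",  # placeholder; second DAG: kpo_deferrable_cluster_b
    start_date=datetime(2025, 1, 1),
    schedule=None,
    catchup=False,
):
    for i in range(8):
        KubernetesPodOperator(
            task_id=f"sleep_pod_{i}",
            kubernetes_conn_id="k8s_cluster_a",  # second DAG: "k8s_cluster_b"
            name=f"sleep-pod-{i}",
            namespace="default",
            image="busybox",
            cmds=["sh", "-c", "sleep 60"],  # pods run ~60s before succeeding
            deferrable=True,
        )
```

Triggering both DAGs at the same time from the UI was enough to surface the 404s for me.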
