amrit2196 commented on issue #42383:
URL: https://github.com/apache/airflow/issues/42383#issuecomment-2402880774

   @jscheffl  The issue we are facing is this: when the task pod runs, the base
container completes, and the sidecars are terminated immediately along with it,
the task events are sent to the Kubernetes executor correctly. However, when I
add delay logic to the sidecar termination, the information is not sent to the
executor even though the containers do terminate and the pod is deleted once
the delay elapses. As a result, the parallelism count drops to 0. I can also
see messages like "skipping event for this task pod as the event has already
been sent".
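   
   For illustration, one way such a sidecar termination delay can be
approximated is with a pod_override whose sidecar keeps running for a while
after it receives SIGTERM. This is only a sketch: the container name, image,
and 120s delay below are placeholders, not our actual configuration.
   
   from kubernetes.client import models as k8s
   
   # Sketch only: a sidecar that delays its own shutdown after SIGTERM.
   delayed_sidecar_executor_config = {
       "pod_override": k8s.V1Pod(
           spec=k8s.V1PodSpec(
               containers=[
                   # the Airflow task container keeps its defaults
                   k8s.V1Container(name="base"),
                   # placeholder sidecar that waits before exiting on SIGTERM
                   k8s.V1Container(
                       name="slow-sidecar",
                       image="busybox",
                       command=[
                           "/bin/sh",
                           "-c",
                           "trap 'sleep 120; exit 0' TERM; while true; do sleep 1; done",
                       ],
                   ),
               ]
           )
       )
   }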
   
   This happens even with a simple hello world DAG:
   
   from airflow import DAG
   from airflow.operators.python import PythonOperator  # python_operator module is deprecated
   from datetime import datetime
   
   # Define a simple Python function that will print "Hello, World!"
   def hello_world():
       print("Hello, World!")
   
   # Define the default arguments for the DAG
   default_args = {
       'owner': 'airflow',
       'start_date': datetime(2024, 10, 8),  # Update to the current date or earlier
       'retries': 1,
   }
   
   # Instantiate the DAG
   with DAG(
       dag_id='hello_world_dag',
       default_args=default_args,
       schedule=None,  # no scheduled runs (schedule_interval is deprecated)
       catchup=False,  # Don't backfill or run previous missed schedules
   ) as dag:
   
       # Define a task that uses PythonOperator to run the hello_world function
       hello_world_task = PythonOperator(
           task_id='print_hello_world',
           python_callable=hello_world,
       )
   
       # Only one task, so there are no dependencies to define
       hello_world_task
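   
   If the pod_override sketch above were wired in, the only change to this DAG
would be passing it to the operator via executor_config (again illustrative,
not our exact setup):
   
       hello_world_task = PythonOperator(
           task_id='print_hello_world',
           python_callable=hello_world,
           # hypothetical: attach the delayed-sidecar pod_override from the sketch above
           executor_config=delayed_sidecar_executor_config,
       )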
   
   
   

