FloChehab edited a comment on pull request #10230:
URL: https://github.com/apache/airflow/pull/10230#issuecomment-679304807


   > Hi @FloChehab,
   > 
   > Can you please post the scheduler logs for the scheduler where it is up for retry + the DAG code? Seems odd that on second restart it would come out as a success and just want to make sure.
   
   Here you go:
   
   DAG:
   ```python
    from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
   from airflow.kubernetes.secret import Secret
   from airflow.models import DAG
   from airflow.utils.dates import days_ago
   
   
   default_args = {
       'owner': 'Airflow',
       'start_date': days_ago(2),
       'retries': 3
   }
   
   with DAG(
       dag_id='bug_kuberntes_pod_operator',
       default_args=default_args,
       schedule_interval=None
   ) as dag:
       k = KubernetesPodOperator(
           namespace='dev-airflow-helm',
           image="ubuntu:16.04",
           cmds=["bash", "-cx"],
           arguments=["sleep 100"],
           name="airflow-test-pod",
           task_id="task",
           get_logs=True,
           is_delete_operator_pod=True,
       )
   ```
   
    [Logs during first restart (stuck in up_for_retry in DB and UI)](https://github.com/apache/airflow/files/5119714/up_to_retry.log)
   
   
    [Logs during second restart](https://github.com/apache/airflow/files/5119738/second_restart.log)
   
   
    For some reason I didn't get the `DEBUG - Changing state: ` line this time; but the task was correctly labelled "success" after the second restart.
