[ https://issues.apache.org/jira/browse/AIRFLOW-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17029824#comment-17029824 ]

Dorran Howell commented on AIRFLOW-4526:
----------------------------------------

cc [~ash]

My team has also been encountering this bug. After some research, it appears 
to be a bug in the Kubernetes client API: in some cases, the stream returned 
by a GET request for pod logs never exits when configured to "follow" 
(example issues below). The problem has been somewhat ephemeral, with a 
number of Kubernetes updates claiming to have addressed it. 

 

https://github.com/kubernetes/kubernetes/issues/44340
https://github.com/kubernetes/kubernetes/issues/59477
https://github.com/kubernetes/kubernetes/issues/28369

 

On the Airflow side, I'm not sure whether you all would like to address this, 
since it is technically an upstream issue. My team has patched it internally 
because it is a major problem for us (these infinitely running tasks slowly 
reduce the number of real tasks we can schedule with the CeleryExecutor), and 
I would be happy to help work on a PR if you would like to address it in 
Airflow. IMO it would be a worthwhile change, given how long this issue has 
persisted in Kubernetes and the fact that other Airflow forks have had to 
deal with it: [https://github.com/discordapp/incubator-airflow/pull/8]
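As a rough illustration only (not our actual internal patch), the idea is to replace the blocking follow=True stream with periodic follow=False log reads interleaved with pod-phase checks, so a silent pod can never wedge the task. The helper callables below (`read_full_log`, `read_phase`) are hypothetical stand-ins for `read_namespaced_pod_log(..., follow=False)` and `read_namespaced_pod(...).status.phase`; they are not the real Airflow pod_launcher API:

```python
import time


def monitor_pod_without_follow(read_full_log, read_phase,
                               poll_interval=1.0, sleep=time.sleep):
    """Poll pod logs and pod phase together instead of blocking on a
    follow=True log stream that may never terminate.

    read_full_log() -- hypothetical wrapper returning the pod's full log
                       so far as a string (follow=False, so it returns
                       promptly even if the pod has gone quiet).
    read_phase()    -- hypothetical wrapper returning the pod phase,
                       e.g. "Running", "Succeeded", or "Failed".
    """
    emitted = 0        # how many characters of the log we've already seen
    collected = []     # in Airflow these lines would go to the task log
    while True:
        log = read_full_log()
        new = log[emitted:]
        if new:
            collected.append(new)
        emitted = len(log)
        # The phase check runs every iteration, so a terminal state is
        # detected even after an arbitrarily long gap without log output.
        phase = read_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, "".join(collected)
        sleep(poll_interval)
```

Reading the full log on each poll and emitting only the new suffix avoids duplicating lines; a real patch would likely use `since_seconds` or `tail_lines` instead to keep requests cheap for chatty pods.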

 

 

> KubernetesPodOperator gets stuck in Running state when get_logs is set to 
> True and there is a long gap without logs from pod
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: AIRFLOW-4526
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-4526
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: operators
>         Environment: Azure Kubernetes Service cluster with Airflow based on 
> puckel/docker-airflow
>            Reporter: Christian Lellmann
>            Priority: Major
>              Labels: kubernetes
>             Fix For: 2.0.0
>
>
> When the `get_logs` parameter of the KubernetesPodOperator is set to True, 
> the operator task gets stuck in the Running state if the pod run by the 
> task (in_cluster mode) writes some logs, stops logging for a longer time (a 
> few minutes), and then continues. The later log output is never fetched and 
> the pod state is no longer checked, so the completion of the pod isn't 
> recognized and the task never finishes.
>  
> Assumption:
> In the `monitor_pod` method of the pod launcher 
> ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L97])
>  the `read_namespaced_pod_log` method of the Kubernetes client probably 
> gets stuck in the `follow=True` stream 
> ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L108])
>  because, after a period without logs from the pod, the method no longer 
> forwards the subsequent logs.
> As a result, the `pod_launcher` never gets past the log stream to check the 
> pod state 
> ([https://github.com/apache/airflow/blob/master/airflow/kubernetes/pod_launcher.py#L118])
>  and doesn't recognize the completed state, so the task remains in Running.
> When the `get_logs` parameter is disabled, everything works because the log 
> stream is skipped.
>  
> Suggestion:
> Poll the logs actively, without the `follow` parameter set to True, in 
> parallel with the pod state checking.
> That way, the logs can be fetched without the described connection problem 
> while the pod state is checked concurrently, so the end states of the pods 
> are definitely recognized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)