potiuk commented on issue #26818:
URL: https://github.com/apache/airflow/issues/26818#issuecomment-1264675335

   Then you are asking for something completely different. The context does not 
contain this information: it is the execution context of the task, not the 
context of how the runner was killed. All of these:
   
   - The task failed because the worker pod did not start in time
   - The task failed because the k8s cluster failed to auto-scale
   - The task failed because an out-of-memory situation occurred (OOMKiller)
   - The task failed because a running pod was stopped on a preemptible node
   
   We do not have this information, as it is a purely deployment-specific 
thing, and you will not see it in task callbacks: Airflow task logic 
should be independent of the underlying executor and runner. It should not 
matter whether you run KubernetesExecutor, CeleryExecutor, 
CeleryKubernetesExecutor or LocalExecutor.
   
   What you are talking about is very specific, Kubernetes-only behaviour, 
including the fact that you might have preemptible node failures. We do not 
expose those to tasks. The only information about why a task failed is placed 
in the logs, and tasks cannot really react differently in those cases.
   
   If you want to implement such custom behaviour, what you need to do is 
extend KubernetesExecutor and create your own executor that analyses the 
status returned by your K8S cluster in either "run_pod_async" or "monitor_pod" 
methods and raises different exceptions to signal what needs to be done. For 
example, your code could raise AirflowException if you want the task to retry 
on a certain failure type, or AirflowFailException if you don't. And you can 
write any other custom code you want in your custom variant of 
KubernetesExecutor.
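   To make the idea concrete, here is a minimal sketch of the decision logic 
such a custom executor could apply. The pod failure reasons and the 
`react_to_pod_failure` helper are hypothetical, and stand-in exception classes 
are defined so the sketch runs without an Airflow installation; in real code 
you would import `AirflowException` and `AirflowFailException` from 
`airflow.exceptions` and raise them from your extended executor:

   ```python
   # Stand-ins for airflow.exceptions.AirflowException / AirflowFailException,
   # used here only so this sketch is self-contained.
   class AirflowException(Exception):
       """Stand-in: in Airflow, raising this lets the task retry."""

   class AirflowFailException(Exception):
       """Stand-in: in Airflow, raising this fails the task with no retries."""

   # Hypothetical set of pod failure reasons worth retrying; these names are
   # illustrative, not an official Kubernetes or Airflow enumeration.
   RETRYABLE_REASONS = {"OOMKilled", "NodePreempted", "PodStartTimeout"}

   def react_to_pod_failure(reason: str) -> None:
       """Raise the exception that signals how Airflow should handle the failure."""
       if reason in RETRYABLE_REASONS:
           raise AirflowException(f"Pod failed ({reason}); letting the task retry")
       raise AirflowFailException(f"Pod failed ({reason}); failing without retry")
   ```

   In a custom KubernetesExecutor you would call logic like this from the 
place where the pod status comes back from the cluster, instead of as a 
standalone helper.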


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
