johnhoran commented on code in PR #44357:
URL: https://github.com/apache/airflow/pull/44357#discussion_r1885709195


##########
providers/src/airflow/providers/cncf/kubernetes/callbacks.py:
##########
@@ -83,7 +123,34 @@ def on_pod_completion(*, pod: k8s.V1Pod, client: client_type, mode: str, **kwarg
         pass
 
     @staticmethod
-    def on_pod_cleanup(*, pod: k8s.V1Pod, client: client_type, mode: str, **kwargs):
+    def on_pod_wrapup(
+        *,
+        pod: k8s.V1Pod,
+        client: client_type,
+        mode: str,
+        operator: KubernetesPodOperator,
+        context: Context,
+        **kwargs,
+    ) -> None:
+        """
+        Invoke this callback after all pod completion callbacks but before the pod is deleted.
+
+        :param pod: the completed pod.
+        :param client: the Kubernetes client that can be used in the callback.
+        :param mode: the current execution mode, it's one of (`sync`, `async`).
+        :param operator: the operator executing the pod.
+        :param context: the Airflow context of the running task.
+        """
+        pass

Review Comment:
   The existing code makes a call to find the pod if the callbacks are attached.
   
https://github.com/apache/airflow/blob/c77c7f003a2458698a1d5a440670b9728783ff78/providers/src/airflow/providers/cncf/kubernetes/operators/pod.py#L614
   Honestly I'd prefer if I was sending a stale reference and that it was the 
responsibility of the callback to get an updated pod if it needs it since we're 
sending the client too.  Especially since its possible a callback might not 
implement the `on_pod_completion` method.  So to maintain existing behaviour 
I'm getting it once.  An alternative might be to test if the 
`on_pod_completion` is implemented in the callback and get an updated pod for 
each callback call, but again this assumes we need an updated pod, which might 
not be the case.
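   To make the "stale reference" alternative concrete, here is a minimal, runnable sketch of a callback that refreshes the pod itself via the client it already receives. The `RefreshingCallback` and `StubClient` names, and the stand-in objects replacing the real `kubernetes` client and `V1Pod`, are illustrative assumptions, not part of this PR:

```python
from types import SimpleNamespace


class RefreshingCallback:
    """Hypothetical callback that treats the passed-in pod as stale."""

    @staticmethod
    def on_pod_wrapup(*, pod, client, mode, **kwargs):
        # Use the client to fetch the current pod state instead of relying
        # on the (possibly stale) reference the operator passed in.
        fresh = client.read_namespaced_pod(
            name=pod.metadata.name, namespace=pod.metadata.namespace
        )
        return fresh.status.phase


# Stand-in client and pod so the sketch runs without a cluster.
class StubClient:
    def read_namespaced_pod(self, name, namespace):
        return SimpleNamespace(status=SimpleNamespace(phase="Succeeded"))


stale_pod = SimpleNamespace(
    metadata=SimpleNamespace(name="demo-pod", namespace="default")
)
phase = RefreshingCallback.on_pod_wrapup(
    pod=stale_pod, client=StubClient(), mode="sync"
)
print(phase)  # Succeeded
```

   Under this design the operator never has to guess whether a callback needs a fresh pod: callbacks that care refresh explicitly, and the rest pay no extra API call.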



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
