potiuk commented on code in PR #31389:
URL: https://github.com/apache/airflow/pull/31389#discussion_r1285117395
##########
airflow/providers/cncf/kubernetes/utils/pod_manager.py:
##########
@@ -153,6 +153,31 @@ def container_is_terminated(pod: V1Pod, container_name: str) -> bool:
     return container_status.state.terminated is not None
+def container_is_completed(pod: V1Pod, container_name: str) -> bool:
+    """
+    Examines V1Pod ``pod`` to determine whether ``container_name`` is completed.
+    If that container is present and completed, returns True. Returns False otherwise.
+    """
+    container_status = get_container_status(pod, container_name)
+    if not container_status:
+        return False
+    return container_status.state.terminated is not None
+
+
+def container_is_succeeded(pod: V1Pod, container_name: str) -> bool:
+    """
+    Examines V1Pod ``pod`` to determine whether ``container_name`` is completed and succeeded.
+    If that container is present and completed and succeeded, returns True. Returns False otherwise.
+    """
+    if not container_is_completed(pod, container_name):
+        return False
+
+    container_status = get_container_status(pod, container_name)
+    if not container_status:
+        return False
+    return container_status.state.terminated.exit_code == 0
+
+
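The logic of the two new helpers can be exercised in isolation with lightweight stand-ins for the kubernetes client models (``V1ContainerStatus``, ``V1ContainerState``, ``V1ContainerStateTerminated``). This is a minimal sketch for illustration, not the provider's actual code; the dataclass names and the ``get_container_status`` indirection from the diff are replaced with direct status arguments:

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical stand-ins for the kubernetes client models, just to
# exercise the "completed" vs "succeeded" distinction from the diff.
@dataclass
class Terminated:
    exit_code: int


@dataclass
class State:
    terminated: Optional[Terminated] = None


@dataclass
class ContainerStatus:
    name: str
    state: State


def container_is_completed(status: Optional[ContainerStatus]) -> bool:
    # Completed == the container has reached a "terminated" state,
    # regardless of its exit code.
    if not status:
        return False
    return status.state.terminated is not None


def container_is_succeeded(status: Optional[ContainerStatus]) -> bool:
    # Succeeded == completed AND terminated with exit code 0.
    if not container_is_completed(status):
        return False
    return status.state.terminated.exit_code == 0


ok = ContainerStatus("base", State(Terminated(exit_code=0)))
failed = ContainerStatus("base", State(Terminated(exit_code=1)))
running = ContainerStatus("base", State())

print(container_is_succeeded(ok))       # True
print(container_is_succeeded(failed))   # False: completed, but exit code 1
print(container_is_completed(failed))   # True: termination alone counts
print(container_is_completed(running))  # False: no terminated state yet
```

Note how a failed container is "completed" but not "succeeded" — the split into two helpers lets callers distinguish the two cases.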
Review Comment:
Well. You have a conflict now and you are essentially 271 commits behind. I
suggest you rebase rather than merge and resolve the conflicts you have now -
it's not clear whether your merge was done correctly. When you rebase, you will
see only your changes on top of the current main branch.
And yes, for changes like this, it can happen that they take a long time to
merge, that you have to resolve conflicts and rebase a few times, and that you
run into conflicts with other changes.
I strongly suggest rebasing it (and resolving the conflicts).
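The suggested rebase workflow can be sketched in a throwaway repository. All branch, file, and commit names below are made up for the demo; in the real PR you would fetch the upstream remote and rebase your feature branch onto its main branch instead:

```shell
# Demonstrate rebasing a feature branch onto a moved main branch
# in a disposable repo (names are hypothetical).
set -e
work=$(mktemp -d)
cd "$work"
git init -q -b main demo
cd demo
git config user.email demo@example.com
git config user.name demo

# Shared starting point.
echo base > file.txt && git add file.txt && git commit -qm "base"

# Your feature work on its own branch.
git checkout -qb feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature"

# Meanwhile, main moves ahead (the "271 commits behind" situation).
git checkout -q main
echo upstream > upstream.txt && git add upstream.txt && git commit -qm "upstream change"

# Rebase: replay your commits on top of the new main, so the history
# stays linear and only your own changes sit above main.
git checkout -q feature
git rebase -q main
git log --oneline
```

After the rebase, `git log` shows a linear history with the feature commit on top of the upstream change; pushing this typically requires `git push --force-with-lease` on the PR branch, since the rebased commits have new hashes.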
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]