Aitozi commented on a change in pull request #28:
URL: https://github.com/apache/flink-kubernetes-operator/pull/28#discussion_r815968841
##########
File path: flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
##########
@@ -108,21 +111,38 @@ private static void mergeInto(JsonNode toNode, JsonNode fromNode) {
}
}
-    public static void deleteCluster(FlinkDeployment flinkApp, KubernetesClient kubernetesClient) {
+ public static void deleteCluster(
+ FlinkDeployment flinkApp,
+ KubernetesClient kubernetesClient,
+ boolean deleteHaConfigmaps) {
deleteCluster(
flinkApp.getMetadata().getNamespace(),
flinkApp.getMetadata().getName(),
- kubernetesClient);
+ kubernetesClient,
+ deleteHaConfigmaps);
}
public static void deleteCluster(
-            String namespace, String clusterId, KubernetesClient kubernetesClient) {
+ String namespace,
+ String clusterId,
+ KubernetesClient kubernetesClient,
+ boolean deleteHaConfigmaps) {
kubernetesClient
.apps()
.deployments()
.inNamespace(namespace)
.withName(clusterId)
.cascading(true)
.delete();
+
+ if (deleteHaConfigmaps) {
Review comment:
Makes sense to me; I'm OK with merging the current shape, with some more comments added for clarification.
That said, I suspect we will have a hard time doing an equally clean teardown for other HA providers. It is somewhat outside the operator's responsibility (and ability); maybe we should instead extend Flink itself to support something like `deleteAndCleanUpHA`? Do you have any input on this? cc @wangyang0918
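For context, a rough sketch of what the elided `deleteHaConfigmaps` branch could look like on the operator side. This is a hypothetical illustration, not the PR's actual implementation: the label keys and values below are assumptions based on how Flink's native Kubernetes HA mode typically labels its ConfigMaps, and `HaCleanupSketch` is an invented name.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the code from this PR): build the label selector
// an operator-side cleanup could use to find the HA ConfigMaps of a cluster.
public class HaCleanupSketch {

    // Assumed labels that Flink's Kubernetes HA services attach to their
    // ConfigMaps; verify against the Flink version actually deployed.
    static Map<String, String> haConfigMapLabels(String clusterId) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", clusterId);
        labels.put("type", "flink-native-kubernetes");
        labels.put("configmap-type", "high-availability");
        return labels;
    }

    // With a fabric8 KubernetesClient, the cleanup branch could then be:
    //
    //     if (deleteHaConfigmaps) {
    //         kubernetesClient
    //                 .configMaps()
    //                 .inNamespace(namespace)
    //                 .withLabels(haConfigMapLabels(clusterId))
    //                 .delete();
    //     }

    public static void main(String[] args) {
        System.out.println(haConfigMapLabels("my-flink-cluster"));
    }
}
```

A label-selector delete like this only covers the Kubernetes HA provider, which is exactly why cleanup for other HA backends (e.g. ZooKeeper) would be awkward to replicate in the operator.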
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]