wangyang0918 commented on a change in pull request #28:
URL: https://github.com/apache/flink-kubernetes-operator/pull/28#discussion_r816388033
##########
File path: flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
##########
@@ -108,21 +111,38 @@ private static void mergeInto(JsonNode toNode, JsonNode fromNode) {
}
}
-    public static void deleteCluster(FlinkDeployment flinkApp, KubernetesClient kubernetesClient) {
+ public static void deleteCluster(
+ FlinkDeployment flinkApp,
+ KubernetesClient kubernetesClient,
+ boolean deleteHaConfigmaps) {
deleteCluster(
flinkApp.getMetadata().getNamespace(),
flinkApp.getMetadata().getName(),
- kubernetesClient);
+ kubernetesClient,
+ deleteHaConfigmaps);
}
public static void deleteCluster(
-            String namespace, String clusterId, KubernetesClient kubernetesClient) {
+ String namespace,
+ String clusterId,
+ KubernetesClient kubernetesClient,
+ boolean deleteHaConfigmaps) {
kubernetesClient
.apps()
.deployments()
.inNamespace(namespace)
.withName(clusterId)
.cascading(true)
.delete();
+
+ if (deleteHaConfigmaps) {
Review comment:
The most appropriate way to clean up the HA data is
`HighAvailabilityServices#closeAndCleanupAllData()`. It should work for both
ZooKeeper and K8s HA. But I agree we could document the limitation here and do
the improvement later.
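
For context, Flink's native-Kubernetes HA ConfigMaps carry identifying labels, so a label-selector delete is one interim way to clean them up until `HighAvailabilityServices#closeAndCleanupAllData()` is wired in. A minimal sketch; the exact label keys and values below are assumptions based on Flink's native-Kubernetes conventions, and `HaCleanupSketch` is a hypothetical helper, not code from this PR:

```java
import java.util.HashMap;
import java.util.Map;

public class HaCleanupSketch {

    // Hypothetical helper: builds the label selector that Flink's native
    // Kubernetes HA services are assumed to attach to their ConfigMaps.
    public static Map<String, String> haConfigMapLabels(String clusterId) {
        Map<String, String> labels = new HashMap<>();
        labels.put("app", clusterId);                       // cluster identifier
        labels.put("type", "flink-native-kubernetes");      // resource owner
        labels.put("configmap-type", "high-availability");  // HA-specific marker
        return labels;
    }

    public static void main(String[] args) {
        System.out.println(haConfigMapLabels("my-cluster"));
        // With a fabric8 KubernetesClient, the deletion itself would then be:
        //   kubernetesClient
        //       .configMaps()
        //       .inNamespace(namespace)
        //       .withLabels(haConfigMapLabels(clusterId))
        //       .delete();
    }
}
```

Note this label-based delete only covers the Kubernetes HA backend, which is exactly the limitation the comment suggests documenting: ZooKeeper HA data would be untouched, whereas `closeAndCleanupAllData()` handles both.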
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]