gyfora commented on a change in pull request #28:
URL: https://github.com/apache/flink-kubernetes-operator/pull/28#discussion_r816481099



##########
File path: flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/FlinkUtils.java
##########
@@ -108,21 +116,109 @@ private static void mergeInto(JsonNode toNode, JsonNode fromNode) {
         }
     }
 
-    public static void deleteCluster(FlinkDeployment flinkApp, KubernetesClient kubernetesClient) {
+    public static void deleteCluster(
+            FlinkDeployment flinkApp,
+            KubernetesClient kubernetesClient,
+            boolean deleteHaConfigmaps) {
         deleteCluster(
                 flinkApp.getMetadata().getNamespace(),
                 flinkApp.getMetadata().getName(),
-                kubernetesClient);
+                kubernetesClient,
+                deleteHaConfigmaps);
     }
 
+    /**
+     * Delete Flink kubernetes cluster by deleting the kubernetes resources directly. Optionally
+     * allows deleting the native kubernetes HA resources as well.
+     *
+     * @param namespace Namespace where the Flink cluster is deployed
+     * @param clusterId ClusterId of the Flink cluster
+     * @param kubernetesClient Kubernetes client
+     * @param deleteHaConfigmaps Flag to indicate whether k8s HA metadata should be removed as well
+     */
     public static void deleteCluster(
-            String namespace, String clusterId, KubernetesClient kubernetesClient) {
+            String namespace,
+            String clusterId,
+            KubernetesClient kubernetesClient,
+            boolean deleteHaConfigmaps) {
         kubernetesClient
                 .apps()
                 .deployments()
                 .inNamespace(namespace)
-                .withName(clusterId)
+                .withName(KubernetesUtils.getDeploymentName(clusterId))
                 .cascading(true)
                 .delete();
+
+        if (deleteHaConfigmaps) {
+            // We need to wait for cluster shutdown, otherwise configmaps might be recreated
+            waitForClusterShutdown(kubernetesClient, namespace, clusterId);
+            kubernetesClient
+                    .configMaps()
+                    .inNamespace(namespace)
+                    .withLabels(
+                            KubernetesUtils.getConfigMapLabels(
+                                    clusterId, LABEL_CONFIGMAP_TYPE_HIGH_AVAILABILITY))
+                    .delete();
+        }
+    }
+
+    /** We need this due to the buggy flink kube cluster client behaviour for now. */

Review comment:
       What I originally meant by this is that the deployment client won't let you submit a new Flink job as long as a service is still around, even if it is marked for deletion. Maybe this is not even a bug, but in any case the comment is now irrelevant with the changed behaviour, so I will remove it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]