This is an automated email from the ASF dual-hosted git repository.

ulyssesyou pushed a commit to branch branch-1.3
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi.git


The following commit(s) were added to refs/heads/branch-1.3 by this push:
     new bb9aa66  [KYUUBI #1330] fix tool cleaner process bug
bb9aa66 is described below

commit bb9aa660c8b59e12a8e93df02f1e510f1ca79f82
Author: zwangsheng <[email protected]>
AuthorDate: Fri Nov 5 11:26:13 2021 +0800

    [KYUUBI #1330] fix tool cleaner process bug
    
    <!--
    Thanks for sending a pull request!
    
    Here are some tips for you:
      1. If this is your first time, please read our contributor guidelines: https://kyuubi.readthedocs.io/en/latest/community/contributions.html
      2. If the PR is related to an issue in https://github.com/apache/incubator-kyuubi/issues, add '[KYUUBI #XXXX]' in your PR title, e.g., '[KYUUBI #XXXX] Your PR title ...'.
      3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][KYUUBI #XXXX] Your PR title ...'.
    -->
    
    ### _Why are the changes needed?_
    <!--
    Please clarify why the changes are needed. For instance,
      1. If you add a feature, you can talk about the use case of it.
      2. If you fix a bug, you can clarify why it is a bug.
    -->
    When using the tool to clean up residual Spark-on-K8s cache files, I encountered a case where the cleaner process stopped silently without reporting any error.
    After analysis, it was found that `needToDeepClean` hit an uncaught exception when it was run against the mounted SSD.
    Now a `try`/`catch` is added to keep the function running, and deep cleaning is performed by default if a problem occurs; otherwise the disk space could still overrun after the missed cleaning opportunity.
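    For reference, the pattern the patch applies is "guard an external-command check with `NonFatal` and fail safe". A minimal standalone sketch of it (hypothetical object name; `freeSpaceThreshold` is passed as a parameter here for self-containment, whereas in the patch it is a field of `KubernetesSparkBlockCleaner`):
    
    ```scala
    import scala.sys.process._
    import scala.util.control.NonFatal
    
    object DiskCheckSketch {
      // Returns true when the directory's filesystem usage exceeds the
      // allowed level, or when the check itself fails for any non-fatal
      // reason -- defaulting to deep cleaning so a broken check cannot
      // let the disk overrun.
      def needToDeepClean(dir: String, freeSpaceThreshold: Int): Boolean = {
        try {
          // Pipe `df` through `grep` and pick the "NN%" use column.
          val used = (s"df $dir" #| s"grep $dir").!!
            .split(" ").filter(_.endsWith("%"))(0)
            .replace("%", "")
          used.toInt > (100 - freeSpaceThreshold)
        } catch {
          case NonFatal(e) =>
            // e.g. `df`/`grep` missing, non-zero exit, or unparsable output.
            true
        }
      }
    }
    ```
    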
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
    
    - [ ] Add screenshots for manual tests if appropriate
    
    - [x] [Run test](https://kyuubi.readthedocs.io/en/latest/develop_tools/testing.html#running-tests) locally before making a pull request
    
    Closes #1330 from zwangsheng/tools/fix-process.
    
    Closes #1330
    
    6daf483c [zwangsheng] catch nonfatal
    89caf897 [zwangsheng] fix tool cleaner process bug
    b487cf8c [zwangsheng] fix tool cleaner process bug
    
    Authored-by: zwangsheng <[email protected]>
    Signed-off-by: ulysses-you <[email protected]>
    (cherry picked from commit 60392349a777b6cdd02a6413e4871d3cf19969b7)
    Signed-off-by: ulysses-you <[email protected]>
---
 .../kyuubi/tools/KubernetesSparkBlockCleaner.scala  | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/tools/spark-block-cleaner/src/main/scala/org/apache/kyuubi/tools/KubernetesSparkBlockCleaner.scala b/tools/spark-block-cleaner/src/main/scala/org/apache/kyuubi/tools/KubernetesSparkBlockCleaner.scala
index 801559d..f2c51af 100644
--- a/tools/spark-block-cleaner/src/main/scala/org/apache/kyuubi/tools/KubernetesSparkBlockCleaner.scala
+++ b/tools/spark-block-cleaner/src/main/scala/org/apache/kyuubi/tools/KubernetesSparkBlockCleaner.scala
@@ -154,13 +154,20 @@ object KubernetesSparkBlockCleaner extends Logging {
   import scala.sys.process._
 
   private def needToDeepClean(dir: String): Boolean = {
-    val used = (s"df $dir" #| s"grep $dir").!!
-      .split(" ").filter(_.endsWith("%")) {
-      0
-    }.replace("%", "")
-    info(s"$dir now used $used% space")
-
-    used.toInt > (100 - freeSpaceThreshold)
+    try {
+      val used = (s"df $dir" #| s"grep $dir").!!
+        .split(" ").filter(_.endsWith("%")) {
+        0
+      }.replace("%", "")
+      info(s"$dir now used $used% space")
+
+      used.toInt > (100 - freeSpaceThreshold)
+    } catch {
+      case NonFatal(e) =>
+        error(s"An error occurs when querying the disk $dir capacity, " +
+          s"return true to make sure the disk space will not overruns: ${e.getMessage}")
+        true
+    }
   }
 
   private def doCleanJob(dir: String): Unit = {
