GitHub user ouyangxiaochen commented on the issue:

    https://github.com/apache/spark/pull/22813
  
    Yes, this happened in our real environment. 
    The scenario is as follows: 
    Disk corruption in a production cluster is normal. We had SPARK_LOCAL_DIRS = /data1/bigdata/spark/tmp; when the `data1` disk broke, the maintenance engineer changed `data1` to `data2` (or another disk). Unfortunately, the config SPARK_WORK_DIR = /data2/bigdata/spark/tmp, so the two settings now point at the same path. When we then start a `Thriftserver` process, its temporary folders are created under the new SPARK_LOCAL_DIRS path, which is identical to SPARK_WORK_DIR. Once the cleanup cycle time is reached, those folders are removed by `WorkDirCleanup`, which causes Beeline and JDBC queries to fail.
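
    To make the collision concrete, here is a minimal, self-contained sketch. The paths are the ones above, but the cleanup logic is deliberately simplified and is not Spark's actual `WorkDirCleanup` code:

    ```scala
    import java.io.File

    object CleanupCollisionSketch {
      def main(args: Array[String]): Unit = {
        // After the disk swap, both settings resolve to the same directory.
        val sparkLocalDirs = new File("/data2/bigdata/spark/tmp") // was /data1/...
        val sparkWorkDir   = new File("/data2/bigdata/spark/tmp")

        // The collision: the two configs are now the same path.
        assert(sparkLocalDirs.getCanonicalPath == sparkWorkDir.getCanonicalPath)

        // Simplified stand-in for the worker's periodic cleanup: everything
        // under the work dir is treated as expendable application data...
        val entries = Option(sparkWorkDir.listFiles()).getOrElse(Array.empty[File])

        // ...so the Thriftserver's temp folders, created under SPARK_LOCAL_DIRS
        // (the very same path), are swept away once the cleanup interval elapses.
        entries.foreach(e => println(s"cleanup would delete: $e"))
      }
    }
    ```
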
    There is also a very extreme situation in which a user configures an operating system directory here, which would cause a lot of trouble. So I think adding this condition could reduce some unnecessary risks; a rough sketch of such a guard follows.
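
    For illustration only, this is the kind of overlap check the condition could apply. `overlapsLocalDirs` is a hypothetical helper, not the exact code in this PR:

    ```scala
    import java.io.File

    object WorkDirGuard {
      // Hypothetical helper: report whether the worker's work dir coincides
      // with, contains, or is contained in any configured local dir, so the
      // periodic cleanup can be skipped in that case.
      def overlapsLocalDirs(workDir: File, localDirs: Seq[File]): Boolean = {
        val work = workDir.getCanonicalPath
        localDirs.map(_.getCanonicalPath).exists { local =>
          local == work ||
            local.startsWith(work + File.separator) ||
            work.startsWith(local + File.separator)
        }
      }
    }
    ```

    With the paths above, `overlapsLocalDirs` returns true, so the worker would leave the `Thriftserver` folders alone instead of deleting them.
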


