[ https://issues.apache.org/jira/browse/SPARK-8333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094893#comment-17094893 ]

Sunil Kumar Chakrapani commented on SPARK-8333:
-----------------------------------------------

Are there any plans to fix this for Spark 2.4.5? The issue still exists on Windows 10:

20/04/26 12:39:12 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\<username>\AppData\Local\Temp\2\spark-1583d46e-c31f-444a-91f1-572c0726b6b1
java.io.IOException: Failed to delete: C:\Users\<username>\AppData\Local\Temp\2\spark-1583d46e-c31f-444a-91f1-572c0726b6b1\userFiles-b001454b-80e1-4414-896b-6aee986174e5\test_jar_2.11-0.1.jar
 at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:144)
 at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:118)
 at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:128)
 at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:118)
 at org.apache.spark.network.util.JavaUtils.deleteRecursivelyUsingJavaIO(JavaUtils.java:128)
 at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:118)
 at org.apache.spark.network.util.JavaUtils.deleteRecursively(JavaUtils.java:91)
 at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1062)
 at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:65)
 at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:62)
 at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
 at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:62)
 at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
 at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
 at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
 at scala.util.Try$.apply(Try.scala:192)
 at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
 at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
 at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
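
On Windows the delete fails because some component still holds an open handle on the jar under the userFiles-* directory (typically the classloader that loaded it), and unlike POSIX, Windows will not unlink a file that is still open. A minimal, Spark-free sketch of that behavior (file name is a placeholder):

{code:title=WindowsDeleteSketch.scala|borderStyle=solid}
import java.io.FileInputStream
import java.nio.file.Files

object WindowsDeleteSketch {
  def main(args: Array[String]): Unit = {
    // Stand-in for a jar under the userFiles-* temp directory.
    val tmp = Files.createTempFile("spark-userfile-", ".jar").toFile

    // Keep the file open, the way a classloader keeps an added jar open.
    val in = new FileInputStream(tmp)
    try {
      // Windows: prints false, the open handle locks the file.
      // Linux/macOS: prints true, unlinking an open file is allowed.
      println(s"delete while open: ${tmp.delete()}")
    } finally {
      in.close()
    }

    // Once the handle is closed the delete succeeds on Windows too
    // (on Linux/macOS the file is already gone by this point).
    println(s"delete after close: ${tmp.delete()}")
  }
}
{code}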

> Spark failed to delete temp directory created by HiveContext
> ------------------------------------------------------------
>
>                 Key: SPARK-8333
>                 URL: https://issues.apache.org/jira/browse/SPARK-8333
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.0
>         Environment: Windows7 64bit
>            Reporter: sheng
>            Priority: Minor
>              Labels: Hive, bulk-closed, metastore, sparksql
>         Attachments: test.tar
>
>
> Spark 1.4.0 failed to stop SparkContext.
> {code:title=LocalHiveTest.scala|borderStyle=solid}
> import org.apache.spark.{SparkConf, SparkContext}
>
>  val sc = new SparkContext("local", "local-hive-test", new SparkConf())
>  val hc = Utils.createHiveContext(sc)  // helper from the attached test.tar
>  ... // execute some HiveQL statements
>  sc.stop()
> {code}
> sc.stop() failed to execute; it threw the following exception:
> {quote}
> 15/06/13 03:19:06 INFO Utils: Shutdown hook called
> 15/06/13 03:19:06 INFO Utils: Deleting directory C:\Users\moshangcheng\AppData\Local\Temp\spark-d6d3c30e-512e-4693-a436-485e2af4baea
> 15/06/13 03:19:06 ERROR Utils: Exception while deleting Spark temp dir: C:\Users\moshangcheng\AppData\Local\Temp\spark-d6d3c30e-512e-4693-a436-485e2af4baea
> java.io.IOException: Failed to delete: C:\Users\moshangcheng\AppData\Local\Temp\spark-d6d3c30e-512e-4693-a436-485e2af4baea
>       at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:963)
>       at org.apache.spark.util.Utils$$anonfun$1$$anonfun$apply$mcV$sp$5.apply(Utils.scala:204)
>       at org.apache.spark.util.Utils$$anonfun$1$$anonfun$apply$mcV$sp$5.apply(Utils.scala:201)
>       at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
>       at org.apache.spark.util.Utils$$anonfun$1.apply$mcV$sp(Utils.scala:201)
>       at org.apache.spark.util.SparkShutdownHook.run(Utils.scala:2292)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Utils.scala:2262)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(Utils.scala:2262)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(Utils.scala:2262)
>       at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(Utils.scala:2262)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(Utils.scala:2262)
>       at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(Utils.scala:2262)
>       at scala.util.Try$.apply(Try.scala:161)
>       at org.apache.spark.util.SparkShutdownHookManager.runAll(Utils.scala:2262)
>       at org.apache.spark.util.SparkShutdownHookManager$$anon$6.run(Utils.scala:2244)
>       at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
> {quote}
> It seems this bug was introduced by SPARK-6907, which creates a local Hive metastore in a temp directory. The problem is that the local metastore is not shut down correctly: at the end of the application, when SparkContext.stop() is called, Spark tries to delete the temp directory while it is still in use by the local Hive metastore, and throws an exception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
