[ https://issues.apache.org/jira/browse/SPARK-32529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17170926#comment-17170926 ]

Apache Spark commented on SPARK-32529:
--------------------------------------

User 'yanxiaole' has created a pull request for this issue:
https://github.com/apache/spark/pull/29350

> Spark 3.0 History Server May Never Finish One Round Log Dir Scan
> ----------------------------------------------------------------
>
>                 Key: SPARK-32529
>                 URL: https://issues.apache.org/jira/browse/SPARK-32529
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Yan Xiaole
>            Priority: Major
>
> If the log dir contains a large number (>100k) of application logs, listing it 
> takes a few seconds. By the time the path list has been obtained, some 
> applications may already have finished, and their filenames will have changed 
> from "foo.inprogress" to "foo".
> This leads to a problem when adding an entry to the listing: querying the file 
> status, e.g. via `fileSizeForLastIndex`, throws a `FileNotFoundException` if 
> the application has finished. That exception aborts the current scan loop, so 
> on a busy cluster the history server may never manage to list and load any 
> application logs.
>  
>  
> {code:java}
> 20/08/03 15:17:23 ERROR FsHistoryProvider: Exception in checking for event log updates
>  java.io.FileNotFoundException: File does not exist: hdfs://xx/logs/spark/application_11111111111111.lz4.inprogress
>  at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1527)
>  at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1520)
>  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1520)
>  at org.apache.spark.deploy.history.SingleFileEventLogFileReader.status$lzycompute(EventLogFileReaders.scala:170)
>  at org.apache.spark.deploy.history.SingleFileEventLogFileReader.status(EventLogFileReaders.scala:170)
>  at org.apache.spark.deploy.history.SingleFileEventLogFileReader.fileSizeForLastIndex(EventLogFileReaders.scala:174)
>  at org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$checkForLogs$7(FsHistoryProvider.scala:523)
>  at org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$checkForLogs$7$adapted(FsHistoryProvider.scala:466)
>  at scala.collection.TraversableLike.$anonfun$filterImpl$1(TraversableLike.scala:256)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>  at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>  at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:255)
>  at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:249)
>  at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:108)
>  at scala.collection.TraversableLike.filter(TraversableLike.scala:347)
>  at scala.collection.TraversableLike.filter$(TraversableLike.scala:347)
>  at scala.collection.AbstractTraversable.filter(Traversable.scala:108)
>  at org.apache.spark.deploy.history.FsHistoryProvider.checkForLogs(FsHistoryProvider.scala:466)
>  at org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$startPolling$3(FsHistoryProvider.scala:287)
>  at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1302)
>  at org.apache.spark.deploy.history.FsHistoryProvider.$anonfun$getRunner$1(FsHistoryProvider.scala:210)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}
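> As the trace shows, the whole scan dies on the first entry whose file was renamed, even though every other entry is still readable. Below is a minimal sketch of a per-entry guard that tolerates the rename; it is written against Spark's internal `EventLogFileReader` from EventLogFileReaders.scala, the `shouldProcess` name and the `> 0` condition are placeholders, and it is not necessarily the approach taken in the pull request above:
> {code:scala}
> import java.io.FileNotFoundException
>
> // Sketch only: decide per entry whether it is still worth processing,
> // tolerating logs that were renamed from "foo.inprogress" to "foo"
> // between the directory listing and the status lookup.
> def shouldProcess(reader: EventLogFileReader): Boolean = {
>   try {
>     // fileSizeForLastIndex ends up calling FileSystem.getFileStatus, which is
>     // what throws the FileNotFoundException in the stack trace above.
>     reader.fileSizeForLastIndex > 0  // placeholder condition
>   } catch {
>     case _: FileNotFoundException =>
>       // The application finished and its log was renamed; skip it now and let
>       // the next scan pick it up under the new name.
>       false
>   }
> }
>
> // Hypothetical usage inside the scan loop:
> // val survivors = listedReaders.filter(shouldProcess)
> {code}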



