Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/18887#discussion_r133512286
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -316,25 +350,22 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
     try {
       val newLastScanTime = getNewLastScanTime()
       logDebug(s"Scanning $logDir with lastScanTime==$lastScanTime")
-      val statusList = Option(fs.listStatus(new Path(logDir))).map(_.toSeq)
-        .getOrElse(Seq.empty[FileStatus])
+      val statusList = Option(fs.listStatus(new Path(logDir))).map(_.toSeq).getOrElse(Nil)
       // scan for modified applications, replay and merge them
-      val logInfos: Seq[FileStatus] = statusList
+      val logInfos = statusList
         .filter { entry =>
-          val fileInfo = fileToAppInfo.get(entry.getPath())
-          val prevFileSize = if (fileInfo != null) fileInfo.fileSize else 0L
           !entry.isDirectory() &&
             // FsHistoryProvider generates a hidden file which can't be read. Accidentally
             // reading a garbage file is safe, but we would log an error which can be scary to
             // the end-user.
             !entry.getPath().getName().startsWith(".") &&
-            prevFileSize < entry.getLen() &&
-            SparkHadoopUtil.get.checkAccessPermission(entry, FsAction.READ)
+            SparkHadoopUtil.get.checkAccessPermission(entry, FsAction.READ) &&
+            recordedFileSize(entry.getPath()) < entry.getLen()
         }
        .flatMap { entry => Some(entry) }
--- End diff --
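
For context on the hunk above: judging from the removed lines, the new recordedFileSize helper presumably just encapsulates the old fileToAppInfo lookup, returning 0L when a log has never been scanned. A minimal, self-contained sketch of that equivalence (hypothetical names; not the code added by the PR):

    import java.util.concurrent.ConcurrentHashMap
    import org.apache.hadoop.fs.Path

    // Sketch only, inferred from the lines removed in the diff; the real helper may differ.
    // AppInfo is a hypothetical stand-in for the provider's per-application record.
    case class AppInfo(fileSize: Long)
    val fileToAppInfo = new ConcurrentHashMap[Path, AppInfo]()

    def recordedFileSize(log: Path): Long = {
      val info = fileToAppInfo.get(log)
      if (info != null) info.fileSize else 0L  // 0L when the log has not been seen before
    }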
I realize this isn't your change, but what is the point of this? Isn't it a no-op?
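
To spell out the no-op claim: flatMap over Some wraps each element in a one-element Option and immediately flattens it back, so the trailing .flatMap { entry => Some(entry) } returns the same sequence it was given. A standalone illustration (not from the PR):

    // flatMap(Some(_)) over a Seq is effectively an identity transformation.
    val entries = Seq("app-1", "app-2", "app-3")
    assert(entries.flatMap(e => Some(e)) == entries)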