Github user steveloughran commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9238#discussion_r43109175
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
    @@ -299,6 +300,54 @@ private[history] class FsHistoryProvider(conf: SparkConf, clock: Clock)
         }
       }
     
    +  override def readEventLogs(zipStream: ZipInputStream): Unit = {
    +    val fs = FileSystem.get(hadoopConf)
    +    val tmpPath = new Path(logDir, UUID.randomUUID().toString + TEMP_IMPORT_LOG_DIR_SUFFIX)
    +    var zipEntry = zipStream.getNextEntry
    +
    +    try {
    +      // Create a tmp dir to hold the unzipped files.
    +      // Unzipped files go into this temporary directory to avoid racing with
    +      // checkForLogs if it runs while unzipping is in progress.
    +      fs.mkdirs(tmpPath)
    +
    +      while (zipEntry != null) {
    +        logInfo(s"Unzipping ${zipEntry.getName}")
    +        val path = new Path(tmpPath, zipEntry.getName)
    +        if (zipEntry.isDirectory) {
    +          // this could possibly be a legacy history log
    +          fs.mkdirs(path)
    +        } else {
    +          var out: OutputStream = null
    +          Utils.tryWithSafeFinally {
    +            out = fs.create(path, true, 1 * 1024 * 1024)
    +            Utils.copyStream(zipStream, out)
    +          } {
    +            if (out != null) {
    --- End diff --
    
    Hadoop's `IOUtils.closeStream` can do the close here, including the null check, and it logs any problems at debug level.
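    
    For illustration, a minimal sketch of how the finally block could look with that change, assuming the `out`, `fs`, `path`, and `zipStream` definitions from the diff above:
    
        import org.apache.hadoop.io.IOUtils
    
        Utils.tryWithSafeFinally {
          out = fs.create(path, true, 1 * 1024 * 1024)
          Utils.copyStream(zipStream, out)
        } {
          // closeStream handles the null check internally and swallows
          // any IOException thrown by close()
          IOUtils.closeStream(out)
        }
    
    This keeps `Utils.tryWithSafeFinally` around the copy itself and only replaces the manual null check and close.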

