GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4821#discussion_r25625032
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
    @@ -279,52 +253,17 @@ private[spark] object EventLoggingListener extends Logging {
         }
     
         val in = new BufferedInputStream(fs.open(log))
    -    // Read a single line from the input stream without buffering.
    -    // We cannot use BufferedReader because we must avoid reading
    -    // beyond the end of the header, after which the content of the
    -    // file may be compressed.
    -    def readLine(): String = {
    -      val bytes = new ByteArrayOutputStream()
    -      var next = in.read()
    -      var count = 0
    -      while (next != '\n') {
    -        if (next == -1) {
    -          throw new IOException("Unexpected end of file.")
    -        }
    -        bytes.write(next)
    -        count = count + 1
    -        if (count > MAX_HEADER_LINE_LENGTH) {
    -          throw new IOException("Maximum header line length exceeded.")
    -        }
    -        next = in.read()
    -      }
    -      new String(bytes.toByteArray(), Charsets.UTF_8)
    +
    +    // Compression codec is encoded as an extension, e.g. app_123.lzf
    +    // Since we sanitize the app ID to not include periods, it is safe to split on it
    +    val logName = log.getName.replaceAll(IN_PROGRESS, "")
    --- End diff --
    
    `replaceAll` takes a regex, so you need to `Pattern.quote(IN_PROGRESS)`. In fact, it might be better to use `stripSuffix`.
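    
    To illustrate (a minimal sketch; the `".inprogress"` value for `IN_PROGRESS` is assumed here, as is the hypothetical demo object name):
    
    ```scala
    import java.util.regex.Pattern
    
    object SuffixStripDemo {
      // Assumed value for this sketch: the suffix appended to event logs
      // that are still being written.
      val IN_PROGRESS = ".inprogress"
    
      def main(args: Array[String]): Unit = {
        val logName = "app_123.lzf" + IN_PROGRESS
    
        // replaceAll() treats its first argument as a regex: the unquoted "."
        // matches any character, so a name like "appXinprogress" also loses
        // "Xinprogress" even though it never contains the literal suffix.
        println(logName.replaceAll(IN_PROGRESS, ""))                 // app_123.lzf
        println("appXinprogress".replaceAll(IN_PROGRESS, ""))        // app (over-matches)
    
        // Fix 1: quote the suffix so the regex engine matches it literally.
        println(logName.replaceAll(Pattern.quote(IN_PROGRESS), ""))  // app_123.lzf
    
        // Fix 2 (simpler): stripSuffix does a plain string comparison and only
        // removes the suffix when it occurs at the end of the name.
        println(logName.stripSuffix(IN_PROGRESS))                    // app_123.lzf
      }
    }
    ```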

