Github user vijoshi commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15556#discussion_r84118567
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/ReplayListenerBus.scala ---
    @@ -43,19 +43,25 @@ private[spark] class ReplayListenerBus extends SparkListenerBus with Logging {
        * @param sourceName Filename (or other source identifier) from whence @logData is being read
        * @param maybeTruncated Indicate whether log file might be truncated (some abnormal situations
        *        encountered, log file might not finished writing) or not
    +   * @param eventsFilter Filter function to select JSON event strings in the log data stream that
    +   *        should be parsed and replayed. When not specified, all event strings in the log data
    +   *        are parsed and replayed.
        */
       def replay(
           logData: InputStream,
           sourceName: String,
    -      maybeTruncated: Boolean = false): Unit = {
    +      maybeTruncated: Boolean = false,
    +      eventsFilter: (String) => Boolean = (s: String) => true): Unit = {
         var currentLine: String = null
         var lineNumber: Int = 1
         try {
           val lines = Source.fromInputStream(logData).getLines()
    --- End diff --
    
    The reason I didn't use a filter here was so we could get the correct line
    numbers in the logs (in the catch block) in case parsing failed for one of
    the lines.
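    
    For illustration, here is a minimal sketch (not the actual patch) of how a
    filter could be applied without losing track of line numbers, by pairing
    each line with its index via zipWithIndex before filtering. The object name
    ReplaySketch and the println-based error reporting are placeholders; only
    the replay parameters come from the diff above.
    
        import java.io.InputStream
        import scala.io.Source
    
        // Hypothetical sketch, not the actual ReplayListenerBus code: skip
        // filtered-out lines while still reporting the original line number
        // if parsing one of the kept lines fails.
        object ReplaySketch {
          def replay(
              logData: InputStream,
              sourceName: String,
              eventsFilter: String => Boolean = _ => true): Unit = {
            var currentLine: String = null
            var lineNumber: Int = 0
            try {
              // Attach the index to each line *before* filtering, so skipped
              // lines do not shift the numbering used in error messages.
              val entries = Source.fromInputStream(logData)
                .getLines()
                .zipWithIndex
                .filter { case (line, _) => eventsFilter(line) }
    
              entries.foreach { case (line, index) =>
                currentLine = line
                lineNumber = index + 1
                // The real implementation would parse the JSON event here and
                // post it to the registered listeners.
              }
            } catch {
              case e: Exception =>
                // lineNumber still refers to the position in the original stream.
                println(s"Exception parsing Spark event log $sourceName " +
                  s"at line $lineNumber: $currentLine (${e.getMessage})")
            }
          }
        }
    
    With this shape, eventsFilter can drop events cheaply before JSON parsing,
    while any parse error is still reported against the line's position in the
    original log file, which was the concern above.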

