Github user vijoshi commented on a diff in the pull request:
https://github.com/apache/spark/pull/15556#discussion_r84162567
--- Diff: core/src/main/scala/org/apache/spark/scheduler/ReplayListenerBus.scala ---
@@ -43,19 +43,25 @@ private[spark] class ReplayListenerBus extends SparkListenerBus with Logging {
   * @param sourceName Filename (or other source identifier) from whence @logData is being read
   * @param maybeTruncated Indicate whether log file might be truncated (some abnormal situations
   *        encountered, log file might not finished writing) or not
+  * @param eventsFilter Filter function to select JSON event strings in the log data stream that
+  *        should be parsed and replayed. When not specified, all event strings in the log data
+  *        are parsed and replayed.
   */
  def replay(
      logData: InputStream,
      sourceName: String,
-     maybeTruncated: Boolean = false): Unit = {
+     maybeTruncated: Boolean = false,
+     eventsFilter: (String) => Boolean = (s: String) => true): Unit = {
    var currentLine: String = null
    var lineNumber: Int = 1
    try {
      val lines = Source.fromInputStream(logData).getLines()
--- End diff --
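For context on the new parameter's shape, here is a minimal standalone sketch of an `eventsFilter` of type `(String) => Boolean` applied to JSON event lines. The sample lines and the choice of events to keep are illustrative only, not the actual Spark event-log format or any filter used by the PR:

```scala
object EventsFilterSketch {
  // Illustrative stand-ins for JSON event lines in a Spark event log.
  val logLines: Seq[String] = Seq(
    """{"Event":"SparkListenerApplicationStart","App Name":"demo"}""",
    """{"Event":"SparkListenerTaskStart","Stage ID":0}""",
    """{"Event":"SparkListenerApplicationEnd","Timestamp":0}"""
  )

  // A filter of the shape added in this diff: keep only lines whose raw
  // string mentions an application lifecycle event (a cheap substring
  // check, done before any JSON parsing).
  val appEventsOnly: String => Boolean = line =>
    line.contains("SparkListenerApplicationStart") ||
      line.contains("SparkListenerApplicationEnd")

  def main(args: Array[String]): Unit = {
    // Only the matching lines would go on to be parsed and replayed.
    logLines.filter(appEventsOnly).foreach(println)
  }
}
```

The point of filtering on the raw string is that non-matching lines are skipped without paying the JSON-parsing cost.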
I gave this a try in my "incorporate review comments" commit, but backed it out after I realized it would have broken the "we can only ignore an exception from the last line of a file that might be truncated" logic. The only way to fix that with `zipWithIndex`/`filter` in place was to determine the total event log line count before iterating over the log entries, which would defeat the purpose of the `zipWithIndex`/`filter` optimization. Hence I reverted to the older approach. Let me know.
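The constraint being discussed, tolerating a parse failure only when it occurs on the file's final line, can be sketched as below. This is a simplified illustration, not Spark's actual code: `parse` is a hypothetical stand-in for JSON deserialization that fails on an incomplete line, and the key observation is that after a failure, `lines.hasNext` tells us whether the failing line was the last one. Pre-filtering the iterator would consume or hide lines and make that check unreliable:

```scala
import scala.io.Source
import java.io.InputStream

object TruncatedReplaySketch {
  // Hypothetical stand-in for JSON parsing: rejects a line cut off mid-write.
  def parse(line: String): String =
    if (line.endsWith("}")) line
    else throw new IllegalArgumentException(s"incomplete line: $line")

  // Mirrors the replay loop's error handling: a parse failure is tolerated
  // only when maybeTruncated is set AND no further lines remain.
  def replay(logData: InputStream, maybeTruncated: Boolean): Seq[String] = {
    val lines = Source.fromInputStream(logData).getLines()
    val replayed = scala.collection.mutable.Buffer[String]()
    try {
      while (lines.hasNext) {
        replayed += parse(lines.next())
      }
    } catch {
      case e: IllegalArgumentException =>
        // Only the final line may legitimately be truncated. If the iterator
        // had been pre-filtered, hasNext could no longer tell us whether the
        // failing line was actually the file's last.
        if (!maybeTruncated || lines.hasNext) throw e
    }
    replayed.toSeq
  }
}
```

With a count-before-iterate workaround, the whole stream would have to be read once just to learn the last line's index, which is what the comment above argues defeats the optimization.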