[ https://issues.apache.org/jira/browse/SPARK-24634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16869179#comment-16869179 ]

Jungtaek Lim edited comment on SPARK-24634 at 6/21/19 4:38 AM:
---------------------------------------------------------------

I'm now revisiting this issue to alleviate SPARK-28074 - this would help 
indicate to end users that Spark is discarding some rows due to the watermark.

I have a different approach, though. In the previous PR I simply injected 
metrics where the stateful operators discard the rows. For some operators that 
just works, but when an operator consists of multiple physical plans, the rows 
being discarded no longer represent the whole input rows (they were already 
aggregated).

My idea is to add a new physical node just before the stateful operators which 
works exactly like a filter, but measures the rows which are discarded (in 
other words, filtered out) instead of the rows which are filtered in. I'll see 
how I could do it.
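
To make the intent concrete, here is a minimal, self-contained Scala sketch of 
the counting behaviour only - it does not touch Spark's physical plan API, and 
the predicate, the LongAdder-based counter, and the "late if event time < 100" 
rule are all hypothetical placeholders. It shows a filter that counts the rows 
it drops rather than the rows it keeps:

{code:scala}
import java.util.concurrent.atomic.LongAdder

object DiscardCountingFilterSketch {

  // Wraps an iterator; rows failing `predicate` are dropped and counted,
  // instead of counting the rows which pass (as a normal filter metric would).
  def filterCountingDiscarded[T](rows: Iterator[T],
                                 predicate: T => Boolean,
                                 numDroppedRows: LongAdder): Iterator[T] =
    rows.filter { row =>
      val keep = predicate(row)
      if (!keep) numDroppedRows.increment() // measure the filtered-out rows
      keep
    }

  def main(args: Array[String]): Unit = {
    val dropped = new LongAdder
    // Hypothetical watermark: event times below 100 count as late.
    val eventTimes = Iterator(95L, 120L, 99L, 150L)
    val kept = filterCountingDiscarded[Long](eventTimes, _ >= 100L, dropped).toList
    println(s"kept = $kept, rows dropped by watermark = ${dropped.sum()}")
  }
}
{code}

In the actual change this would presumably be a unary physical node placed in 
front of the stateful operator, reporting the count through an SQLMetric rather 
than a LongAdder, but the counting logic stays the same.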



> Add a new metric regarding number of rows later than watermark
> --------------------------------------------------------------
>
>                 Key: SPARK-24634
>                 URL: https://issues.apache.org/jira/browse/SPARK-24634
>             Project: Spark
>          Issue Type: Improvement
>          Components: Structured Streaming
>    Affects Versions: 2.4.0
>            Reporter: Jungtaek Lim
>            Priority: Major
>
> Spark filters out late rows - rows which are later than the watermark - while 
> applying operations which leverage windows. While Spark exposes information 
> regarding the watermark to StreamingQueryListener, there's no information 
> regarding rows being filtered out due to the watermark. That information should 
> help end users adjust the watermark while operating their query.
> We could expose a metric regarding the number of rows which are later than the 
> watermark and therefore filtered out. It would be ideal to also support a 
> side-output to consume late rows, but that doesn't look easy, so let's address 
> this first.


