[ https://issues.apache.org/jira/browse/SPARK-36798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mridul Muralidharan resolved SPARK-36798.
-----------------------------------------
    Fix Version/s: 3.3.0
       Resolution: Fixed

Issue resolved by pull request 34039
[https://github.com/apache/spark/pull/34039]

> When SparkContext is stopped, metrics system should be flushed after 
> listeners have finished processing
> -------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-36798
>                 URL: https://issues.apache.org/jira/browse/SPARK-36798
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.3.2
>            Reporter: Harsh Panchal
>            Assignee: Harsh Panchal
>            Priority: Minor
>             Fix For: 3.3.0
>
>
> In the current implementation, when {{SparkContext.stop()}} is called, 
> {{metricsSystem.report()}} is invoked before {{listenerBus.stop()}}. As a 
> result, any metrics that a listener produces while the remaining queued events 
> are being processed never reach the sink.
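> A minimal sketch of the ordering in question (simplified pseudocode of the 
> shutdown sequence, not the actual {{SparkContext.stop()}} body):
> {code:scala}
> // Current (problematic) order: sinks are flushed while listeners may still
> // produce metrics from events queued on the listener bus.
> metricsSystem.report()  // flush registered sinks
> listenerBus.stop()      // drain remaining events; metrics emitted here are lost
>
> // Intended order: drain the listener bus first, then flush the metrics system,
> // so metrics produced while the final events are processed reach the sinks.
> listenerBus.stop()
> metricsSystem.report()
> {code}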
> Background:
> We run ingestion jobs in Spark Structured Streaming. To monitor them, we 
> collect metrics such as the number of input rows and the trigger duration from 
> the {{QueryProgressEvent}} received via a {{StreamingQueryListener}}, as 
> sketched below. These metrics are then pushed to a database by custom sinks 
> registered in the {{MetricsSystem}}. We noticed that these metrics are 
> occasionally lost for the last batch.
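> A minimal sketch of such a listener, using only the public 
> {{StreamingQueryListener}} / {{StreamingQueryProgress}} API; {{pushMetrics}} is 
> a hypothetical helper standing in for our custom sink path:
> {code:scala}
> import org.apache.spark.sql.streaming.StreamingQueryListener
> import org.apache.spark.sql.streaming.StreamingQueryListener._
>
> // Illustrative listener: extracts batch-level metrics from each progress event
> // and hands them to whatever mechanism feeds the custom MetricsSystem sinks.
> class IngestionMetricsListener extends StreamingQueryListener {
>   override def onQueryStarted(event: QueryStartedEvent): Unit = ()
>
>   override def onQueryProgress(event: QueryProgressEvent): Unit = {
>     val p = event.progress
>     // durationMs holds per-phase timings; "triggerExecution" is the full trigger time.
>     val triggerMs = Option(p.durationMs.get("triggerExecution")).map(_.longValue).getOrElse(0L)
>     pushMetrics(p.batchId, p.numInputRows, triggerMs)
>   }
>
>   override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
>
>   // Hypothetical helper: in our jobs these values update gauges/counters that a
>   // custom sink registered in MetricsSystem reports to the database.
>   private def pushMetrics(batchId: Long, rows: Long, triggerMs: Long): Unit = ()
> }
>
> // Registered on the session, e.g. spark.streams.addListener(new IngestionMetricsListener())
> {code}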


