[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-25 Thread HeartSaVioR
Github user HeartSaVioR closed the pull request at:

https://github.com/apache/spark/pull/21617


---

[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-25 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/spark/pull/21617#discussion_r197986093
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -48,12 +49,13 @@ class StateOperatorProgress private[sql](
   def prettyJson: String = pretty(render(jsonValue))

   private[sql] def copy(newNumRowsUpdated: Long): StateOperatorProgress =
-    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes)
+    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes, numLateInputRows)

   private[sql] def jsonValue: JValue = {
     ("numRowsTotal" -> JInt(numRowsTotal)) ~
     ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
-    ("memoryUsedBytes" -> JInt(memoryUsedBytes))
+    ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
+    ("numLateInputRows" -> JInt(numLateInputRows))
--- End diff --

@arunmahadevan Ah yes, got it. If we want an accurate number, we need to filter out late events in the first phase anyway. I guess we may need to defer addressing this until we change that behavior.
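
For readers skimming the diff above, here is a minimal standalone sketch of the `jsonValue` construction, with made-up metric values; the field names match the proposed change, but the snippet itself is illustrative, not the PR's code:

```scala
import org.json4s._
import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

// Illustrative values; in the real class these are constructor fields.
val numRowsTotal = 100L
val numRowsUpdated = 20L
val memoryUsedBytes = 4096L
val numLateInputRows = 3L

// Mirrors the jsonValue body in the diff: json4s' ~ combines the pairs
// into a single JObject.
val jsonValue: JValue =
  ("numRowsTotal" -> JInt(numRowsTotal)) ~
  ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
  ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
  ("numLateInputRows" -> JInt(numLateInputRows))

// Prints a JSON object containing the four fields above.
println(pretty(render(jsonValue)))
```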


---

[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-25 Thread arunmahadevan
Github user arunmahadevan commented on a diff in the pull request:

https://github.com/apache/spark/pull/21617#discussion_r197984227
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -48,12 +49,13 @@ class StateOperatorProgress private[sql](
   def prettyJson: String = pretty(render(jsonValue))

   private[sql] def copy(newNumRowsUpdated: Long): StateOperatorProgress =
-    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes)
+    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes, numLateInputRows)

   private[sql] def jsonValue: JValue = {
     ("numRowsTotal" -> JInt(numRowsTotal)) ~
     ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
-    ("memoryUsedBytes" -> JInt(memoryUsedBytes))
+    ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
+    ("numLateInputRows" -> JInt(numLateInputRows))
--- End diff --

What I meant was: if the input to the state operator is the result of the aggregate, then we would not be counting the actual input rows to the group by. There would be at most one row per key, which would give the impression that there are fewer late events than there actually were.

If this is not the case, then I am fine.
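
To make the concern concrete, here is a hypothetical query shape (not from this PR) in which the stateful operator consumes pre-aggregated rows rather than raw input:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}

val spark = SparkSession.builder().master("local[2]").appName("late-rows-shape").getOrCreate()

// A windowed count with a watermark. The partial aggregate runs before the
// stateful operator, so the stateful operator sees at most one row per
// (window, key) per batch; counting the rows it drops can therefore
// undercount the raw events that actually arrived late.
val counts = spark.readStream
  .format("rate").option("rowsPerSecond", "10").load()
  .withWatermark("timestamp", "10 seconds")
  .groupBy(window(col("timestamp"), "1 minute"), (col("value") % 5).as("key"))
  .count()
```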


---

[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-25 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/spark/pull/21617#discussion_r197981651
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -48,12 +49,13 @@ class StateOperatorProgress private[sql](
   def prettyJson: String = pretty(render(jsonValue))

   private[sql] def copy(newNumRowsUpdated: Long): StateOperatorProgress =
-    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes)
+    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes, numLateInputRows)

   private[sql] def jsonValue: JValue = {
     ("numRowsTotal" -> JInt(numRowsTotal)) ~
     ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
-    ("memoryUsedBytes" -> JInt(memoryUsedBytes))
+    ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
+    ("numLateInputRows" -> JInt(numLateInputRows))
--- End diff --

@arunmahadevan

> Here you are measuring the number of "keys" filtered out of the state store since they have crossed the late threshold, correct?

No, it is based on "input" rows which are filtered out due to the watermark threshold. Note that the meaning of "input" is relative: it doesn't refer to the input rows of the overall query, but to the input rows of the state operator.

> It's better if we could rather expose the actual number of events that were late.

I guess that comment rests on a misunderstanding, but I do think it would be more correct to filter out late events in the first phase of the query (not in the state operator), so that we get an accurate count of late events. For now, filters earlier in the query affect the count.
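
A minimal, self-contained sketch of the counting idea described above; `EventRow` and the `numLateInputRows` counter here are illustrative stand-ins, not the PR's internals:

```scala
// Drop input rows whose event time is behind the watermark and count them,
// passing the rest through -- the "input rows filtered out due to watermark
// threshold" behavior at the state operator boundary.
final case class EventRow(eventTimeMs: Long, value: String)

final class LateRowFilter(watermarkMs: Long) {
  var numLateInputRows = 0L

  def apply(rows: Iterator[EventRow]): Iterator[EventRow] =
    rows.filter { row =>
      val late = row.eventTimeMs < watermarkMs
      if (late) numLateInputRows += 1
      !late // keep only rows at or ahead of the watermark
    }
}

val f = new LateRowFilter(watermarkMs = 1000L)
val kept = f(Iterator(EventRow(900L, "a"), EventRow(1500L, "b"))).toList
// kept.size == 1; f.numLateInputRows == 1 once the iterator is consumed
```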


---

[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-25 Thread arunmahadevan
Github user arunmahadevan commented on a diff in the pull request:

https://github.com/apache/spark/pull/21617#discussion_r197980605
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/progress.scala ---
@@ -48,12 +49,13 @@ class StateOperatorProgress private[sql](
   def prettyJson: String = pretty(render(jsonValue))

   private[sql] def copy(newNumRowsUpdated: Long): StateOperatorProgress =
-    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes)
+    new StateOperatorProgress(numRowsTotal, newNumRowsUpdated, memoryUsedBytes, numLateInputRows)

   private[sql] def jsonValue: JValue = {
     ("numRowsTotal" -> JInt(numRowsTotal)) ~
     ("numRowsUpdated" -> JInt(numRowsUpdated)) ~
-    ("memoryUsedBytes" -> JInt(memoryUsedBytes))
+    ("memoryUsedBytes" -> JInt(memoryUsedBytes)) ~
+    ("numLateInputRows" -> JInt(numLateInputRows))
--- End diff --

Here you are measuring the number of "keys" filtered out of the state store since they have crossed the late threshold, correct? It may be better to rename this metric, here and in other places, to "number of evicted rows". It's better if we could rather expose the actual number of events that were late.
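
To sketch the distinction being drawn here, with hypothetical names: rows rejected on the way into the operator versus state entries evicted once the watermark passes them.

```scala
import scala.collection.mutable

// Illustrative only: a toy state store keyed by group key, tracking the
// latest event time seen per key.
object LateVsEvicted {
  var numLateInputRows = 0L    // raw inputs rejected for being behind the watermark
  var numEvictedStateRows = 0L // stored entries removed after the watermark passes them

  private val state = mutable.Map.empty[String, Long] // key -> latest event time (ms)

  def processInput(key: String, eventTimeMs: Long, watermarkMs: Long): Unit =
    if (eventTimeMs < watermarkMs) numLateInputRows += 1
    else state(key) = math.max(state.getOrElse(key, 0L), eventTimeMs)

  def evict(watermarkMs: Long): Unit = {
    val expired = state.collect { case (k, t) if t < watermarkMs => k }.toSeq
    numEvictedStateRows += expired.size
    expired.foreach(state.remove)
  }
}
```

The two counters diverge whenever several raw rows fold into one state entry, which is the undercounting concern discussed elsewhere in this thread.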


---

[GitHub] spark pull request #21617: [SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

2018-06-22 Thread HeartSaVioR
GitHub user HeartSaVioR opened a pull request:

https://github.com/apache/spark/pull/21617

[SPARK-24634][SS] Add a new metric regarding number of rows later than watermark

## What changes were proposed in this pull request?

This adds a new metric to count the number of rows arriving later than the watermark.

The metric will be exposed in two places:
1. streaming query listener - `numLateInputRows` in `stateOperators`
2. SQL tab in the UI - `number of rows which are later than watermark` in the state operator exec node

Please refer to https://issues.apache.org/jira/browse/SPARK-24634 for the rationale behind this issue.
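
As a usage sketch (assuming this PR's `numLateInputRows` field lands on `StateOperatorProgress`), a listener could surface the metric like this:

```scala
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener.{QueryProgressEvent, QueryStartedEvent, QueryTerminatedEvent}

// stateOperators and prettyJson already exist on the progress objects;
// numLateInputRows is the field this PR proposes to add.
class LateRowsListener extends StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
  override def onQueryProgress(event: QueryProgressEvent): Unit =
    event.progress.stateOperators.foreach { op =>
      // With this change, "numLateInputRows" also appears in op.prettyJson.
      println(s"late input rows in state operator: ${op.numLateInputRows}")
    }
}

// Register it with: spark.streams.addListener(new LateRowsListener)
```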

## How was this patch tested?

Modified existing UTs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HeartSaVioR/spark SPARK-24634

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/21617.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21617


commit ff1b89553acc7ea3a19b586457dd295255047377
Author: Jungtaek Lim 
Date:   2018-06-23T02:34:16Z

SPARK-24634 Add a new metric regarding number of rows later than watermark

* This adds a new metric to count the number of rows arriving later than the watermark




---
