HeartSaVioR commented on code in PR #42822:
URL: https://github.com/apache/spark/pull/42822#discussion_r1323963218
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ProgressReporter.scala:
##########
@@ -264,9 +265,16 @@ trait ProgressReporter extends Logging {
     if (lastExecution == null) return Nil
     // lastExecution could belong to one of the previous triggers if `!hasExecuted`.
     // Walking the plan again should be inexpensive.
+
+    val shufflePartitionValue = sparkSession.conf.getOption(SHUFFLE_PARTITIONS.key).getOrElse("-1")
+    val numShufflePartitions: Long = try {
+      shufflePartitionValue.toLong
+    } catch {
+      case e: NumberFormatException => -1L
+    }
     lastExecution.executedPlan.collect {
       case p if p.isInstanceOf[StateStoreWriter] =>
-        val progress = p.asInstanceOf[StateStoreWriter].getProgress()
+        val progress = p.asInstanceOf[StateStoreWriter].getProgress(numShufflePartitions)
Review Comment:
Shall we read the value from the physical plan instead? `StateStoreWriter` has a method `stateInfo` from which you can find the number of shuffle partitions. That value is closer to reality, as we build the child distribution requirement based on it.
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StatefulOperatorPartitioning.scala
This is also future-proof: if we ever want to set a different number of shuffle partitions per operator, we will set the value differently in `stateInfo`.
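A minimal self-contained sketch of the suggested pattern (using hypothetical stand-in types, not the real Spark classes): each stateful operator carries an optional `stateInfo` holding its own partition count, and the reporter prefers that per-operator value, falling back to a sentinel when it is absent.

```scala
// Stand-in for Spark's StatefulOperatorStateInfo (hypothetical, for illustration).
case class StatefulOperatorStateInfo(numPartitions: Int)

// Stand-in for the StateStoreWriter trait (hypothetical, for illustration).
trait StateStoreWriter {
  def stateInfo: Option[StatefulOperatorStateInfo]

  // Prefer the per-operator partition count recorded in stateInfo;
  // fall back to -1 when the operator has no state info attached.
  def effectiveNumShufflePartitions: Long =
    stateInfo.map(_.numPartitions.toLong).getOrElse(-1L)
}

val withInfo = new StateStoreWriter {
  val stateInfo = Some(StatefulOperatorStateInfo(numPartitions = 200))
}
val withoutInfo = new StateStoreWriter {
  val stateInfo = None
}

println(withInfo.effectiveNumShufflePartitions)    // 200
println(withoutInfo.effectiveNumShufflePartitions) // -1
```

Reading the count from the operator itself rather than the session conf also keeps the reported value consistent with the distribution actually used at planning time, even if the conf has changed since.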
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]