GitHub user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17219#discussion_r105062818
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala ---
@@ -377,17 +385,25 @@ class StreamExecution(
private def populateStartOffsets(): Unit = {
offsetLog.getLatest() match {
case Some((batchId, nextOffsets)) =>
- logInfo(s"Resuming streaming query, starting with batch $batchId")
- currentBatchId = batchId
- availableOffsets = nextOffsets.toStreamProgress(sources)
- offsetSeqMetadata = nextOffsets.metadata.getOrElse(OffsetSeqMetadata())
- logDebug(s"Found possibly unprocessed offsets $availableOffsets " +
- s"at batch timestamp ${offsetSeqMetadata.batchTimestampMs}")
-
- offsetLog.get(batchId - 1).foreach {
- case lastOffsets =>
- committedOffsets = lastOffsets.toStreamProgress(sources)
- logDebug(s"Resuming with committed offsets: $committedOffsets")
+ currentBatchId = commitLog.getLatest() match {
--- End diff ---
Mind adding a few more comments here? This logic is getting very dense. I
think that it's doing something like the following (a sketch of the decision
follows the list):
- finding the max committed batch
- checking to see if there is a started but uncommitted batch
- otherwise constructing a new batch
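
For readers of the archive, here is a minimal, self-contained sketch of the
three-way decision described above. It is not the actual StreamExecution
implementation: SimpleLog, ResumeLogic, and startingBatchId are hypothetical
stand-ins for the offsetLog/commitLog machinery, assuming only that
getLatest() returns the entry with the highest batch id.

// Hypothetical stand-in for Spark's metadata logs: getLatest() returns
// the entry with the highest batch id, if any entry exists.
class SimpleLog[T] {
  private var entries = Map.empty[Long, T]
  def add(batchId: Long, value: T): Unit = entries += batchId -> value
  def getLatest(): Option[(Long, T)] =
    entries.keys.reduceOption(_ max _).map(id => (id, entries(id)))
}

object ResumeLogic {
  val offsetLog = new SimpleLog[String] // payload stands in for OffsetSeq
  val commitLog = new SimpleLog[Unit]   // presence of an entry == committed

  // Decide which batch id a restarted query should run.
  def startingBatchId(): Long = offsetLog.getLatest() match {
    case Some((latestBatchId, _)) =>
      commitLog.getLatest() match {
        // 1. The max committed batch equals the max started batch:
        //    everything logged was fully processed, so construct a new batch.
        case Some((latestCommittedBatchId, _))
            if latestCommittedBatchId == latestBatchId =>
          latestCommittedBatchId + 1
        // 2. A batch was started (its offsets were logged) but never
        //    committed: re-run that batch against the same offsets.
        case _ =>
          latestBatchId
      }
    // 3. No offsets at all: brand new query, start from batch 0.
    case None => 0L
  }

  def main(args: Array[String]): Unit = {
    offsetLog.add(0L, "offsets-0")
    offsetLog.add(1L, "offsets-1")
    commitLog.add(0L, ())           // batch 0 committed, batch 1 was not
    assert(startingBatchId() == 1L) // so the query resumes batch 1
  }
}

The ordering assumed here matches the comment's reading of the logic: offsets
for a batch are written before the batch runs, and a commit is recorded only
after it finishes, so "offsets present, commit absent" identifies a started
but uncommitted batch that must be re-run.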