HeartSaVioR commented on a change in pull request #31495:
URL: https://github.com/apache/spark/pull/31495#discussion_r571685452
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/OffsetSeqLog.scala
##########
@@ -46,6 +47,23 @@ import org.apache.spark.sql.connector.read.streaming.{Offset => OffsetV2}
class OffsetSeqLog(sparkSession: SparkSession, path: String)
extends HDFSMetadataLog[OffsetSeq](sparkSession, path) {
+ private val cachedMetadata = new ju.TreeMap[Long, OffsetSeq]()
+
+ override def add(batchId: Long, metadata: OffsetSeq): Boolean = {
+ val added = super.add(batchId, metadata)
+ if (added) {
+ // cache metadata as it will be read again
+ cachedMetadata.put(batchId, metadata)
+ // we don't access metadata for batches older than (batchId - 2); evict them
Review comment:
Hmm... it looks easier to change it here, since that covers both micro-batch
and continuous mode. Let's leave this as it is.
I would worry about decoupling the metadata from the batch ID - it could cause
a critical issue when committing to the source if something is slightly off.
E.g., in the file stream source we delete source files based on the commit, so
stale or mismatched metadata could lead to deleting the wrong files. We cannot
trade off correctness for performance, so I'd prefer the safer approach unless
this one is shown to have a noticeable performance impact.
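To make this concrete, here's a minimal, self-contained sketch of the
cache-and-evict pattern the diff introduces; the `CachedOffsetLog` class and
the simplified `OffsetSeq` are illustrative placeholders, not the PR's actual
types:

```scala
import java.{util => ju}

// Simplified stand-in for Spark's OffsetSeq, just for this sketch.
final case class OffsetSeq(offsets: Seq[String])

class CachedOffsetLog {
  // Sorted by batch ID, so older entries can be evicted via a range view.
  private val cachedMetadata = new ju.TreeMap[Long, OffsetSeq]()

  def add(batchId: Long, metadata: OffsetSeq): Unit = {
    // Cache the metadata, as it will be read back while planning later batches.
    cachedMetadata.put(batchId, metadata)
    // Only batchId and (batchId - 1) are read again; evict anything older.
    // headMap(k) is a live view of the keys strictly less than k, so clearing
    // the view removes those entries from the backing map.
    cachedMetadata.headMap(batchId - 1).clear()
  }

  def get(batchId: Long): Option[OffsetSeq] =
    Option(cachedMetadata.get(batchId))
}
```

Because the cache is keyed by batch ID, a lookup either returns exactly that
batch's metadata or nothing, which is the coupling I'd want to preserve for
the commit path.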
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]