HeartSaVioR commented on code in PR #38430:
URL: https://github.com/apache/spark/pull/38430#discussion_r1008968288
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/HDFSMetadataLog.scala:
##########
@@ -64,6 +67,17 @@ class HDFSMetadataLog[T <: AnyRef : ClassTag](sparkSession: SparkSession, path:
fileManager.mkdirs(metadataPath)
}
+  protected val metadataCacheEnabled: Boolean =
+    sparkSession.sessionState.conf.getConf(SQLConf.STREAMING_METADATA_CACHE_ENABLED)
+
+  /**
+   * Cache the latest two batches. [[StreamExecution]] usually just accesses the latest two
+   * batches when committing offsets, this cache will save some file system operations.
+   */
+  protected[sql] val batchCache = Collections.synchronizedMap(new LinkedHashMap[Long, T](2) {
Review Comment:
In the previous implementation we didn't use a Guava cache. We don't want to
add more coupling to Guava unless it would be quite cumbersome to implement
ourselves. I provided the diff for caching the offset seq; see
https://github.com/apache/spark/pull/31495
This is a generalization of the previous cache from the offset seq to
HDFSMetadataLog, which was actually requested in my PR as well. As of now,
the cache is only effective for the offset seq, but we plan to propose
features where caching the commit log would be helpful.
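As a rough sketch of the non-Guava approach the diff takes, a size-bounded cache can be built from plain JDK collections: a `LinkedHashMap` that overrides `removeEldestEntry` to evict the oldest batch once the capacity is exceeded, wrapped in `Collections.synchronizedMap` for thread safety. The class and method names below (`BoundedBatchCache`, `newBatchCache`) are hypothetical, not from the PR:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedBatchCache {
    // Hypothetical sketch: an insertion-ordered cache capped at maxEntries,
    // mirroring the technique in the diff (LinkedHashMap + synchronizedMap,
    // no Guava dependency).
    static <K, V> Map<K, V> newBatchCache(final int maxEntries) {
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(maxEntries) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the oldest entry once the map grows past maxEntries.
                return size() > maxEntries;
            }
        });
    }

    public static void main(String[] args) {
        Map<Long, String> cache = newBatchCache(2);
        cache.put(0L, "offsets-0");
        cache.put(1L, "offsets-1");
        cache.put(2L, "offsets-2"); // evicts batch 0, the eldest entry
        System.out.println(cache.keySet()); // prints [1, 2]
    }
}
```

Keeping only the latest two batches matches the access pattern described in the diff's comment: `StreamExecution` usually only reads the most recent two batches when committing offsets.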
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]