Github user uncleGen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15852#discussion_r87936909
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CompactibleFileStreamLog.scala ---
@@ -63,7 +63,60 @@ abstract class CompactibleFileStreamLog[T <: AnyRef : ClassTag](
protected def isDeletingExpiredLog: Boolean
- protected def compactInterval: Int
+ protected def defaultCompactInterval: Int
+
+ protected final lazy val compactInterval: Int = {
+   // SPARK-18187: "compactInterval" can be set by user via defaultCompactInterval.
+   // If there are existing log entries, then we should ensure a compatible compactInterval
+   // is used, irrespective of the defaultCompactInterval. There are three cases:
+   //
+   // 1. If there is no '.compact' file, we can use the default setting directly.
+   // 2. If there are two or more '.compact' files, we use the interval of patch id suffix with
+   //    '.compact' as compactInterval. It is unclear whether this case will ever happen in the
+   //    current code, since only the latest '.compact' file is retained i.e., other are garbage
+   //    collected.
--- End diff --
The log garbage collection is controlled by 'spark.sql.streaming.fileSource.log.deletion'. When it is set to 'false', older '.compact' files are not deleted, so there may be two or more '.compact' files and case 2 can actually occur.
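To make case 2 concrete: when deletion is disabled, the metadata directory can keep several '.compact' batch files, and a compatible interval can be read off as the gap between the two latest ones. Below is only a rough, self-contained sketch of that idea; the file names, helper names, and the case-3 shortcut are my own illustration, not the code in this PR.

```scala
object CompactIntervalSketch {
  val COMPACT_FILE_SUFFIX = ".compact"

  // Hypothetical helper: pull the batch ids of '.compact' files out of a listing
  // of metadata log file names, e.g. Seq("8", "9.compact", "10", "19.compact").
  def compactBatchIds(logFileNames: Seq[String]): Seq[Long] =
    logFileNames
      .filter(_.endsWith(COMPACT_FILE_SUFFIX))
      .map(_.stripSuffix(COMPACT_FILE_SUFFIX).toLong)
      .sorted

  // Derive a compactInterval that stays compatible with the existing log:
  //  - no '.compact' file      -> fall back to defaultCompactInterval (case 1)
  //  - two or more '.compact'  -> gap between the two latest compact batch ids (case 2)
  //  - exactly one '.compact'  -> kept as the default here for brevity; the PR's
  //    case 3 needs extra care to stay compatible with that single compacted batch
  def deriveCompactInterval(logFileNames: Seq[String], defaultCompactInterval: Int): Int = {
    val ids = compactBatchIds(logFileNames)
    if (ids.size >= 2) (ids(ids.size - 1) - ids(ids.size - 2)).toInt
    else defaultCompactInterval
  }

  def main(args: Array[String]): Unit = {
    // With deletion disabled, older '.compact' files survive, so case 2 applies:
    // interval = 19 - 9 = 10, regardless of the configured default of 5.
    val files = Seq("8", "9.compact", "10", "19.compact", "20")
    println(deriveCompactInterval(files, defaultCompactInterval = 5)) // prints 10
  }
}
```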