GitHub user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/13513#discussion_r79668559
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala ---
@@ -79,213 +76,46 @@ object SinkFileStatus {
  * When the reader uses `allFiles` to list all files, this method only returns the visible files
  * (drops the deleted files).
  */
-class FileStreamSinkLog(sparkSession: SparkSession, path: String)
-  extends HDFSMetadataLog[Array[SinkFileStatus]](sparkSession, path) {
-
-  import FileStreamSinkLog._
+class FileStreamSinkLog(
+    metadataLogVersion: String,
+    sparkSession: SparkSession,
+    path: String)
+  extends CompactibleFileStreamLog[SinkFileStatus](metadataLogVersion, sparkSession, path) {
 
   private implicit val formats = Serialization.formats(NoTypeHints)
 
-  /**
-   * If we delete the old files after compaction at once, there is a race condition in S3: other
-   * processes may see the old files are deleted but still cannot see the compaction file using
-   * "list". The `allFiles` handles this by looking for the next compaction file directly, however,
-   * a live lock may happen if the compaction happens too frequently: one processing keeps deleting
-   * old files while another one keeps retrying. Setting a reasonable cleanup delay could avoid it.
-   */
-  private val fileCleanupDelayMs = sparkSession.sessionState.conf.fileSinkLogCleanupDelay
+  protected override val fileCleanupDelayMs =
--- End diff ---
I just noticed some conflicts here. Could you submit a follow-up PR to use the previous `sparkSession.sessionState.conf.fileSinkLogCleanupDelay`, same as the other confs? This only exists in the master branch, so we don't need to fix branch-2.0.
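For context, the delayed-cleanup idea described in the removed comment can be sketched roughly like this. This is a minimal illustration, not Spark's actual implementation; the names (`PendingCleanup`, `markObsolete`, `deleteExpired`) are hypothetical:

```scala
import scala.collection.mutable

// Sketch: instead of deleting obsolete log files immediately after compaction
// (which can race with other processes listing the log on S3), remember when
// each file became obsolete and only delete it once a cleanup delay elapses.
class PendingCleanup(cleanupDelayMs: Long) {
  // file path -> timestamp (ms) at which the file became obsolete
  private val obsoleteAt = mutable.Map[String, Long]()

  // Record a file as obsolete; keep the earliest timestamp if called twice.
  def markObsolete(path: String, nowMs: Long): Unit =
    obsoleteAt.getOrElseUpdate(path, nowMs)

  // Return the paths whose delay has expired and drop them from the pending
  // set; a caller would issue the actual filesystem deletes for these.
  def deleteExpired(nowMs: Long): Seq[String] = {
    val expired = obsoleteAt.collect {
      case (path, t) if nowMs - t >= cleanupDelayMs => path
    }.toSeq
    expired.foreach(obsoleteAt.remove)
    expired
  }
}
```

With a reasonable delay, readers that still see the old files get time to discover the compaction file before the old files disappear, which avoids both the S3 visibility race and the live-lock the comment mentions.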