Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15163#discussion_r79668094
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSinkLog.scala ---
    @@ -79,213 +76,47 @@ object SinkFileStatus {
      * When the reader uses `allFiles` to list all files, this method only returns the visible files
      * (drops the deleted files).
      */
    -class FileStreamSinkLog(sparkSession: SparkSession, path: String)
    -  extends HDFSMetadataLog[Array[SinkFileStatus]](sparkSession, path) {
    -
    -  import FileStreamSinkLog._
    +class FileStreamSinkLog(
    +    metadataLogVersion: String,
    +    sparkSession: SparkSession,
    +    path: String)
    +  extends CompactibleFileStreamLog[SinkFileStatus](metadataLogVersion, sparkSession, path) {
     
       private implicit val formats = Serialization.formats(NoTypeHints)
     
    -  /**
    -   * If we delete the old files after compaction at once, there is a race condition in S3: other
    -   * processes may see the old files are deleted but still cannot see the compaction file using
    -   * "list". The `allFiles` handles this by looking for the next compaction file directly, however,
    -   * a live lock may happen if the compaction happens too frequently: one processing keeps deleting
    -   * old files while another one keeps retrying. Setting a reasonable cleanup delay could avoid it.
    -   */
    -  private val fileCleanupDelayMs = sparkSession.conf.get(SQLConf.FILE_SINK_LOG_CLEANUP_DELAY)
    +  protected override val fileCleanupDelayMs =
    --- End diff --
    
    Just resolved conflicts for these 3 confs
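    For context, the doc comment removed in this hunk explains why the cleanup delay exists: if files made obsolete by a compaction are deleted immediately, other readers on S3 may observe the deletions before the compaction file becomes visible via "list", and overly frequent compaction plus retrying readers can live-lock. Below is a minimal standalone Scala sketch of that deferred-deletion idea, not the actual Spark implementation: the `DelayedCleanup` class, its method names, and the 10-minute delay are illustrative assumptions; only `fileCleanupDelayMs` and `SQLConf.FILE_SINK_LOG_CLEANUP_DELAY` come from the diff above.

        // Illustrative sketch only, not the Spark implementation: defer deleting files
        // superseded by a compaction until a configurable delay has elapsed, so that
        // readers still listing the directory can find the old files in the meantime.
        import java.nio.file.{Files, Path, Paths}
        import scala.collection.mutable

        class DelayedCleanup(cleanupDelayMs: Long) {
          // file -> timestamp at which it became eligible for deletion
          private val pendingDeletes = mutable.Map.empty[Path, Long]

          /** Record files made obsolete by a compaction; nothing is deleted yet. */
          def markSuperseded(files: Seq[Path]): Unit = {
            val now = System.currentTimeMillis()
            files.foreach(f => pendingDeletes.getOrElseUpdate(f, now))
          }

          /** Delete only the files whose cleanup delay has fully elapsed. */
          def cleanupExpired(): Unit = {
            val now = System.currentTimeMillis()
            val expired = pendingDeletes.collect {
              case (file, markedAt) if now - markedAt >= cleanupDelayMs => file
            }
            expired.foreach { file =>
              Files.deleteIfExists(file)
              pendingDeletes.remove(file)
            }
          }
        }

        object DelayedCleanupExample {
          def main(args: Array[String]): Unit = {
            // The 10-minute delay is a placeholder; in Spark the value would come from
            // a conf such as SQLConf.FILE_SINK_LOG_CLEANUP_DELAY (seen in the diff).
            val cleanup = new DelayedCleanup(cleanupDelayMs = 10L * 60 * 1000)
            cleanup.markSuperseded(Seq(Paths.get("/tmp/_spark_metadata/0")))
            cleanup.cleanupExpired() // no-op until the delay has elapsed
          }
        }

    The design point from the removed comment is that deletion is decoupled from compaction: a reasonable delay gives slow readers time to find the compaction file before the old entries disappear.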

