[ https://issues.apache.org/jira/browse/FLINK-26322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17504708#comment-17504708 ]

Gen Luo commented on FLINK-26322:
---------------------------------

I found issues that may relate to scenario 4. The Jira issue is FLINK-26580, and 
a PR has been created.

It seems that the compactor was not properly processing in-progress files, but 
we failed to notice this earlier because the test case uses a rolling policy 
that force-flushes the in-progress file before checkpointing. After changing 
this policy, two issues surfaced, both of which are fixed in the newly created PR. 
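
For reference, the difference can be illustrated with a minimal sketch against the 1.15 FileSink API (this is not the actual test code from the PR; the class and method names, the output path, the String records and the threshold values are placeholders):

    import java.time.Duration;

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.configuration.MemorySize;
    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;
    import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

    public class RollingPolicySketch {

        // Rolls all in-progress files on every checkpoint, so the compactor only
        // ever receives finished pending files; this is what masked the problem.
        static FileSink<String> rollOnCheckpoint(String outputDir) {
            return FileSink.forRowFormat(new Path(outputDir), new SimpleStringEncoder<String>())
                    .withRollingPolicy(OnCheckpointRollingPolicy.build())
                    .build();
        }

        // Large rollover thresholds keep files in progress across checkpoints,
        // which is the case that exposed the compactor issue.
        static FileSink<String> keepInProgressAcrossCheckpoints(String outputDir) {
            return FileSink.forRowFormat(new Path(outputDir), new SimpleStringEncoder<String>())
                    .withRollingPolicy(
                            DefaultRollingPolicy.builder()
                                    .withRolloverInterval(Duration.ofHours(1))
                                    .withInactivityInterval(Duration.ofHours(1))
                                    .withMaxPartSize(MemorySize.ofMebiBytes(1024))
                                    .build())
                    .build();
        }
    }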

> Test FileSink compaction manually
> ---------------------------------
>
>                 Key: FLINK-26322
>                 URL: https://issues.apache.org/jira/browse/FLINK-26322
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Connectors / FileSystem
>    Affects Versions: 1.15.0
>            Reporter: Yun Gao
>            Assignee: Alexander Preuss
>            Priority: Blocker
>              Labels: release-testing
>             Fix For: 1.15.0
>
>
> Documentation of compaction on FileSink: 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/filesystem/#compaction]
> Possible scenarios might include
>  1. Enable compaction with a file-size-based compaction strategy.
>  2. Enable compaction with a number-of-checkpoints-based compaction strategy.
>  3. Enable compaction, stop-with-savepoint, and restart with compaction disabled.
>  4. Disable compaction, stop-with-savepoint, and restart with compaction enabled.
> For each scenario, it might be necessary to verify that
>  1. There are no repeated or missing records.
>  2. The resulting files' sizes exceed the specified threshold.
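
For scenarios 1 and 2 in the quoted description, a minimal sketch of the two compaction strategies, following the linked documentation for the 1.15 FileSink API (the class and method names, the output path, the String records, the 16 MiB threshold and the 5-checkpoint interval are arbitrary placeholders):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.connector.file.sink.FileSink;
    import org.apache.flink.connector.file.sink.compactor.DecoderBasedReader;
    import org.apache.flink.connector.file.sink.compactor.FileCompactStrategy;
    import org.apache.flink.connector.file.sink.compactor.RecordWiseFileCompactor;
    import org.apache.flink.connector.file.sink.compactor.SimpleStringDecoder;
    import org.apache.flink.core.fs.Path;

    public class CompactionStrategySketch {

        // Scenario 1: trigger compaction once the pending files reach a size threshold.
        static FileSink<String> sizeBasedCompaction(String outputDir) {
            return FileSink.forRowFormat(new Path(outputDir), new SimpleStringEncoder<String>())
                    .enableCompact(
                            FileCompactStrategy.Builder.newBuilder()
                                    .setSizeThreshold(16 * 1024 * 1024) // 16 MiB, placeholder
                                    .build(),
                            new RecordWiseFileCompactor<>(
                                    new DecoderBasedReader.Factory<>(SimpleStringDecoder::new)))
                    .build();
        }

        // Scenario 2: trigger compaction every N checkpoints.
        static FileSink<String> checkpointBasedCompaction(String outputDir) {
            return FileSink.forRowFormat(new Path(outputDir), new SimpleStringEncoder<String>())
                    .enableCompact(
                            FileCompactStrategy.Builder.newBuilder()
                                    .enableCompactionOnCheckpoint(5) // every 5 checkpoints, placeholder
                                    .build(),
                            new RecordWiseFileCompactor<>(
                                    new DecoderBasedReader.Factory<>(SimpleStringDecoder::new)))
                    .build();
        }
    }

Scenarios 3 and 4 would additionally involve stop-with-savepoint and restarting the job with the compaction setting flipped, which this sketch does not cover.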


