[ https://issues.apache.org/jira/browse/FLINK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373970#comment-16373970 ]

Sihua Zhou commented on FLINK-8602:
-----------------------------------

[~stefanrichte...@gmail.com] Could you please have a look at this?

> Accelerate recover from failover when use incremental checkpoint
> ----------------------------------------------------------------
>
>                 Key: FLINK-8602
>                 URL: https://issues.apache.org/jira/browse/FLINK-8602
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.5.0
>            Reporter: Sihua Zhou
>            Assignee: Sihua Zhou
>            Priority: Major
>
> Currently, when incremental checkpointing is enabled and the user changes the 
> parallelism, `hasExtraKeys` may be `true`. When this occurs, Flink loops over 
> all RocksDB instances and iterates over all of their data to fetch the entries 
> that fall into the current `KeyGroupRange`. This can be improved as follows:
> 1. For multiple RocksDB instances, we don't need to iterate over their entries 
> and insert them into another instance; we can use the `ingestExternalFile()` 
> API to merge them (see the first sketch below).
> 2. For key groups that do not belong to the target `KeyGroupRange`, we can 
> delete them lazily by setting a compaction filter for the `ColumnFamily` (see 
> the second sketch below).
> Any advice would be highly appreciated!
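
A minimal sketch of the first idea, using the RocksDB Java API's `ingestExternalFile()`. The database path and SST file paths below are placeholders; selecting or exporting the SST files from the other instances is assumed to have happened already and is not shown.

```java
import java.util.Arrays;
import java.util.List;

import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.IngestExternalFileOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class IngestSketch {

    /**
     * Ingests externally created SST files into the target column family
     * instead of iterating over and re-inserting every entry.
     */
    static void ingestInto(
            RocksDB targetDb,
            ColumnFamilyHandle columnFamily,
            List<String> sstFilePaths) throws RocksDBException {

        try (IngestExternalFileOptions options = new IngestExternalFileOptions()) {
            // Move the files instead of copying them when possible.
            options.setMoveFiles(true);
            targetDb.ingestExternalFile(columnFamily, sstFilePaths, options);
        }
    }

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (RocksDB db = RocksDB.open("/tmp/target-rocksdb")) {
            // Hypothetical SST files taken from the other instances' snapshots.
            ingestInto(db, db.getDefaultColumnFamily(),
                    Arrays.asList("/tmp/restore/000001.sst", "/tmp/restore/000002.sst"));
        }
    }
}
```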
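And a sketch of the range check that the compaction filter of the second idea would apply. Flink's RocksDB backend prefixes every key with its serialized key group (1 or 2 bytes, depending on max parallelism), so the filter can drop entries whose prefix falls outside the target `KeyGroupRange`. Note that the RocksDB Java API only wraps native compaction filters, so the filter logic itself would need a native implementation; the class and field names below are illustrative.

```java
/**
 * Predicate a key-group compaction filter would evaluate per entry.
 * Illustrative only: keyGroupPrefixBytes, startKeyGroup and endKeyGroup
 * are assumed names, not existing Flink/RocksDB API.
 */
final class KeyGroupRangeCheck {

    private final int keyGroupPrefixBytes; // 1 or 2, depending on max parallelism
    private final int startKeyGroup;       // inclusive
    private final int endKeyGroup;         // inclusive

    KeyGroupRangeCheck(int keyGroupPrefixBytes, int startKeyGroup, int endKeyGroup) {
        this.keyGroupPrefixBytes = keyGroupPrefixBytes;
        this.startKeyGroup = startKeyGroup;
        this.endKeyGroup = endKeyGroup;
    }

    /** Returns true if the entry should be dropped during compaction. */
    boolean shouldDrop(byte[] rocksDbKey) {
        // Decode the big-endian key-group prefix from the serialized key.
        int keyGroup = 0;
        for (int i = 0; i < keyGroupPrefixBytes; i++) {
            keyGroup = (keyGroup << 8) | (rocksDbKey[i] & 0xFF);
        }
        return keyGroup < startKeyGroup || keyGroup > endKeyGroup;
    }
}
```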



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
