[ https://issues.apache.org/jira/browse/FLINK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sihua Zhou updated FLINK-8602:
------------------------------
    Description: 
Currently, when incremental checkpointing is enabled and the user changes the 
parallelism, `hasExtraKeys` may be `true`. When this occurs, Flink loops over 
all RocksDB instances and iterates over all of their data to fetch the entries 
that fall into the current `KeyGroupRange`. This can be improved as follows:

- 1. For multiple RocksDB instances, instead of iterating over their entries and 
inserting them into another instance, we can use the `ingestExternalFile()` API 
to merge them (see the sketch below).
- 2. For the key groups that do not belong to the target `KeyGroupRange`, we can 
delete them lazily by setting a `CompactionFilter` on the `ColumnFamily`.

Any advice would be highly appreciated!
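Below is a minimal sketch of the first idea using the RocksJava API (assuming a 
recent RocksJava version where `SstFileWriter` and `ingestExternalFile()` are 
available). The database paths and the standalone `main` are hypothetical and 
only for illustration; they are not the actual Flink restore code path:

```java
import java.util.Collections;

import org.rocksdb.EnvOptions;
import org.rocksdb.IngestExternalFileOptions;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;
import org.rocksdb.SstFileWriter;

public class IngestExample {

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();

        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB targetDb = RocksDB.open(options, "/tmp/target-db");
             RocksDB extraDb = RocksDB.open(options, "/tmp/extra-db")) {

            // 1) Dump the extra instance's data into an SST file (assumes the
            //    extra instance is non-empty). In a real restore this step could
            //    potentially reuse the state handle's existing SST files; writing
            //    a fresh SST here is only for illustration.
            String sstPath = "/tmp/extra-db.sst";
            try (EnvOptions envOptions = new EnvOptions();
                 SstFileWriter writer = new SstFileWriter(envOptions, options)) {
                writer.open(sstPath);
                try (RocksIterator it = extraDb.newIterator()) {
                    for (it.seekToFirst(); it.isValid(); it.next()) {
                        // Keys must be added in sorted order, which the iterator guarantees.
                        writer.put(it.key(), it.value());
                    }
                }
                writer.finish();
            }

            // 2) Ingest the SST file into the target instance instead of
            //    re-inserting every entry through the normal write path.
            try (IngestExternalFileOptions ingestOptions =
                     new IngestExternalFileOptions().setMoveFiles(true)) {
                targetDb.ingestExternalFile(
                    Collections.singletonList(sstPath), ingestOptions);
            }
        }
    }
}
```

Setting `setMoveFiles(true)` lets RocksDB move the SST file into the target DB 
instead of copying it, which is the cheap path we would want during restore.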

 

  was:
Currently, when incremental checkpointing is enabled and the user changes the 
parallelism, `hasExtraKeys` may be `true`. When this occurs, Flink loops over 
all RocksDB instances and iterates over all of their data to fetch the entries 
that fall into the current `KeyGroupRange`. This can be improved because

 if a state handle's `KeyGroupRange` is fully covered by the current 
`KeyGroupRange`, we can open the RocksDB instance it corresponds to directly.

 


> Accelerate recovery from failover when using incremental checkpoint
> -------------------------------------------------------------------
>
>                 Key: FLINK-8602
>                 URL: https://issues.apache.org/jira/browse/FLINK-8602
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.5.0
>            Reporter: Sihua Zhou
>            Assignee: Sihua Zhou
>            Priority: Major
>
> Currently, when incremental checkpointing is enabled and the user changes the 
> parallelism, `hasExtraKeys` may be `true`. When this occurs, Flink loops over 
> all RocksDB instances and iterates over all of their data to fetch the entries 
> that fall into the current `KeyGroupRange`. This can be improved as follows:
> - 1. For multiple RocksDB instances, instead of iterating over their entries 
> and inserting them into another instance, we can use the `ingestExternalFile()` 
> API to merge them.
> - 2. For the key groups that do not belong to the target `KeyGroupRange`, we 
> can delete them lazily by setting a `CompactionFilter` on the `ColumnFamily`.
> Any advice would be highly appreciated!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
