[ https://issues.apache.org/jira/browse/FLINK-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16359252#comment-16359252 ]

ASF GitHub Bot commented on FLINK-8360:
---------------------------------------

Github user bowenli86 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5239#discussion_r167355160
  
    --- Diff: docs/ops/state/large_state_tuning.md ---
    @@ -234,4 +234,97 @@ Compression can be activated through the `ExecutionConfig`:
     **Notice:** The compression option has no impact on incremental snapshots, because they use RocksDB's internal format, which always uses snappy compression out of the box.
     
    +## Task-Local Recovery
    +
    +### Motivation
    +
    +In Flink's checkpointing, each task produces a snapshot of its state that is then written to a distributed store. Each task acknowledges a successful write of the state to the job manager by sending a handle that describes the location of the state in the distributed store. The job manager, in turn, collects the handles from all tasks and bundles them into a checkpoint object.
    +
    +In case of recovery, the job manager opens the latest checkpoint object and sends the handles back to the corresponding tasks, which can then restore their state from the distributed storage. Using distributed storage to store state has two important advantages: first, the storage is fault tolerant, and second, all state in the distributed store is accessible to all nodes and can easily be redistributed (e.g. for rescaling).
    +
    +However, using a remote distributed store also has one big disadvantage: all tasks must read their state from a remote location, over the network. In many scenarios, recovery could reschedule failed tasks to the same task manager as in the previous run (of course there are exceptions like machine failures), but the state still has to be read remotely. This can result in *long recovery times for large states*, even if the failure only affected a single machine.
    +
    +### Approach
    +
    +Task-local state recovery targets exactly this problem of long recovery times. The main idea is the following: for every checkpoint, we do not
    --- End diff --
    
    'we' refers to 'each task'? Better to be explicit about it

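For reference, the feature described in the diff is driven by a cluster configuration switch rather than job code. Below is a minimal sketch, assuming the `state.backend.local-recovery` key that this work introduces (the exact key and value type may differ in the final release); it is an illustration, not the documented setup:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LocalRecoveryConfigSketch {
        public static void main(String[] args) throws Exception {
            // Assumed key from FLINK-8360; on a real cluster this would normally
            // be set in flink-conf.yaml on the task managers.
            Configuration conf = new Configuration();
            conf.setString("state.backend.local-recovery", "true");

            // A local environment accepts a Configuration, which makes the flag easy to try out.
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.createLocalEnvironment(1, conf);

            // Local recovery only matters when checkpointing is enabled.
            env.enableCheckpointing(10_000);

            env.fromElements(1, 2, 3).print();
            env.execute("local-recovery-sketch");
        }
    }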

> Implement task-local state recovery
> -----------------------------------
>
>                 Key: FLINK-8360
>                 URL: https://issues.apache.org/jira/browse/FLINK-8360
>             Project: Flink
>          Issue Type: New Feature
>          Components: State Backends, Checkpointing
>            Reporter: Stefan Richter
>            Assignee: Stefan Richter
>            Priority: Major
>             Fix For: 1.5.0
>
>
> This issue tracks the development of recovery from task-local state. The main 
> idea is to have a secondary, local copy of the checkpointed state, while 
> there is still a primary copy in DFS that we report to the checkpoint 
> coordinator.
> Recovery can attempt to restore from the secondary local copy, if available, 
> to save network bandwidth. This requires that the assignment from tasks to 
> slots is as sticky as possible.
> For starters, we will implement this feature for all managed keyed states and 
> can easily extend it to all other state types (e.g. operator state) later.
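
To make the restore path described above concrete, here is a deliberately simplified, illustrative-only sketch of the fallback idea (this is not Flink's internal restore code; the names and types are made up for illustration):

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    final class RestoreSketch {
        /** Prefer the secondary, task-local copy; fall back to the primary copy in DFS. */
        static InputStream openStateForRestore(Path localCopy, Path primaryCopy) throws IOException {
            // If the task was rescheduled onto the same task manager ("sticky" slot
            // assignment), the local copy may still be present and saves a network read.
            if (localCopy != null && Files.isReadable(localCopy)) {
                return Files.newInputStream(localCopy);
            }
            // Otherwise read the primary copy from the distributed store.
            return Files.newInputStream(primaryCopy);
        }
    }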



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
