Github user bowenli86 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5239#discussion_r167354757
  
    --- Diff: docs/ops/state/large_state_tuning.md ---
    @@ -234,4 +234,97 @@ Compression can be activated through the `ExecutionConfig`:
     **Notice:** The compression option has no impact on incremental snapshots, because they use RocksDB's internal format, which always uses snappy compression out of the box.
     
    +## Task-Local Recovery
    +
    +### Motivation
    +
    +In Flink's checkpointing, each task produces a snapshot of its state that is then written to a distributed store. Each task acknowledges
    +a successful write of the state to the job manager by sending a handle that describes the location of the state in the distributed store.
    +The job manager, in turn, collects the handles from all tasks and bundles them into a checkpoint object.
    +
    +In case of recovery, the job manager opens the latest checkpoint object and sends the handles back to the corresponding tasks, which can
    +then restore their state from the distributed storage. Using distributed storage for state has two important advantages. First, the storage
    +is fault tolerant, and second, all state in the distributed store is accessible to all nodes and can easily be redistributed (e.g. for rescaling).
    +
    +However, using a remote distributed store also has one big disadvantage: all tasks must read their state from a remote location, over the network.
    +In many scenarios, recovery could reschedule failed tasks to the same task manager as in the previous run (of course there are exceptions like machine
    +failures), but we still have to read remote state. This can result in *long recovery times for large states*, even if there was only a small failure on
    --- End diff ---
    
    long recovery time**~s~**
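
    As context for anyone following this thread: the task-local recovery feature documented in this diff is an opt-in setting. Below is a minimal sketch of enabling it when creating a local environment, assuming the `state.backend.local-recovery` configuration key; the key name is taken from this PR's documentation and may differ between Flink versions, so verify it against the `CheckpointingOptions` class of your release.

    ```java
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LocalRecoveryExample {
        public static void main(String[] args) throws Exception {
            Configuration config = new Configuration();
            // Assumed key from this PR's docs; check CheckpointingOptions
            // in your Flink version for the exact name and value type.
            config.setString("state.backend.local-recovery", "true");

            // Pass the configuration when creating the environment so that
            // recovering tasks can restore from a local state copy first
            // instead of always reading from the remote distributed store.
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, config);
            env.fromElements(1, 2, 3).print();
            env.execute("local-recovery-sketch");
        }
    }
    ```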

