GitHub user bowenli86 commented on a diff in the pull request:
https://github.com/apache/flink/pull/5239#discussion_r167355583
--- Diff: docs/ops/state/large_state_tuning.md ---
@@ -234,4 +234,97 @@ Compression can be activated through the `ExecutionConfig`:
**Notice:** The compression option has no impact on incremental snapshots, because they use RocksDB's internal format, which always uses snappy compression out of the box.
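
For reference, activating the option looks roughly like this (a minimal sketch; `setUseSnapshotCompression` is the `ExecutionConfig` setter for this option):

```java
import org.apache.flink.api.common.ExecutionConfig;

ExecutionConfig executionConfig = new ExecutionConfig();
// Compress full checkpoint snapshots with snappy (off by default).
executionConfig.setUseSnapshotCompression(true);
```
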
+## Task-Local Recovery
+
+### Motivation
+
+In Flink's checkpointing, each task produces a snapshot of its state that is then written to a distributed store. Each task acknowledges
+a successful write of the state to the job manager by sending a handle that describes the location of the state in the distributed store.
+The job manager, in turn, collects the handles from all tasks and bundles them into a checkpoint object.
+
+In case of recovery, the job manager opens the latest checkpoint object and sends the handles back to the corresponding tasks, which can
+then restore their state from the distributed storage. Using distributed storage for state has two important advantages. First, the storage
+is fault tolerant, and second, all state in the distributed store is accessible to all nodes and can be easily redistributed (e.g. for rescaling).
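+
+Schematically, this acknowledge-and-restore flow looks roughly like the following (a hypothetical sketch; `StateHandle`, `JobManagerSketch`, and the method names are illustrative, not Flink's internal API):
+
+```java
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+// Illustrative only: a handle describes *where* a snapshot lives in the
+// distributed store, not the state itself.
+final class StateHandle {
+    final String locationInDistributedStore;
+
+    StateHandle(String locationInDistributedStore) {
+        this.locationInDistributedStore = locationInDistributedStore;
+    }
+}
+
+// Hypothetical stand-in for the job manager's checkpoint bookkeeping.
+final class JobManagerSketch {
+    private final Map<Integer, StateHandle> handlesByTask = new HashMap<>();
+
+    // Each task acknowledges a successful write by sending its handle.
+    void acknowledge(int taskId, StateHandle handle) {
+        handlesByTask.put(taskId, handle);
+    }
+
+    // The handles of all tasks are bundled into a checkpoint object.
+    Map<Integer, StateHandle> completeCheckpoint() {
+        return Collections.unmodifiableMap(new HashMap<>(handlesByTask));
+    }
+
+    // On recovery, each task receives its handle back and restores its
+    // state from the referenced location in the distributed store.
+    StateHandle handleForTask(Map<Integer, StateHandle> checkpoint, int taskId) {
+        return checkpoint.get(taskId);
+    }
+}
+```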
+
+However, using a remote distributed store also has one big disadvantage: all tasks must read their state from a remote location, over the network.
+In many scenarios, recovery could reschedule failed tasks to the same task manager as in the previous run (of course there are exceptions like machine
+failures), but we still have to read remote state. This can result in *long recovery times for large states*, even if there was only a small failure on
+a single machine.
+
+### Approach
+
+Task-local state recovery targets exactly this problem of long recovery times, and the main idea is the following: for every checkpoint, we not
+only write task states to the distributed storage, but also keep *a secondary copy of the state snapshot in a storage that is local to the task* (e.g. on local disk or in memory).
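+
+For illustration, keeping such a secondary copy could be switched on roughly like this (a sketch, assuming the `state.backend.local-recovery` flag, exposed as `CheckpointingOptions.LOCAL_RECOVERY` and deactivated by default):
+
+```java
+import org.apache.flink.configuration.CheckpointingOptions;
+import org.apache.flink.configuration.Configuration;
+
+Configuration config = new Configuration();
+// Keep a secondary, task-local copy of each snapshot in addition to the
+// primary copy in the distributed store.
+config.setBoolean(CheckpointingOptions.LOCAL_RECOVERY, true);
+```
+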
+Notice that the primary store for snapshots must still be the distributed store, because local storage does not
+ensure durability under node failures abd also does not provide access for other nodes to redistribute state; this functionality still requires the
--- End diff ---
abd -> and
---