Repository: flink
Updated Branches:
  refs/heads/release-1.2 576cc895c -> db3c5f388


[FLINK-5894] [docs] Fix misleading HA docs

This closes #3401.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/db3c5f38
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/db3c5f38
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/db3c5f38

Branch: refs/heads/release-1.2
Commit: db3c5f388b3f9acac19d3d0395df96d7a58fcdc9
Parents: 576cc89
Author: Ufuk Celebi <[email protected]>
Authored: Thu Feb 23 13:30:13 2017 +0100
Committer: Ufuk Celebi <[email protected]>
Committed: Thu Feb 23 13:49:44 2017 +0100

----------------------------------------------------------------------
 docs/setup/jobmanager_high_availability.md | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/db3c5f38/docs/setup/jobmanager_high_availability.md
----------------------------------------------------------------------
diff --git a/docs/setup/jobmanager_high_availability.md b/docs/setup/jobmanager_high_availability.md
index aa18a4b..5949835 100644
--- a/docs/setup/jobmanager_high_availability.md
+++ b/docs/setup/jobmanager_high_availability.md
@@ -84,14 +84,13 @@ In order to start an HA-cluster add the following configuration keys to `conf/fl
 
  **Important**: if you are running multiple Flink HA clusters, you have to manually configure separate namespaces for each cluster. By default, the Yarn cluster and the Yarn session automatically generate namespaces based on Yarn application id. A manual configuration overrides this behaviour in Yarn. Specifying a namespace with the -z CLI option, in turn, overrides manual configuration.
 
-- **State backend and storage directory** (required): JobManager meta data is persisted in the *state backend* and only a pointer to this state is stored in ZooKeeper. Currently, only the file system state backend is supported in HA mode.
+- **Storage directory** (required): JobManager metadata is persisted in the file system *storageDir* and only a pointer to this state is stored in ZooKeeper.
 
     <pre>
-high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
+high-availability.zookeeper.storageDir: hdfs:///flink/recovery
+    </pre>
 
-    The `storageDir` stores all meta data needed to recover a JobManager failure.
+    The `storageDir` stores all metadata needed to recover a JobManager failure.
 
 After configuring the masters and the ZooKeeper quorum, you can use the provided cluster startup scripts as usual. They will start an HA-cluster. Keep in mind that the **ZooKeeper quorum has to be running** when you call the scripts and make sure to **configure a separate ZooKeeper root path** for each HA cluster you are starting.
 
@@ -106,9 +105,6 @@ high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
 high-availability.zookeeper.storageDir: hdfs:///flink/recovery</pre>
 
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
-
 2. **Configure masters** in `conf/masters`:
 
    <pre>
@@ -192,8 +188,6 @@ high-availability.zookeeper.quorum: localhost:2181
 high-availability.zookeeper.storageDir: hdfs:///flink/recovery
 high-availability.zookeeper.path.root: /flink
 high-availability.zookeeper.path.namespace: /cluster_one # important: customize per cluster
-state.backend: filesystem
-state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
 yarn.application-attempts: 10</pre>
 
 3. **Configure ZooKeeper server** in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):
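
For reference, the ZooKeeper HA fragment that the corrected docs prescribe boils down to something like the following sketch of `flink-conf.yaml`, assembled only from keys visible in this diff. The quorum address, HDFS paths, and namespace value are the documentation's illustrative examples, not requirements, and `yarn.application-attempts` applies only to the Yarn case:

```yaml
# Minimal ZooKeeper HA fragment for conf/flink-conf.yaml (sketch from this patch).
# Host, paths, and namespace below are example placeholders; adapt per cluster.
high-availability.zookeeper.quorum: localhost:2181
high-availability.zookeeper.path.root: /flink
high-availability.zookeeper.path.namespace: /cluster_one   # important: customize per cluster
# JobManager metadata lives here; ZooKeeper only holds a pointer to it.
high-availability.zookeeper.storageDir: hdfs:///flink/recovery
# Yarn deployments only: allow the ApplicationMaster to be restarted.
yarn.application-attempts: 10
```

Note that the `state.backend` and `state.backend.fs.checkpointdir` lines removed by this patch configure checkpointing, which is independent of HA recovery metadata; that conflation was the misleading part of the old docs.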
