This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e81dcef4522c7d26b2421b7e6c2ca5b097f307c6
Author: Till Rohrmann <[email protected]>
AuthorDate: Wed Nov 25 10:53:30 2020 +0100

    [FLINK-20342][docs] Move memory configuration from ops to deployment
---
 docs/deployment/config.md                          |  4 +-
 docs/deployment/config.zh.md                       |  4 +-
 docs/{ops => deployment}/memory/index.md           |  4 +-
 docs/{ops => deployment}/memory/index.zh.md        |  4 +-
 docs/{ops => deployment}/memory/mem_migration.md   | 58 +++++++++++-----------
 .../{ops => deployment}/memory/mem_migration.zh.md | 52 +++++++++----------
 docs/{ops => deployment}/memory/mem_setup.md       | 22 ++++----
 docs/{ops => deployment}/memory/mem_setup.zh.md    | 16 +++---
 .../memory/mem_setup_jobmanager.md                 | 22 ++++----
 .../memory/mem_setup_jobmanager.zh.md              | 20 ++++----
 docs/{ops => deployment}/memory/mem_setup_tm.md    | 16 +++---
 docs/{ops => deployment}/memory/mem_setup_tm.zh.md | 16 +++---
 docs/{ops => deployment}/memory/mem_trouble.md     | 22 ++++----
 docs/{ops => deployment}/memory/mem_trouble.zh.md  | 16 +++---
 docs/{ops => deployment}/memory/mem_tuning.md      | 24 ++++-----
 docs/{ops => deployment}/memory/mem_tuning.zh.md   | 24 ++++-----
 .../deployment/resource-providers/cluster_setup.md |  2 +-
 .../resource-providers/cluster_setup.zh.md         |  2 +-
 docs/ops/config.md                                 |  4 +-
 docs/ops/config.zh.md                              |  4 +-
 docs/ops/state/state_backends.md                   |  6 +--
 docs/ops/state/state_backends.zh.md                |  6 +--
 docs/release-notes/flink-1.10.md                   |  2 +-
 docs/release-notes/flink-1.10.zh.md                |  2 +-
 docs/release-notes/flink-1.11.md                   | 14 +++---
 docs/release-notes/flink-1.11.zh.md                | 14 +++---
 26 files changed, 190 insertions(+), 190 deletions(-)

diff --git a/docs/deployment/config.md b/docs/deployment/config.md
index 83e1762..94a0351 100644
--- a/docs/deployment/config.md
+++ b/docs/deployment/config.md
@@ -157,8 +157,8 @@ Flink tries to shield users as much as possible from the complexity of configuri
 In most cases, users should only need to set the values `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` (depending on how the setup), and possibly adjusting the ratio of JVM heap and Managed Memory via `taskmanager.memory.managed.fraction`. The other options below can be used for performance tuning and fixing memory related errors.
 
 For a detailed explanation of how these options interact,
-see the documentation on [TaskManager]({% link ops/memory/mem_setup_tm.md %}) and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.md %} ) memory configurations.
+see the documentation on [TaskManager]({% link deployment/memory/mem_setup_tm.md %}) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.md %} ) memory configurations.
 
 {% include generated/common_memory_section.html %}
 
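For readers following the hunk above: the options it discusses are set in `flink-conf.yaml`. A minimal sketch of the setup the changed paragraph describes (values are illustrative, not part of this commit):

```yaml
# Set exactly one of the two total-memory options (illustrative values):
taskmanager.memory.process.size: 1728m   # total process memory (container size)
# taskmanager.memory.flink.size: 1280m   # alternative: total Flink memory only

# Optional: ratio of total Flink memory reserved as managed (off-heap) memory
taskmanager.memory.managed.fraction: 0.4
```

The two total-memory keys are alternatives; the remaining options mentioned in the paragraph are for fine-tuning.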
diff --git a/docs/deployment/config.zh.md b/docs/deployment/config.zh.md
index b1ee265..de0782f 100644
--- a/docs/deployment/config.zh.md
+++ b/docs/deployment/config.zh.md
@@ -157,8 +157,8 @@ Flink tries to shield users as much as possible from the complexity of configuri
 In most cases, users should only need to set the values `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` (depending on how the setup), and possibly adjusting the ratio of JVM heap and Managed Memory via `taskmanager.memory.managed.fraction`. The other options below can be used for performance tuning and fixing memory related errors.
 
 For a detailed explanation of how these options interact,
-see the documentation on [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}) and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %} ) memory configurations.
+see the documentation on [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %} ) memory configurations.
 
 {% include generated/common_memory_section.html %}
 
diff --git a/docs/ops/memory/index.md b/docs/deployment/memory/index.md
similarity index 95%
rename from docs/ops/memory/index.md
rename to docs/deployment/memory/index.md
index 231ad7b..4745b0f 100644
--- a/docs/ops/memory/index.md
+++ b/docs/deployment/memory/index.md
@@ -1,8 +1,8 @@
 ---
 nav-title: 'Memory Configuration'
 nav-id: ops_mem
-nav-parent_id: ops
-nav-pos: 4
+nav-parent_id: deployment
+nav-pos: 10
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/ops/memory/index.zh.md b/docs/deployment/memory/index.zh.md
similarity index 95%
rename from docs/ops/memory/index.zh.md
rename to docs/deployment/memory/index.zh.md
index a96e574..bea415f 100644
--- a/docs/ops/memory/index.zh.md
+++ b/docs/deployment/memory/index.zh.md
@@ -1,8 +1,8 @@
 ---
 nav-title: '内存配置'
 nav-id: ops_mem
-nav-parent_id: ops
-nav-pos: 4
+nav-parent_id: deployment
+nav-pos: 10
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/ops/memory/mem_migration.md b/docs/deployment/memory/mem_migration.md
similarity index 83%
rename from docs/ops/memory/mem_migration.md
rename to docs/deployment/memory/mem_migration.md
index 145e719..69a5d65 100644
--- a/docs/ops/memory/mem_migration.md
+++ b/docs/deployment/memory/mem_migration.md
@@ -22,8 +22,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-The memory setup has changed a lot with the *1.10* release for [TaskManagers]({% link ops/memory/mem_setup_tm.md %}) and with the *1.11*
-release for [JobManagers]({% link ops/memory/mem_setup_jobmanager.md %}). Many configuration options were removed or their semantics changed.
+The memory setup has changed a lot with the *1.10* release for [TaskManagers]({% link deployment/memory/mem_setup_tm.md %}) and with the *1.11*
+release for [JobManagers]({% link deployment/memory/mem_setup_jobmanager.md %}). Many configuration options were removed or their semantics changed.
 This guide will help you to migrate the TaskManager memory configuration from Flink
 [<= *1.9*](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html) to >= *1.10* and
 the JobManager memory configuration from Flink <= *1.10* to >= *1.11*.
@@ -40,7 +40,7 @@ the JobManager memory configuration from Flink <= *1.10* to >= *1.11*.
 
 <span class="label label-info">Note</span> Before version *1.10* for TaskManagers and before *1.11* for JobManagers,
 Flink did not require that memory related options are set at all as they all had default values.
-The [new memory configuration]({% link ops/memory/mem_setup.md %}#configure-total-memory) requires that at least one subset of
+The [new memory configuration]({% link deployment/memory/mem_setup.md %}#configure-total-memory) requires that at least one subset of
 the following options is configured explicitly, otherwise the configuration will fail:
 
 | &nbsp;&nbsp;**for TaskManager:**&nbsp;&nbsp; | &nbsp;&nbsp;**for JobManager:**&nbsp;&nbsp; |
@@ -136,7 +136,7 @@ The following options are deprecated but if they are still used they will be int
 
 Although, the network memory configuration has not changed too much it is recommended to verify its configuration.
 It can change if other memory components have new sizes, e.g. the total memory which the network can be a fraction of.
-See also [new detailed memory model]({% link ops/memory/mem_setup_tm.md %}#detailed-memory-model).
+See also [new detailed memory model]({% link deployment/memory/mem_setup_tm.md %}#detailed-memory-model).
 
 The container cut-off configuration options, `containerized.heap-cutoff-ratio` and `containerized.heap-cutoff-min`,
 have no effect anymore for TaskManagers. See also [how to migrate container cut-off](#container-cut-off-memory).
@@ -155,7 +155,7 @@ they will be directly translated into the following new options:
 
 It is also recommended using these new options instead of the legacy ones as they might be completely removed in the following releases.
 
-See also [how to configure total memory now]({% link ops/memory/mem_setup.md %}#configure-total-memory).
+See also [how to configure total memory now]({% link deployment/memory/mem_setup.md %}#configure-total-memory).
 
 ### JVM Heap Memory
 
@@ -164,21 +164,21 @@ which included any other usages of heap memory. This rest was the remaining part
 see also [how to migrate managed memory](#managed-memory).
 
 Now, if only *total Flink memory* or *total process memory* is configured, then the JVM Heap is the rest of
-what is left after subtracting all other components from the total memory, see also [how to configure total memory]({% link ops/memory/mem_setup.md %}#configure-total-memory).
+what is left after subtracting all other components from the total memory, see also [how to configure total memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory).
 
 Additionally, you can now have more direct control over the JVM Heap assigned to the operator tasks
 ([`taskmanager.memory.task.heap.size`]({% link ops/config.md %}#taskmanager-memory-task-heap-size)),
-see also [Task (Operator) Heap Memory]({% link ops/memory/mem_setup_tm.md %}#task-operator-heap-memory).
+see also [Task (Operator) Heap Memory]({% link deployment/memory/mem_setup_tm.md %}#task-operator-heap-memory).
 The JVM Heap memory is also used by the heap state backends ([MemoryStateBackend]({% link ops/state/state_backends.md %}#the-memorystatebackend)
 or [FsStateBackend]({% link ops/state/state_backends.md %}#the-fsstatebackend)) if it is chosen for streaming jobs.
 
 A part of the JVM Heap is now always reserved for the Flink framework
 ([`taskmanager.memory.framework.heap.size`]({% link ops/config.md %}#taskmanager-memory-framework-heap-size)).
-See also [Framework memory]({% link ops/memory/mem_setup_tm.md %}#framework-memory).
+See also [Framework memory]({% link deployment/memory/mem_setup_tm.md %}#framework-memory).
 
 ### Managed Memory
 
-See also [how to configure managed memory now]({% link ops/memory/mem_setup_tm.md %}#managed-memory).
+See also [how to configure managed memory now]({% link deployment/memory/mem_setup_tm.md %}#managed-memory).
 
 #### Explicit Size
 
@@ -192,15 +192,15 @@ If not set explicitly, the managed memory could be previously specified as a fra
 of the total memory minus network memory and container cut-off (only for [Yarn]({% link deployment/resource-providers/yarn_setup.md %}) and
 [Mesos]({% link deployment/resource-providers/mesos.md %}) deployments). This option has been completely removed and will have no effect if still used.
 Please, use the new option [`taskmanager.memory.managed.fraction`]({% link ops/config.md %}#taskmanager-memory-managed-fraction) instead.
-This new option will set the [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) to the specified fraction of the
-[total Flink memory]({% link ops/memory/mem_setup.md %}#configure-total-memory) if its size is not set explicitly by
+This new option will set the [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) to the specified fraction of the
+[total Flink memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory) if its size is not set explicitly by
 [`taskmanager.memory.managed.size`]({% link ops/config.md %}#taskmanager-memory-managed-size).
 
 #### RocksDB state
 
 If the [RocksDBStateBackend]({% link ops/state/state_backends.md %}#the-rocksdbstatebackend) is chosen for a streaming job,
-its native memory consumption should now be accounted for in [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory).
-The RocksDB memory allocation is limited by the [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) size.
+its native memory consumption should now be accounted for in [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory).
+The RocksDB memory allocation is limited by the [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) size.
 This should prevent the killing of containers on [Yarn]({% link deployment/resource-providers/yarn_setup.md %}) and [Mesos]({% link deployment/resource-providers/mesos.md %}).
 You can disable the RocksDB memory control by setting [state.backend.rocksdb.memory.managed]({% link ops/config.md %}#state-backend-rocksdb-memory-managed)
 to `false`. See also [how to migrate container cut-off](#container-cut-off-memory).
@@ -208,9 +208,9 @@ to `false`. See also [how to migrate container cut-off](#container-cut-off-memor
 #### Other changes
 
 Additionally, the following changes have been made:
-* The [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) is always off-heap now. The configuration option `taskmanager.memory.off-heap` is removed and will have no effect anymore.
-* The [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) now uses native memory which is not direct memory. It means that the managed memory is no longer accounted for in the JVM direct memory limit.
-* The [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) is always lazily allocated now. The configuration option `taskmanager.memory.preallocate` is removed and will have no effect anymore.
+* The [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) is always off-heap now. The configuration option `taskmanager.memory.off-heap` is removed and will have no effect anymore.
+* The [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) now uses native memory which is not direct memory. It means that the managed memory is no longer accounted for in the JVM direct memory limit.
+* The [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) is always lazily allocated now. The configuration option `taskmanager.memory.preallocate` is removed and will have no effect anymore.
 
 ## Migrate Job Manager Memory Configuration
 
@@ -234,10 +234,10 @@ they will be directly translated into the following new options:
 
 It is also recommended using these new options instead of the legacy ones as they might be completely removed in the following releases.
 
-Now, if only the *total Flink memory* or *total process memory* is configured, then the [JVM Heap]({% link ops/memory/mem_setup_jobmanager.md %}#configure-jvm-heap)
+Now, if only the *total Flink memory* or *total process memory* is configured, then the [JVM Heap]({% link deployment/memory/mem_setup_jobmanager.md %}#configure-jvm-heap)
 is also derived as the rest of what is left after subtracting all other components from the total memory, see also
-[how to configure total memory]({% link ops/memory/mem_setup.md %}#configure-total-memory). Additionally, you can now have more direct
-control over the [JVM Heap]({% link ops/memory/mem_setup_jobmanager.md %}#configure-jvm-heap) by adjusting the
+[how to configure total memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory). Additionally, you can now have more direct
+control over the [JVM Heap]({% link deployment/memory/mem_setup_jobmanager.md %}#configure-jvm-heap) by adjusting the
 [`jobmanager.memory.heap.size`]({% link ops/config.md %}#jobmanager-memory-heap-size) option.
 
 ## Flink JVM process memory limits
@@ -246,12 +246,12 @@ Since *1.10* release, Flink sets the *JVM Metaspace* and *JVM Direct Memory* lim
 by adding the corresponding JVM arguments. Since *1.11* release, Flink also sets the *JVM Metaspace* limit for the JobManager process.
 You can enable the *JVM Direct Memory* limit for JobManager process if you set the
 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) option.
-See also [JVM parameters]({% link ops/memory/mem_setup.md %}#jvm-parameters).
+See also [JVM parameters]({% link deployment/memory/mem_setup.md %}#jvm-parameters).
 
 Flink sets the mentioned JVM memory limits to simplify debugging of the corresponding memory leaks and avoid
-[the container out-of-memory errors]({% link ops/memory/mem_trouble.md %}#container-memory-exceeded).
-See also the troubleshooting guide for details about the [JVM Metaspace]({% link ops/memory/mem_trouble.md %}#outofmemoryerror-metaspace)
-and [JVM Direct Memory]({% link ops/memory/mem_trouble.md %}#outofmemoryerror-direct-buffer-memory) *OutOfMemoryErrors*.
+[the container out-of-memory errors]({% link deployment/memory/mem_trouble.md %}#container-memory-exceeded).
+See also the troubleshooting guide for details about the [JVM Metaspace]({% link deployment/memory/mem_trouble.md %}#outofmemoryerror-metaspace)
+and [JVM Direct Memory]({% link deployment/memory/mem_trouble.md %}#outofmemoryerror-direct-buffer-memory) *OutOfMemoryErrors*.
 
 ## Container Cut-Off Memory
 
@@ -263,22 +263,22 @@ will have no effect anymore. The new memory model introduced more specific memor
 ### for TaskManagers
 
 In streaming jobs which use [RocksDBStateBackend]({% link ops/state/state_backends.md %}#the-rocksdbstatebackend), the RocksDB
-native memory consumption should be accounted for as a part of the [managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory) now.
-The RocksDB memory allocation is also limited by the configured size of the [managed memory]({% link ops/memory/mem_setup.md %}#managed-memory).
-See also [migrating managed memory](#managed-memory) and [how to configure managed memory now]({% link ops/memory/mem_setup_tm.md %}#managed-memory).
+native memory consumption should be accounted for as a part of the [managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory) now.
+The RocksDB memory allocation is also limited by the configured size of the [managed memory]({% link deployment/memory/mem_setup.md %}#managed-memory).
+See also [migrating managed memory](#managed-memory) and [how to configure managed memory now]({% link deployment/memory/mem_setup_tm.md %}#managed-memory).
 
 The other direct or native off-heap memory consumers can now be addressed by the following new configuration options:
 * Task off-heap memory ([`taskmanager.memory.task.off-heap.size`]({% link ops/config.md %}#taskmanager-memory-task-off-heap-size))
 * Framework off-heap memory ([`taskmanager.memory.framework.off-heap.size`]({% link ops/config.md %}#taskmanager-memory-framework-off-heap-size))
 * JVM metaspace ([`taskmanager.memory.jvm-metaspace.size`]({% link ops/config.md %}#taskmanager-memory-jvm-metaspace-size))
-* [JVM overhead]({% link ops/memory/mem_setup_tm.md %}#detailed-memory-model)
+* [JVM overhead]({% link deployment/memory/mem_setup_tm.md %}#detailed-memory-model)
 
 ### for JobManagers
 
 The direct or native off-heap memory consumers can now be addressed by the following new configuration options:
 * Off-heap memory ([`jobmanager.memory.off-heap.size`]({% link ops/config.md %}#jobmanager-memory-off-heap-size))
 * JVM metaspace ([`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.md %}#jobmanager-memory-jvm-metaspace-size))
-* [JVM overhead]({% link ops/memory/mem_setup_jobmanager.md %}#detailed-configuration)
+* [JVM overhead]({% link deployment/memory/mem_setup_jobmanager.md %}#detailed-configuration)
 
 ## Default Configuration in flink-conf.yaml
 
@@ -290,7 +290,7 @@ in the default `flink-conf.yaml`. The value increased from 1024Mb to 1728Mb.
 The total memory for JobManagers (`jobmanager.heap.size`) is replaced by [`jobmanager.memory.process.size`]({% link ops/config.md %}#jobmanager-memory-process-size)
 in the default `flink-conf.yaml`. The value increased from 1024Mb to 1600Mb.
 
-See also [how to configure total memory now]({% link ops/memory/mem_setup.md %}#configure-total-memory).
+See also [how to configure total memory now]({% link deployment/memory/mem_setup.md %}#configure-total-memory).
 
 <div class="alert alert-warning">
   <strong>Warning:</strong> If you use the new default `flink-conf.yaml` it can result in different sizes of memory components and can lead to performance changes.
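In practice, the migration the `mem_migration.md` hunks above describe amounts to swapping the legacy total-memory keys in `flink-conf.yaml` for the new ones. A hedged before/after sketch (the sizes are the defaults quoted in the text, not prescriptive):

```yaml
# Flink <= 1.9 (TaskManager) / <= 1.10 (JobManager): legacy keys
# taskmanager.heap.size: 1024m
# jobmanager.heap.size: 1024m

# Flink >= 1.10 / >= 1.11: new keys and updated defaults from flink-conf.yaml
taskmanager.memory.process.size: 1728m
jobmanager.memory.process.size: 1600m
```

As the guide warns, keeping the legacy keys still works (they are translated to the new options) but they may be removed in later releases.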
diff --git a/docs/ops/memory/mem_migration.zh.md b/docs/deployment/memory/mem_migration.zh.md
similarity index 82%
rename from docs/ops/memory/mem_migration.zh.md
rename to docs/deployment/memory/mem_migration.zh.md
index ab69b73..5781cc3 100644
--- a/docs/ops/memory/mem_migration.zh.md
+++ b/docs/deployment/memory/mem_migration.zh.md
@@ -22,7 +22,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-在 *1.10* 和 *1.11* 版本中,Flink 分别对 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}) 的内存配置方法做出了较大的改变。
+在 *1.10* 和 *1.11* 版本中,Flink 分别对 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}) 的内存配置方法做出了较大的改变。
 部分配置参数被移除了,或是语义上发生了变化。
 本篇升级指南将介绍如何将 [*Flink 1.9 及以前版本*](https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/mem_setup.html)的 TaskManager 内存配置升级到 *Flink 1.10 及以后版本*,
 以及如何将 *Flink 1.10 及以前版本*的 JobManager 内存配置升级到 *Flink 1.11 及以后版本*。
@@ -38,7 +38,7 @@ under the License.
 
 <span class="label label-info">提示</span>
 在 *1.10/1.11* 版本之前,Flink 不要求用户一定要配置 TaskManager/JobManager 内存相关的参数,因为这些参数都具有默认值。
-[新的内存配置]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)要求用户至少指定下列配置参数(或参数组合)的其中之一,否则 Flink 将无法启动。
+[新的内存配置]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)要求用户至少指定下列配置参数(或参数组合)的其中之一,否则 Flink 将无法启动。
 
 | &nbsp;&nbsp;**TaskManager:**&nbsp;&nbsp; | &nbsp;&nbsp;**JobManager:**&nbsp;&nbsp; |
 | :--------------------------------------- | :-------------------------------------- |
@@ -132,7 +132,7 @@ Flink 自带的[默认 flink-conf.yaml](#default-configuration-in-flink-confyaml
 
 尽管网络内存的配置参数没有发生太多变化,我们仍建议您检查其配置结果。
 网络内存的大小可能会受到其他内存部分大小变化的影响,例如总内存变化时,根据占比计算出的网络内存也可能发生变化。
-请参考[内存模型详解]({% link ops/memory/mem_setup_tm.zh.md %}#detailed-memory-model)。
+请参考[内存模型详解]({% link deployment/memory/mem_setup_tm.zh.md %}#detailed-memory-model)。
 
 容器切除(Cut-Off)内存相关的配置参数(`containerized.heap-cutoff-ratio` 和 `containerized.heap-cutoff-min`)将不再对 TaskManager 进程生效。
 请参考[如何升级容器切除内存](#container-cut-off-memory)。
@@ -153,7 +153,7 @@ Flink 在 Mesos 上还有另一个具有同样语义的配置参数 `mesos.resou
 
 建议您尽早使用新的配置参数取代启用的配置参数,它们在今后的版本中可能会被彻底移除。
 
-请参考[如何配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory).
+请参考[如何配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory).
 
 <a name="jvm-heap-memory" />
 
@@ -164,20 +164,20 @@ Flink 在 Mesos 上还有另一个具有同样语义的配置参数 `mesos.resou
 请参考[如何升级托管内存](#managed-memory)。
 
 现在,如果仅配置了*Flink总内存*或*进程总内存*,JVM 的堆空间依然是根据总内存减去所有其他非堆内存得到的。
-请参考[如何配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)。
+请参考[如何配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)。
 
+此外,你现在可以更直接地控制用于任务和算子的 JVM 的堆内存([`taskmanager.memory.task.heap.size`]({% link ops/config.zh.md %}#taskmanager-memory-task-heap-size)),详见[任务堆内存]({% link deployment/memory/mem_setup_tm.zh.md %}#task-operator-heap-memory)。
 如果流处理作业选择使用 Heap State Backend([MemoryStateBackend]({% link ops/state/state_backends.zh.md %}#memorystatebackend)
 或 [FsStateBackend]({% link ops/state/state_backends.zh.md %}#fsstatebackend)),那么它同样需要使用 JVM 堆内存。
 
 Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanager.memory.framework.heap.size`]({% link ops/config.zh.md %}#taskmanager-memory-framework-heap-size))。
-请参考[框架内存]({% link ops/memory/mem_setup_tm.zh.md %}#framework-memory)。
+请参考[框架内存]({% link deployment/memory/mem_setup_tm.zh.md %}#framework-memory)。
 
 <a name="managed-memory" />
 
 ### 托管内存
 
-请参考[如何配置托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)。
+请参考[如何配置托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
 
 <a name="explicit-size" />
 
@@ -194,14 +194,14 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
 [Mesos]({% link deployment/resource-providers/mesos.zh.md %}) 上)之后剩余部分的固定比例(`taskmanager.memory.fraction`)。
 该配置参数已经被彻底移除,配置它不会产生任何效果。
 请使用新的配置参数 [`taskmanager.memory.managed.fraction`]({% link ops/config.zh.md %}#taskmanager-memory-managed-fraction)。
-在未通过 [`taskmanager.memory.managed.size`]({% link ops/config.zh.md %}#taskmanager-memory-managed-size) 指定明确大小的情况下,新的配置参数将指定[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)在 [Flink 总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)中的所占比例。
+在未通过 [`taskmanager.memory.managed.size`]({% link ops/config.zh.md %}#taskmanager-memory-managed-size) 指定明确大小的情况下,新的配置参数将指定[托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)在 [Flink 总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)中的所占比例。
 
 <a name="rocksdb-state" />
 
 #### RocksDB State Backend
 
-流处理作业如果选择使用 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md %}#rocksdbstatebackend),它使用的本地内存现在也被归为[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)。
-默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)大小,以避免在 [Yarn]({% link deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link deployment/resource-providers/mesos.zh.md %}) 上容器被杀。你也可以通过设置 [state.backend.rocksdb.memory.managed]({% link ops/config.zh.md %}#state-backend-rocksdb-memory-managed) 来关闭 RocksDB 的内存控制。
+流处理作业如果选择使用 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md %}#rocksdbstatebackend),它使用的本地内存现在也被归为[托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
+默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)大小,以避免在 [Yarn]({% link deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link deployment/resource-providers/mesos.zh.md %}) 上容器被杀。你也可以通过设置 [state.backend.rocksdb.memory.managed]({% link ops/config.zh.md %}#state-backend-rocksdb-memory-managed) 来关闭 RocksDB 的内存控制。
 请参考[如何升级容器切除内存](#container-cut-off-memory)。
 
 <a name="other-changes" />
@@ -209,9 +209,9 @@ Flink 现在总是会预留一部分 JVM 堆内存供框架使用([`taskmanage
 #### 其他变化
 
 此外,Flink 1.10 对托管内存还引入了下列变化:
-* [托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)现在总是在堆外。配置参数 `taskmanager.memory.off-heap` 已被彻底移除,配置它不会产生任何效果。
-* [托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)现在使用本地内存而非直接内存。这意味着托管内存将不在 JVM 直接内存限制的范围内。
-* [托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)现在总是惰性分配的。配置参数 `taskmanager.memory.preallocate` 已被彻底移除,配置它不会产生任何效果。
+* [托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)现在总是在堆外。配置参数 `taskmanager.memory.off-heap` 已被彻底移除,配置它不会产生任何效果。
+* [托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)现在使用本地内存而非直接内存。这意味着托管内存将不在 JVM 直接内存限制的范围内。
+* [托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)现在总是惰性分配的。配置参数 `taskmanager.memory.preallocate` 已被彻底移除,配置它不会产生任何效果。
 
 <a name="migrate-job-manager-memory-configuration" />
 
@@ -237,9 +237,9 @@ Flink 在 Mesos 上启动 JobManager 进程时并未设置任何 JVM 内存参
 
 建议您尽早使用新的配置参数取代启用的配置参数,它们在今后的版本中可能会被彻底移除。
 
-如果仅配置了 *Flink 总内存*或*进程总内存*,那么 [JVM 堆内存]({% link ops/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap)将是总内存减去其他内存部分后剩余的部分。
-请参考[如何配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)。
-此外,也可以通过配置 [`jobmanager.memory.heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-heap-size) 的方式直接指定 [JVM 堆内存]({% link ops/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap)。
+如果仅配置了 *Flink 总内存*或*进程总内存*,那么 [JVM 堆内存]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap)将是总内存减去其他内存部分后剩余的部分。
+请参考[如何配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)。
+此外,也可以通过配置 [`jobmanager.memory.heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-heap-size) 的方式直接指定 [JVM 堆内存]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap)。
 
 <a name="flink-jvm-process-memory-limits" />
 
@@ -248,10 +248,10 @@ Flink 在 Mesos 上启动 JobManager 进程时并未设置任何 JVM 内存参
 从 *1.10* 版本开始,Flink 通过设置相应的 JVM 参数,对 TaskManager 进程使用的 *JVM Metaspace* 和 *JVM 直接内存*进行限制。
 从 *1.11* 版本开始,Flink 同样对 JobManager 进程使用的 *JVM Metaspace* 进行限制。
 此外,还可以通过设置 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.zh.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) 对 JobManager 进程的 *JVM 直接内存*进行限制。
-请参考 [JVM 参数]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters)。
+请参考 [JVM 参数]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters)。
 
-Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度,以避免出现[容器内存溢出]({% link ops/memory/mem_trouble.zh.md %}#container-memory-exceeded)等问题。
-请参考常见问题中关于 [JVM Metaspace]({% link ops/memory/mem_trouble.zh.md %}#outofmemoryerror-metaspace) 和 [JVM 直接内存]({% link ops/memory/mem_trouble.zh.md %}#outofmemoryerror-direct-buffer-memory) *OutOfMemoryError* 异常的描述。
+Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度,以避免出现[容器内存溢出]({% link deployment/memory/mem_trouble.zh.md %}#container-memory-exceeded)等问题。
+请参考常见问题中关于 [JVM Metaspace]({% link deployment/memory/mem_trouble.zh.md %}#outofmemoryerror-metaspace) 和 [JVM 直接内存]({% link deployment/memory/mem_trouble.zh.md %}#outofmemoryerror-direct-buffer-memory) *OutOfMemoryError* 异常的描述。
 
 <a name="container-cut-off-memory" />
 
@@ -267,15 +267,15 @@ Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度
 
 ### TaskManager
 
-流处理作业如果使用了 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md %}#the-rocksdbstatebackend),RocksDB 使用的本地内存现在将被归为[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)。
-默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)大小。
-请同时参考[如何升级托管内存](#managed-memory)以及[如何配置托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)。
+流处理作业如果使用了 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md %}#the-rocksdbstatebackend),RocksDB 使用的本地内存现在将被归为[托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
+默认情况下,RocksDB 将限制其内存用量不超过[托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)大小。
+请同时参考[如何升级托管内存](#managed-memory)以及[如何配置托管内存]({% link deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
 
 其他堆外(直接或本地)内存开销,现在可以通过下列配置参数进行设置:
 * 任务堆外内存([`taskmanager.memory.task.off-heap.size`]({% link ops/config.zh.md %}#taskmanager-memory-task-off-heap-size))
 * 框架堆外内存([`taskmanager.memory.framework.off-heap.size`]({% link ops/config.zh.md %}#taskmanager-memory-framework-off-heap-size))
 * JVM Metaspace([`taskmanager.memory.jvm-metaspace.size`]({% link ops/config.zh.md %}#taskmanager-memory-jvm-metaspace-size))
-* [JVM 开销]({% link ops/memory/mem_setup_tm.zh.md %}#detailed-memory-model)
+* [JVM 开销]({% link deployment/memory/mem_setup_tm.zh.md %}#detailed-memory-model)
 
 <a name="for-jobmanagers" />
 
@@ -284,7 +284,7 @@ Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度
 可以通过下列配置参数设置堆外(直接或本地)内存开销:
 * 堆外内存 ([`jobmanager.memory.off-heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-off-heap-size))
 * JVM Metaspace ([`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-metaspace-size))
-* [JVM 开销]({% link ops/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration)
+* [JVM 开销]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration)
 
 <a name="default-configuration-in-flink-confyaml" />
 
@@ -298,7 +298,7 @@ Flink 通过设置上述 JVM 内存限制降低内存泄漏问题的排查难度
 原本的 JobManager 总内存(`jobmanager.heap.size`)被新的配置项 [`jobmanager.memory.process.size`]({% link ops/config.zh.md %}#taskmanager-memory-process-size) 所取代。
 默认值从 1024Mb 增加到了 1600Mb。
 
-请参考[如何配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)。
+请参考[如何配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)。
 
 <div class="alert alert-warning">
   <strong>注意:</strong> 使用新的默认 `flink-conf.yaml` 可能会造成各内存部分的大小发生变化,从而产生性能变化。
diff --git a/docs/ops/memory/mem_setup.md b/docs/deployment/memory/mem_setup.md
similarity index 88%
rename from docs/ops/memory/mem_setup.md
rename to docs/deployment/memory/mem_setup.md
index a4b9f13..aee462e 100644
--- a/docs/ops/memory/mem_setup.md
+++ b/docs/deployment/memory/mem_setup.md
@@ -31,7 +31,7 @@ Flink allows both high level and fine-grained tuning of memory allocation within
 {:toc}
 
 The further described memory configuration is applicable starting with the release version *1.10* for TaskManager and
-*1.11* for JobManager processes. If you upgrade Flink from earlier versions, check the [migration guide]({% link ops/memory/mem_migration.md %})
+*1.11* for JobManager processes. If you upgrade Flink from earlier versions, check the [migration guide]({% link deployment/memory/mem_migration.md %})
 because many changes were introduced with the *1.10* and *1.11* releases.
 
 ## Configure Total Memory
@@ -54,24 +54,24 @@ The simplest way to setup memory in Flink is to configure either of the two foll
 {:.table-bordered}
 <br/>
 
-<span class="label label-info">Note</span> For local execution, see detailed information for [TaskManager]({% link ops/memory/mem_setup_tm.md %}#local-execution) and [JobManager]({% link ops/memory/mem_setup_jobmanager.md %}#local-execution) processes.
+<span class="label label-info">Note</span> For local execution, see detailed information for [TaskManager]({% link deployment/memory/mem_setup_tm.md %}#local-execution) and [JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}#local-execution) processes.
 
 The rest of the memory components will be adjusted automatically, based on default values or additionally configured options.
-See also how to set up other components for [TaskManager]({% link ops/memory/mem_setup_tm.md %}) and [JobManager]({% link ops/memory/mem_setup_jobmanager.md %}) memory.
+See also how to set up other components for [TaskManager]({% link deployment/memory/mem_setup_tm.md %}) and [JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}) memory.
 
 Configuring *total Flink memory* is better suited for [standalone deployments]({% link deployment/resource-providers/cluster_setup.md %})
 where you want to declare how much memory is given to Flink itself. The *total Flink memory* splits up into *JVM Heap*
 and *Off-heap* memory.
-See also [how to configure memory for standalone deployments]({% link ops/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment).
+See also [how to configure memory for standalone deployments]({% link deployment/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment).
 
 If you configure *total process memory* you declare how much memory in total should be assigned to the Flink *JVM process*.
 For the containerized deployments it corresponds to the size of the requested container, see also
-[how to configure memory for containers]({% link ops/memory/mem_tuning.md %}#configure-memory-for-containers)
+[how to configure memory for containers]({% link deployment/memory/mem_tuning.md %}#configure-memory-for-containers)
 ([Kubernetes]({% link deployment/resource-providers/kubernetes.md %}), [Yarn]({% link deployment/resource-providers/yarn_setup.md %}) or [Mesos]({% link deployment/resource-providers/mesos.md %})).
 
 Another way to set up the memory is to configure the required internal components of the *total Flink memory* which are
-specific to the concrete Flink process. Check how to configure them for [TaskManager]({% link ops/memory/mem_setup_tm.md %}#configure-heap-and-managed-memory)
-and for [JobManager]({% link ops/memory/mem_setup_jobmanager.md %}#configure-jvm-heap).
+specific to the concrete Flink process. Check how to configure them for [TaskManager]({% link deployment/memory/mem_setup_tm.md %}#configure-heap-and-managed-memory)
+and for [JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}#configure-jvm-heap).
 
 <span class="label label-info">Note</span> One of the three mentioned ways has to be used to configure Flink’s memory
 (except for local execution), or the Flink startup will fail. This means that one of the following option subsets,
@@ -109,8 +109,8 @@ This will lead to a different maximum being returned by the [Heap metrics]({% li
 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) is set.
 <br/><br/>
 
-Check also the detailed memory model for [TaskManager]({% link ops/memory/mem_setup_tm.md %}#detailed-memory-model) and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.md %}#detailed-configuration) to understand how to configure the relevant components.
+Check also the detailed memory model for [TaskManager]({% link deployment/memory/mem_setup_tm.md %}#detailed-memory-model) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}#detailed-configuration) to understand how to configure the relevant components.
 
 ## Capped Fractionated Components
 
@@ -119,8 +119,8 @@ This section describes the configuration details of options which can be a fract
 * *JVM Overhead* can be a fraction of the *total process memory*
 * *Network memory* can be a fraction of the *total Flink memory* (only for TaskManager)
 
-Check also the detailed memory model for [TaskManager]({% link ops/memory/mem_setup_tm.md %}#detailed-memory-model) and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.md %}#detailed-configuration) to understand how to configure the relevant components.
+Check also the detailed memory model for [TaskManager]({% link deployment/memory/mem_setup_tm.md %}#detailed-memory-model) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}#detailed-configuration) to understand how to configure the relevant components.
 
 The size of those components always has to be between its maximum and minimum value, otherwise Flink startup will fail.
 The maximum and minimum values have defaults or can be explicitly set by corresponding configuration options.
diff --git a/docs/ops/memory/mem_setup.zh.md b/docs/deployment/memory/mem_setup.zh.md
similarity index 87%
rename from docs/ops/memory/mem_setup.zh.md
rename to docs/deployment/memory/mem_setup.zh.md
index 2647814..eb0e73f 100644
--- a/docs/ops/memory/mem_setup.zh.md
+++ b/docs/deployment/memory/mem_setup.zh.md
@@ -30,7 +30,7 @@ Apache Flink 基于 JVM 的高效处理能力,依赖于其对各组件内存
 {:toc}
 
 本文接下来介绍的内存配置方法适用于 *1.10* 及以上版本的 TaskManager 进程和 *1.11* 及以上版本的 JobManager 进程。
-Flink 在 *1.10* 和 *1.11* 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link ops/memory/mem_migration.zh.md %})。
+Flink 在 *1.10* 和 *1.11* 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link deployment/memory/mem_migration.zh.md %})。
 
 <a name="configure-total-memory" />
 
@@ -55,21 +55,21 @@ Flink JVM 进程的*进程总内存(Total Process Memory)*包含了由 Flink
 <br/>
 
 <span class="label label-info">提示</span>
-关于本地执行,请分别参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}#local-execution) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}#local-execution) 的相关文档。
+关于本地执行,请分别参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}#local-execution) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#local-execution) 的相关文档。
 
 Flink 会根据默认值或其他配置参数自动调整剩余内存部分的大小。
-关于各内存部分的更多细节,请分别参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}) 的相关文档。
+关于各内存部分的更多细节,请分别参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}) 的相关文档。
 
 对于[独立部署模式(Standalone Deployment)]({% link deployment/resource-providers/cluster_setup.zh.md %}),如果你希望指定由 Flink 应用本身使用的内存大小,最好选择配置 *Flink 总内存*。
 *Flink 总内存*会进一步划分为 *JVM 堆内存*和*堆外内存*。
-更多详情请参考[如何为独立部署模式配置内存]({% link ops/memory/mem_tuning.zh.md %}#configure-memory-for-standalone-deployment)。
+更多详情请参考[如何为独立部署模式配置内存]({% link deployment/memory/mem_tuning.zh.md %}#configure-memory-for-standalone-deployment)。
 
 通过配置*进程总内存*可以指定由 Flink *JVM 进程*使用的总内存大小。
-对于容器化部署模式(Containerized Deployment),这相当于申请的容器(Container)大小,详情请参考[如何配置容器内存]({% link ops/memory/mem_tuning.zh.md %}#configure-memory-for-containers)([Kubernetes]({% link deployment/resource-providers/kubernetes.zh.md %})、[Yarn]({% link deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link deployment/resource-providers/mesos.zh.md %}))。
+对于容器化部署模式(Containerized Deployment),这相当于申请的容器(Container)大小,详情请参考[如何配置容器内存]({% link deployment/memory/mem_tuning.zh.md %}#configure-memory-for-containers)([Kubernetes]({% link deployment/resource-providers/kubernetes.zh.md %})、[Yarn]({% link deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link deployment/resource-providers/mesos.zh.md %}))。
 
 此外,还可以通过设置 *Flink 总内存*的特定内部组成部分的方式来进行内存配置。
 不同进程需要设置的内存组成部分是不一样的。
-详情请分别参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}#configure-heap-and-managed-memory) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap) 的相关文档。
+详情请分别参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}#configure-heap-and-managed-memory) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap) 的相关文档。
 
 <span class="label label-info">提示</span>
 以上三种方式中,用户需要至少选择其中一种进行配置(本地运行除外),否则 Flink 将无法启动。
@@ -107,7 +107,7 @@ Flink 进程启动时,会根据配置的和自动推导出的各内存部分
 (\*\*\*) 只有在 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.zh.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) 设置为 `true` 时,JobManager 才会设置 *JVM 直接内存限制*。
 <br/><br/>
 
-相关内存部分的配置方法,请同时参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}#detailed-memory-model) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration) 的详细内存模型。
+相关内存部分的配置方法,请同时参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}#detailed-memory-model) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration) 的详细内存模型。
 
 <a name="capped-fractionated-components" />
 
@@ -117,7 +117,7 @@ Flink 进程启动时,会根据配置的和自动推导出的各内存部分
 * *JVM 开销*:可以配置占用*进程总内存*的固定比例
 * *网络内存*:可以配置占用 *Flink 总内存*的固定比例(仅针对 TaskManager)
 
-相关内存部分的配置方法,请同时参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}#detailed-memory-model) 和 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration) 的详细内存模型。
+相关内存部分的配置方法,请同时参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}#detailed-memory-model) 和 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}#detailed-configuration) 的详细内存模型。
 
 这些内存部分的大小必须在相应的最大值、最小值范围内,否则 Flink 将无法启动。
 最大值、最小值具有默认值,也可以通过相应的配置参数进行设置。
diff --git a/docs/ops/memory/mem_setup_jobmanager.md b/docs/deployment/memory/mem_setup_jobmanager.md
similarity index 83%
rename from docs/ops/memory/mem_setup_jobmanager.md
rename to docs/deployment/memory/mem_setup_jobmanager.md
index 323d577..67e409e 100644
--- a/docs/ops/memory/mem_setup_jobmanager.md
+++ b/docs/deployment/memory/mem_setup_jobmanager.md
@@ -30,14 +30,14 @@ This guide walks you through high level and fine-grained memory configurations f
 {:toc}
 
 The further described memory configuration is applicable starting with the release version *1.11*. If you upgrade Flink
-from earlier versions, check the [migration guide]({% link ops/memory/mem_migration.md %}) because many changes were introduced with the *1.11* release.
+from earlier versions, check the [migration guide]({% link deployment/memory/mem_migration.md %}) because many changes were introduced with the *1.11* release.
 
 <span class="label label-info">Note</span> This memory setup guide is relevant <strong>only for the JobManager</strong>!
-The JobManager memory components have a similar but simpler structure compared to the [TaskManagers' memory configuration]({% link ops/memory/mem_setup_tm.md %}).
+The JobManager memory components have a similar but simpler structure compared to the [TaskManagers' memory configuration]({% link deployment/memory/mem_setup_tm.md %}).
 
 ## Configure Total Memory
 
-The simplest way to set up the memory configuration is to configure the [total memory]({% link ops/memory/mem_setup.md %}#configure-total-memory) for the process.
+The simplest way to set up the memory configuration is to configure the [total memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory) for the process.
 If you run the JobManager process using local [execution mode](#local-execution) you do not need to configure memory options, they will have no effect.
 
 ## Detailed configuration
@@ -54,14 +54,14 @@ affect the size of the respective components:
 | :--------------- | :--------------- | :--------------- [...]
 | [JVM Heap](#configure-jvm-heap)                                | [`jobmanager.memory.heap.size`]({% link ops/config.md %}#jobmanager-memory-heap-size) | *JVM Heap* memory size for job manager. [...]
 | [Off-heap Memory](#configure-off-heap-memory)                  | [`jobmanager.memory.off-heap.size`]({% link ops/config.md %}#jobmanager-memory-off-heap-size) | *Off-heap* memory size for job manager. This option covers all off-heap memory usage including direct and native memory a [...]
-| [JVM metaspace]({% link ops/memory/mem_setup.md %}#jvm-parameters) | [`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.md %}#jobmanager-memory-jvm-metaspace-size) | Metaspace size of the Flink JVM process [...]
-| JVM Overhead                                                   | [`jobmanager.memory.jvm-overhead.min`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-min) <br/> [`jobmanager.memory.jvm-overhead.max`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-max) <br/> [`jobmanager.memory.jvm-overhead.fraction`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-fraction) | Native memory reserved for other JVM overhead: e.g. thread stacks, code cache, garbage collection spa [...]
+| [JVM metaspace]({% link deployment/memory/mem_setup.md %}#jvm-parameters) | [`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.md %}#jobmanager-memory-jvm-metaspace-size) | Metaspace size of the Flink JVM process [...]
+| JVM Overhead                                                   | [`jobmanager.memory.jvm-overhead.min`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-min) <br/> [`jobmanager.memory.jvm-overhead.max`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-max) <br/> [`jobmanager.memory.jvm-overhead.fraction`]({% link ops/config.md %}#jobmanager-memory-jvm-overhead-fraction) | Native memory reserved for other JVM overhead: e.g. thread stacks, code cache, garbage collection spa [...]
 {:.table-bordered}
 <br/>
 
 ### Configure JVM Heap
 
-As mentioned before in the [total memory description]({% link ops/memory/mem_setup.md %}#configure-total-memory), another way to set up the memory
+As mentioned before in the [total memory description]({% link deployment/memory/mem_setup.md %}#configure-total-memory), another way to set up the memory
 for the JobManager is to specify explicitly the *JVM Heap* size ([`jobmanager.memory.heap.size`]({% link ops/config.md %}#jobmanager-memory-heap-size)).
 It gives more control over the available *JVM Heap* which is used by:
 
@@ -74,27 +74,27 @@ the mentioned user code.
 <span class="label label-info">Note</span> If you have configured the *JVM Heap* explicitly, it is recommended to set
 neither *total process memory* nor *total Flink memory*. Otherwise, it may easily lead to memory configuration conflicts.
 
-The Flink scripts and CLI set the *JVM Heap* size via the JVM parameters *-Xms* and *-Xmx* when they start the JobManager process, see also [JVM parameters]({% link ops/memory/mem_setup.md %}#jvm-parameters).
+The Flink scripts and CLI set the *JVM Heap* size via the JVM parameters *-Xms* and *-Xmx* when they start the JobManager process, see also [JVM parameters]({% link deployment/memory/mem_setup.md %}#jvm-parameters).
 
 ### Configure Off-heap Memory
 
 The *Off-heap* memory component accounts for any type of *JVM direct memory* and *native memory* usage. Therefore,
 you can also enable the *JVM Direct Memory* limit by setting the [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) option.
 If this option is configured, Flink will set the limit to the *Off-heap* memory size via the corresponding JVM argument: *-XX:MaxDirectMemorySize*.
-See also [JVM parameters]({% link ops/memory/mem_setup.md %}#jvm-parameters).
+See also [JVM parameters]({% link deployment/memory/mem_setup.md %}#jvm-parameters).
 
 The size of this component can be configured by [`jobmanager.memory.off-heap.size`]({% link ops/config.md %}#jobmanager-memory-off-heap-size)
 option. This option can be tuned e.g. if the JobManager process throws ‘OutOfMemoryError: Direct buffer memory’, see
-[the troubleshooting guide]({% link ops/memory/mem_trouble.md %}#outofmemoryerror-direct-buffer-memory) for more information.
+[the troubleshooting guide]({% link deployment/memory/mem_trouble.md %}#outofmemoryerror-direct-buffer-memory) for more information.
 
 There can be the following possible sources of *Off-heap* memory consumption:
 
 * Flink framework dependencies (e.g. Akka network communication)
 * User code executed during job submission (e.g. for certain batch sources) or in checkpoint completion callbacks
 
-<span class="label label-info">Note</span> If you have configured the [Total Flink Memory]({% link ops/memory/mem_setup.md %}#configure-total-memory)
+<span class="label label-info">Note</span> If you have configured the [Total Flink Memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory)
 and the [JVM Heap](#configure-jvm-heap) explicitly but you have not configured the *Off-heap* memory, the size of the *Off-heap* memory
-will be derived as the [Total Flink Memory]({% link ops/memory/mem_setup.md %}#configure-total-memory) minus the [JVM Heap](#configure-jvm-heap).
+will be derived as the [Total Flink Memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory) minus the [JVM Heap](#configure-jvm-heap).
 The default value of the *Off-heap* memory option will be ignored.
 
 ## Local Execution
diff --git a/docs/ops/memory/mem_setup_jobmanager.zh.md b/docs/deployment/memory/mem_setup_jobmanager.zh.md
similarity index 80%
rename from docs/ops/memory/mem_setup_jobmanager.zh.md
rename to docs/deployment/memory/mem_setup_jobmanager.zh.md
index 6cff7a5..e0e86e1 100644
--- a/docs/ops/memory/mem_setup_jobmanager.zh.md
+++ b/docs/deployment/memory/mem_setup_jobmanager.zh.md
@@ -30,17 +30,17 @@ JobManager 是 Flink 集群的控制单元。
 {:toc}
 
 本文接下来介绍的内存配置方法适用于 *1.11* 及以上版本。
-Flink 在 *1.11* 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link ops/memory/mem_migration.zh.md %})。
+Flink 在 *1.11* 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link deployment/memory/mem_migration.zh.md %})。
 
 <span class="label label-info">提示</span>
 本篇内存配置文档<strong>仅针对 JobManager</strong>!
-与 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md %}) 相比,JobManager 具有相似但更加简单的内存模型。
+与 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md %}) 相比,JobManager 具有相似但更加简单的内存模型。
 
 <a name="configure-total-memory" />
 
 ## 配置总内存
 
-配置 JobManager 内存最简单的方法就是进程的[配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)。
+配置 JobManager 内存最简单的方法就是进程的[配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)。
 [本地执行模式](#local-execution)下不需要为 JobManager 进行内存配置,配置参数将不会生效。
 
 <a name="detailed-configuration" />
@@ -58,8 +58,8 @@ Flink 在 *1.11* 版本中对内存配置部分进行了较大幅度的改动,
 | :--------------- | :--------------- | :--------------- [...]
 | [JVM 堆内存](#configure-jvm-heap)                                | [`jobmanager.memory.heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-heap-size) | JobManager 的 *JVM 堆内存*。 [...]
 | [堆外内存](#configure-off-heap-memory)                  | [`jobmanager.memory.off-heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-off-heap-size) | JobManager 的*堆外内存(直接内存或本地内存)*。 [...]
-| [JVM Metaspace]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters) | [`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-metaspace-size) | Flink JVM 进程的 Metaspace。 [...]
-| JVM 开销                                                   | [`jobmanager.memory.jvm-overhead.min`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-min) <br/> [`jobmanager.memory.jvm-overhead.max`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-max) <br/> [`jobmanager.memory.jvm-overhead.fraction`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-fraction) | 用于其他 JVM 开销的本地内存,例如栈空间、垃圾回收空间等。该内存部分为基于[进程总内存]({% link ops/memory/mem_setup.zh.md %}#configure-tot [...]
+| [JVM Metaspace]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters) | [`jobmanager.memory.jvm-metaspace.size`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-metaspace-size) | Flink JVM 进程的 Metaspace。 [...]
+| JVM 开销                                                   | [`jobmanager.memory.jvm-overhead.min`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-min) <br/> [`jobmanager.memory.jvm-overhead.max`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-max) <br/> [`jobmanager.memory.jvm-overhead.fraction`]({% link ops/config.zh.md %}#jobmanager-memory-jvm-overhead-fraction) | 用于其他 JVM 开销的本地内存,例如栈空间、垃圾回收空间等。该内存部分为基于[进程总内存]({% link deployment/memory/mem_setup.zh.md %}#config [...]
 {:.table-bordered}
 <br/>
 
@@ -67,7 +67,7 @@ Flink 在 *1.11* 版本中对内存配置部分进行了较大幅度的改动,
 
 ### 配置 JVM 堆内存
 
-如[配置总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)中所述,另一种配置 JobManager 内存的方式是明确指定 *JVM 堆内存*的大小([`jobmanager.memory.heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-heap-size))。
+如[配置总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)中所述,另一种配置 JobManager 内存的方式是明确指定 *JVM 堆内存*的大小([`jobmanager.memory.heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-heap-size))。
 通过这种方式,用户可以更好地掌控用于以下用途的 *JVM 堆内存*大小。
 * Flink 框架
 * 在作业提交时(例如一些特殊的批处理 Source)及 Checkpoint 完成的回调函数中执行的用户代码
@@ -78,7 +78,7 @@ Flink 需要多少 *JVM 堆内存*,很大程度上取决于运行的作业数
 如果已经明确设置了 *JVM 堆内存*,建议不要再设置*进程总内存*或 *Flink 总内存*,否则可能会造成内存配置冲突。
 
 在启动 JobManager 进程时,Flink 启动脚本及客户端通过设置 JVM 参数 *-Xms* 和 *-Xmx* 来管理 JVM 堆空间的大小。
-请参考 [JVM 参数]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters)。
+请参考 [JVM 参数]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters)。
 
 <a name="configure-off-heap-memory" />
 
@@ -87,18 +87,18 @@ Flink 需要多少 *JVM 堆内存*,很大程度上取决于运行的作业数
 *堆外内存*包括 *JVM 直接内存* 和 *本地内存*。
 可以通过配置参数 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link ops/config.zh.md %}#jobmanager-memory-enable-jvm-direct-memory-limit) 设置是否启用 *JVM 直接内存限制*。
 如果该配置项设置为 `true`,Flink 会根据配置的*堆外内存*大小设置 JVM 参数 *-XX:MaxDirectMemorySize*。
-请参考 [JVM 参数]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters)。
+请参考 [JVM 参数]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters)。
 
 可以通过配置参数 [`jobmanager.memory.off-heap.size`]({% link ops/config.zh.md %}#jobmanager-memory-off-heap-size) 设置堆外内存的大小。
 如果遇到 JobManager 进程抛出 “OutOfMemoryError: Direct buffer memory” 的异常,可以尝试调大这项配置。
-请参考[常见问题]({% link ops/memory/mem_trouble.zh.md %}#outofmemoryerror-direct-buffer-memory)。
+请参考[常见问题]({% link deployment/memory/mem_trouble.zh.md %}#outofmemoryerror-direct-buffer-memory)。
 
 以下情况可能用到堆外内存:
 * Flink 框架依赖(例如 Akka 的网络通信)
 * 在作业提交时(例如一些特殊的批处理 Source)及 Checkpoint 完成的回调函数中执行的用户代码
 
 <span class="label label-info">提示</span>
-如果同时配置了 [Flink 总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)和 [JVM 堆内存](#configure-jvm-heap),且没有配置*堆外内存*,那么*堆外内存*的大小将会是 [Flink 总内存]({% link ops/memory/mem_setup.zh.md %}#configure-total-memory)减去[JVM 堆内存](#configure-jvm-heap)。
+如果同时配置了 [Flink 总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)和 [JVM 堆内存](#configure-jvm-heap),且没有配置*堆外内存*,那么*堆外内存*的大小将会是 [Flink 总内存]({% link deployment/memory/mem_setup.zh.md %}#configure-total-memory)减去[JVM 堆内存](#configure-jvm-heap)。
 这种情况下,*堆外内存*的默认大小将不会生效。
 
 <a name="local-execution" />
diff --git a/docs/ops/memory/mem_setup_tm.md b/docs/deployment/memory/mem_setup_tm.md
similarity index 92%
rename from docs/ops/memory/mem_setup_tm.md
rename to docs/deployment/memory/mem_setup_tm.md
index 8764820..96013d6 100644
--- a/docs/ops/memory/mem_setup_tm.md
+++ b/docs/deployment/memory/mem_setup_tm.md
@@ -29,10 +29,10 @@ Configuring memory usage for your needs can greatly reduce Flink's resource foot
 {:toc}
 
 The further described memory configuration is applicable starting with the release version *1.10*. If you upgrade Flink
-from earlier versions, check the [migration guide]({% link ops/memory/mem_migration.md %}) because many changes were introduced with the *1.10* release.
+from earlier versions, check the [migration guide]({% link deployment/memory/mem_migration.md %}) because many changes were introduced with the *1.10* release.
 
 <span class="label label-info">Note</span> This memory setup guide is relevant <strong>only for TaskManagers</strong>!
-The TaskManager memory components have a similar but more sophisticated structure compared to the [memory model of the JobManager process]({% link ops/memory/mem_setup_jobmanager.md %}).
+The TaskManager memory components have a similar but more sophisticated structure compared to the [memory model of the JobManager process]({% link deployment/memory/mem_setup_jobmanager.md %}).
 
 ## Configure Total Memory
 
@@ -48,7 +48,7 @@ and by the JVM to run the process. The *total Flink memory* consumption includes
 If you run Flink locally (e.g. from your IDE) without creating a cluster, then only a subset of the memory configuration
 options are relevant, see also [local execution](#local-execution) for more details.
 
-Otherwise, the simplest way to setup memory for TaskManagers is to [configure the total memory]({% link ops/memory/mem_setup.md %}#configure-total-memory).
+Otherwise, the simplest way to setup memory for TaskManagers is to [configure the total memory]({% link deployment/memory/mem_setup.md %}#configure-total-memory).
 A more fine-grained approach is described in more detail [here](#configure-heap-and-managed-memory).
 
 The rest of the memory components will be adjusted automatically, based on default values or additionally configured options.
@@ -86,7 +86,7 @@ The size of *managed memory* can be
 *Size* will override *fraction*, if both are set.
 If neither *size* nor *fraction* is explicitly configured, the [default fraction]({% link ops/config.md %}#taskmanager-memory-managed-fraction) will be used.
 
-See also [how to configure memory for state backends]({% link ops/memory/mem_tuning.md %}#configure-memory-for-state-backends) and [batch jobs]({% link ops/memory/mem_tuning.md %}#configure-memory-for-batch-jobs).
+See also [how to configure memory for state backends]({% link deployment/memory/mem_tuning.md %}#configure-memory-for-state-backends) and [batch jobs]({% link deployment/memory/mem_tuning.md %}#configure-memory-for-batch-jobs).
 
 #### Consumer Weights
 
@@ -117,7 +117,7 @@ The off-heap memory which is allocated by user code should be accounted for in *
 You should only change this value if you are sure that the Flink framework needs more memory.
 
 Flink includes the *framework off-heap memory* and *task off-heap memory* into the *direct memory* limit of the JVM,
-see also [JVM parameters]({% link ops/memory/mem_setup.md %}#jvm-parameters).
+see also [JVM parameters]({% link deployment/memory/mem_setup.md %}#jvm-parameters).
 
 <span class="label label-info">Note</span> Although, native non-direct memory usage can be accounted for as a part of the
 *framework off-heap memory* or *task off-heap memory*, it will result in a higher JVM's *direct memory* limit in this case.
@@ -145,9 +145,9 @@ which affect the size of the respective components:
 | [Managed memory](#managed-memory)                                  | 
[`taskmanager.memory.managed.size`]({% link ops/config.md 
%}#taskmanager-memory-managed-size) <br/> 
[`taskmanager.memory.managed.fraction`]({% link ops/config.md 
%}#taskmanager-memory-managed-fraction)                                         
                                                                            | 
Native memory managed by Flink, reserved for sorting, hash tables, caching of 
intermediate results an [...]
 | [Framework Off-heap Memory](#framework-memory)                     | 
[`taskmanager.memory.framework.off-heap.size`]({% link ops/config.md 
%}#taskmanager-memory-framework-off-heap-size)                                  
                                                                                
                                                                               
| [Off-heap direct (or native) 
memory](#configure-off-heap-memory-direct-or-native) dedicated to Flink 
framework  [...]
 | [Task Off-heap Memory](#configure-off-heap-memory-direct-or-native)| 
[`taskmanager.memory.task.off-heap.size`]({% link ops/config.md 
%}#taskmanager-memory-task-off-heap-size)                                       
                                                                                
                                                                                
    | [Off-heap direct (or native) 
memory](#configure-off-heap-memory-direct-or-native) dedicated to Flink 
applicatio [...]
-| Network Memory                                                     | 
[`taskmanager.memory.network.min`]({% link ops/config.md 
%}#taskmanager-memory-network-min) <br/> [`taskmanager.memory.network.max`]({% 
link ops/config.md %}#taskmanager-memory-network-max) <br/> 
[`taskmanager.memory.network.fraction`]({% link ops/config.md 
%}#taskmanager-memory-network-fraction)                               | Direct 
memory reserved for data record exchange between tasks (e.g. buffering for the 
trans [...]
-| [JVM metaspace]({% link ops/memory/mem_setup.md %}#jvm-parameters)           
          | [`taskmanager.memory.jvm-metaspace.size`]({% link ops/config.md 
%}#taskmanager-memory-jvm-metaspace-size)                                       
                                                                                
                                                                                
    | Metaspace size of the Flink JVM process                                   
                  [...]
-| JVM Overhead                                                       | 
[`taskmanager.memory.jvm-overhead.min`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-min) <br/> 
[`taskmanager.memory.jvm-overhead.max`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-max) <br/> 
[`taskmanager.memory.jvm-overhead.fraction`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-fraction) | Native memory reserved for other 
JVM overhead: e.g. thread stacks, code cache, garbage coll [...]
+| Network Memory                                                     | 
[`taskmanager.memory.network.min`]({% link ops/config.md 
%}#taskmanager-memory-network-min) <br/> [`taskmanager.memory.network.max`]({% 
link ops/config.md %}#taskmanager-memory-network-max) <br/> 
[`taskmanager.memory.network.fraction`]({% link ops/config.md 
%}#taskmanager-memory-network-fraction)                               | Direct 
memory reserved for data record exchange between tasks (e.g. buffering for the 
trans [...]
+| [JVM metaspace]({% link deployment/memory/mem_setup.md %}#jvm-parameters)    
                 | [`taskmanager.memory.jvm-metaspace.size`]({% link 
ops/config.md %}#taskmanager-memory-jvm-metaspace-size)                         
                                                                                
                                                                                
                  | Metaspace size of the Flink JVM process                     
                         [...]
+| JVM Overhead                                                       | 
[`taskmanager.memory.jvm-overhead.min`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-min) <br/> 
[`taskmanager.memory.jvm-overhead.max`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-max) <br/> 
[`taskmanager.memory.jvm-overhead.fraction`]({% link ops/config.md 
%}#taskmanager-memory-jvm-overhead-fraction) | Native memory reserved for other 
JVM overhead: e.g. thread stacks, code cache, garbage coll [...]
 {:.table-bordered}
 <br/>
 
diff --git a/docs/ops/memory/mem_setup_tm.zh.md 
b/docs/deployment/memory/mem_setup_tm.zh.md
similarity index 92%
rename from docs/ops/memory/mem_setup_tm.zh.md
rename to docs/deployment/memory/mem_setup_tm.zh.md
index c8160fb..b709f8e 100644
--- a/docs/ops/memory/mem_setup_tm.zh.md
+++ b/docs/deployment/memory/mem_setup_tm.zh.md
@@ -29,11 +29,11 @@ Flink 的 TaskManager 负责执行用户代码。
 {:toc}
 
 本文接下来介绍的内存配置方法适用于 *1.10* 及以上版本。
-Flink 在 1.10 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link 
ops/memory/mem_migration.zh.md %})。
+Flink 在 1.10 版本中对内存配置部分进行了较大幅度的改动,从早期版本升级的用户请参考[升级指南]({% link 
deployment/memory/mem_migration.zh.md %})。
 
 <span class="label label-info">提示</span>
 本篇内存配置文档<strong>仅针对 TaskManager</strong>!
-与 [JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %}) 
相比,TaskManager 具有相似但更加复杂的内存模型。
+与 [JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}) 
相比,TaskManager 具有相似但更加复杂的内存模型。
 
 <a name="configure-total-memory" />
 
@@ -49,7 +49,7 @@ Flink JVM 进程的*进程总内存(Total Process Memory)*包含了由 Flink
 
 如果你是在本地运行 Flink(例如在 IDE 
中)而非创建一个集群,那么本文介绍的配置并非所有都是适用的,详情请参考[本地执行](#local-execution)。
 
-其他情况下,配置 Flink 内存最简单的方法就是[配置总内存]({% link ops/memory/mem_setup.zh.md 
%}#configure-total-memory)。
+其他情况下,配置 Flink 内存最简单的方法就是[配置总内存]({% link deployment/memory/mem_setup.zh.md 
%}#configure-total-memory)。
 此外,Flink 也支持[更细粒度的内存配置方式](#configure-heap-and-managed-memory)。
 
 Flink 会根据默认值或其他配置参数自动调整剩余内存部分的大小。
@@ -92,7 +92,7 @@ Flink 会根据默认值或其他配置参数自动调整剩余内存部分的
 当同时指定二者时,会优先采用指定的大小(Size)。
 若二者均未指定,会根据[默认占比]({% link ops/config.zh.md 
%}#taskmanager-memory-managed-fraction)进行计算。
 
-请同时参考[如何配置 State Backend 内存]({% link ops/memory/mem_tuning.zh.md 
%}#configure-memory-for-state-backends)以及[如何配置批处理作业内存]({% link 
ops/memory/mem_tuning.zh.md %}#configure-memory-for-batch-jobs)。
+请同时参考[如何配置 State Backend 内存]({% link deployment/memory/mem_tuning.zh.md 
%}#configure-memory-for-state-backends)以及[如何配置批处理作业内存]({% link 
deployment/memory/mem_tuning.zh.md %}#configure-memory-for-batch-jobs)。
 
 <a name="consumer-weights" />
 
@@ -126,7 +126,7 @@ Flink 会根据默认值或其他配置参数自动调整剩余内存部分的
 你也可以调整[框架堆外内存(Framework Off-heap Memory)](#framework-memory)。
 这是一个进阶配置,建议仅在确定 Flink 框架需要更多的内存时调整该配置。
 
-Flink 将*框架堆外内存*和*任务堆外内存*都计算在 JVM 的*直接内存*限制中,请参考 [JVM 参数]({% link 
ops/memory/mem_setup.zh.md %}#jvm-parameters)。
+Flink 将*框架堆外内存*和*任务堆外内存*都计算在 JVM 的*直接内存*限制中,请参考 [JVM 参数]({% link 
deployment/memory/mem_setup.zh.md %}#jvm-parameters)。
 
 <span class="label label-info">提示</span>
 本地内存(非直接内存)也可以被归在*框架堆外内存*或*任务堆外内存*中,在这种情况下 JVM 的*直接内存*限制可能会高于实际需求。
@@ -157,9 +157,9 @@ Flink 会负责管理网络内存,保证其实际用量不会超过配置大
 | [托管内存(Managed memory)](#managed-memory)                                  | 
[`taskmanager.memory.managed.size`]({% link ops/config.zh.md 
%}#taskmanager-memory-managed-size) <br/> 
[`taskmanager.memory.managed.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-managed-fraction)                                         
                                                                            | 由 
Flink 管理的用于排序、哈希表、缓存中间结果及 RocksDB State Backend 的本地内存。                          
        [...]
 | [框架堆外内存(Framework Off-heap Memory)](#framework-memory)                     | 
[`taskmanager.memory.framework.off-heap.size`]({% link ops/config.zh.md 
%}#taskmanager-memory-framework-off-heap-size)                                  
                                                                                
                                                                               
| 用于 Flink 
框架的[堆外内存(直接内存或本地内存)](#configure-off-heap-memory-direct-or-native)(进阶配置)。        
            [...]
 | [任务堆外内存(Task Off-heap Memory)](#configure-off-heap-memory-direct-or-native)| 
[`taskmanager.memory.task.off-heap.size`]({% link ops/config.zh.md 
%}#taskmanager-memory-task-off-heap-size)                                       
                                                                                
                                                                                
    |        用于 Flink 
应用的算子及用户代码的[堆外内存(直接内存或本地内存)](#configure-off-heap-memory-direct-or-native)。      
           [...]
-| 网络内存(Network Memory)                                                     | 
[`taskmanager.memory.network.min`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-min) <br/> [`taskmanager.memory.network.max`]({% 
link ops/config.zh.md %}#taskmanager-memory-network-max) <br/> 
[`taskmanager.memory.network.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-fraction)                               | 
用于任务之间数据传输的直接内存(例如网络传输缓冲)。该内存部分为基于 [Flink 总内存]({% link ops/memory/mem_setup. 
[...]
-| [JVM Metaspace]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters)        
             | [`taskmanager.memory.jvm-metaspace.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-jvm-metaspace-size)                      
                                                                                
                                                                                
                     | Flink JVM 进程的 Metaspace。                                 
                             [...]
-| JVM 开销                                                       | 
[`taskmanager.memory.jvm-overhead.min`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-min) <br/> 
[`taskmanager.memory.jvm-overhead.max`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-max) <br/> 
[`taskmanager.memory.jvm-overhead.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-fraction) | 用于其他 JVM 
开销的本地内存,例如栈空间、垃圾回收空间等。该内存部分为基于[进程总内存]({% link ops/memory/mem_setup.zh.md %}#con 
[...]
+| 网络内存(Network Memory)                                                     | 
[`taskmanager.memory.network.min`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-min) <br/> [`taskmanager.memory.network.max`]({% 
link ops/config.zh.md %}#taskmanager-memory-network-max) <br/> 
[`taskmanager.memory.network.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-fraction)                               | 
用于任务之间数据传输的直接内存(例如网络传输缓冲)。该内存部分为基于 [Flink 总内存]({% link deployment/memory/mem 
[...]
+| [JVM Metaspace]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters) 
                    | [`taskmanager.memory.jvm-metaspace.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-jvm-metaspace-size)                      
                                                                                
                                                                                
                     | Flink JVM 进程的 Metaspace。                                 
                      [...]
+| JVM 开销                                                       | 
[`taskmanager.memory.jvm-overhead.min`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-min) <br/> 
[`taskmanager.memory.jvm-overhead.max`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-max) <br/> 
[`taskmanager.memory.jvm-overhead.fraction`]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-overhead-fraction) | 用于其他 JVM 
开销的本地内存,例如栈空间、垃圾回收空间等。该内存部分为基于[进程总内存]({% link deployment/memory/mem_setup.zh.md 
[...]
 {:.table-bordered}
 <br/>
 
diff --git a/docs/ops/memory/mem_trouble.md 
b/docs/deployment/memory/mem_trouble.md
similarity index 76%
rename from docs/ops/memory/mem_trouble.md
rename to docs/deployment/memory/mem_trouble.md
index d47241f..d5f1f0e 100644
--- a/docs/ops/memory/mem_trouble.md
+++ b/docs/deployment/memory/mem_trouble.md
@@ -35,11 +35,11 @@ greater than 1, etc.) or configuration conflicts. Check the 
documentation chapte
 ## OutOfMemoryError: Java heap space
 
 The exception usually indicates that the *JVM Heap* is too small. You can try 
to increase the JVM Heap size
-by increasing [total memory]({% link ops/memory/mem_setup.md 
%}#configure-total-memory). You can also directly increase
-[task heap memory]({% link ops/memory/mem_setup_tm.md 
%}#task-operator-heap-memory) for TaskManagers or
-[JVM Heap memory]({% link ops/memory/mem_setup_jobmanager.md 
%}#configure-jvm-heap) for JobManagers.
+by increasing [total memory]({% link deployment/memory/mem_setup.md 
%}#configure-total-memory). You can also directly increase
+[task heap memory]({% link deployment/memory/mem_setup_tm.md 
%}#task-operator-heap-memory) for TaskManagers or
+[JVM Heap memory]({% link deployment/memory/mem_setup_jobmanager.md 
%}#configure-jvm-heap) for JobManagers.
 
-<span class="label label-info">Note</span> You can also increase the 
[framework heap memory]({% link ops/memory/mem_setup_tm.md %}#framework-memory)
+<span class="label label-info">Note</span> You can also increase the 
[framework heap memory]({% link deployment/memory/mem_setup_tm.md 
%}#framework-memory)
 for TaskManagers, but you should only change this option if you are sure the 
Flink framework itself needs more memory.
 
 ## OutOfMemoryError: Direct buffer memory
@@ -47,12 +47,12 @@ for TaskManagers, but you should only change this option if 
you are sure the Fli
 The exception usually indicates that the JVM *direct memory* limit is too 
small or that there is a *direct memory leak*.
 Check whether user code or other external dependencies use the JVM *direct 
memory* and that it is properly accounted for.
 You can try to increase its limit by adjusting direct off-heap memory.
-See also how to configure off-heap memory for [TaskManagers]({% link 
ops/memory/mem_setup_tm.md %}#configure-off-heap-memory-direct-or-native),
-[JobManagers]({% link ops/memory/mem_setup_jobmanager.md 
%}#configure-off-heap-memory) and the [JVM arguments]({% link 
ops/memory/mem_setup.md %}#jvm-parameters) which Flink sets.
+See also how to configure off-heap memory for [TaskManagers]({% link 
deployment/memory/mem_setup_tm.md 
%}#configure-off-heap-memory-direct-or-native),
+[JobManagers]({% link deployment/memory/mem_setup_jobmanager.md 
%}#configure-off-heap-memory) and the [JVM arguments]({% link 
deployment/memory/mem_setup.md %}#jvm-parameters) which Flink sets.
 
 ## OutOfMemoryError: Metaspace
 
-The exception usually indicates that [JVM metaspace limit]({% link 
ops/memory/mem_setup.md %}#jvm-parameters) is configured too small.
+The exception usually indicates that [JVM metaspace limit]({% link 
deployment/memory/mem_setup.md %}#jvm-parameters) is configured too small.
 You can try to increase the JVM metaspace option for [TaskManagers]({% link 
ops/config.md %}#taskmanager-memory-jvm-metaspace-size)
 or [JobManagers]({% link ops/config.md 
%}#jobmanager-memory-jvm-metaspace-size).
 
@@ -60,7 +60,7 @@ or [JobManagers]({% link ops/config.md 
%}#jobmanager-memory-jvm-metaspace-size).
 
 This is only relevant for TaskManagers.
 
-The exception usually indicates that the size of the configured [network 
memory]({% link ops/memory/mem_setup_tm.md %}#detailed-memory-model)
+The exception usually indicates that the size of the configured [network 
memory]({% link deployment/memory/mem_setup_tm.md %}#detailed-memory-model)
 is not big enough. You can try to increase the *network memory* by adjusting 
the following options:
 * [`taskmanager.memory.network.min`]({% link ops/config.md 
%}#taskmanager-memory-network-min)
 * [`taskmanager.memory.network.max`]({% link ops/config.md 
%}#taskmanager-memory-network-max)
@@ -77,8 +77,8 @@ If you encounter this problem in the *JobManager* process, 
you can also enable t
 to exclude possible *JVM Direct Memory* leak.
 
 If [RocksDBStateBackend]({% link ops/state/state_backends.md 
%}#the-rocksdbstatebackend) is used, and the memory controlling is disabled,
-you can try to increase the TaskManager's [managed memory]({% link 
ops/memory/mem_setup.md %}#managed-memory).
+you can try to increase the TaskManager's [managed memory]({% link 
deployment/memory/mem_setup.md %}#managed-memory).
 
-Alternatively, you can increase the [JVM Overhead]({% link 
ops/memory/mem_setup.md %}#capped-fractionated-components).
+Alternatively, you can increase the [JVM Overhead]({% link 
deployment/memory/mem_setup.md %}#capped-fractionated-components).
 
-See also [how to configure memory for containers]({% link 
ops/memory/mem_tuning.md %}#configure-memory-for-containers).
+See also [how to configure memory for containers]({% link 
deployment/memory/mem_tuning.md %}#configure-memory-for-containers).
diff --git a/docs/ops/memory/mem_trouble.zh.md 
b/docs/deployment/memory/mem_trouble.zh.md
similarity index 71%
rename from docs/ops/memory/mem_trouble.zh.md
rename to docs/deployment/memory/mem_trouble.zh.md
index 31acbf2..2d7664f 100644
--- a/docs/ops/memory/mem_trouble.zh.md
+++ b/docs/deployment/memory/mem_trouble.zh.md
@@ -33,10 +33,10 @@ under the License.
 ## OutOfMemoryError: Java heap space
 
 该异常说明 JVM 的堆空间过小。
-可以通过增大[总内存]({% link ops/memory/mem_setup.zh.md 
%}#configure-total-memory)、TaskManager 的[任务堆内存]({% link 
ops/memory/mem_setup_tm.zh.md %}#task-operator-heap-memory)、JobManager 的 [JVM 
堆内存]({% link ops/memory/mem_setup_jobmanager.zh.md %}#configure-jvm-heap)等方法来增大 
JVM 堆空间。
+可以通过增大[总内存]({% link deployment/memory/mem_setup.zh.md 
%}#configure-total-memory)、TaskManager 的[任务堆内存]({% link 
deployment/memory/mem_setup_tm.zh.md %}#task-operator-heap-memory)、JobManager 的 
[JVM 堆内存]({% link deployment/memory/mem_setup_jobmanager.zh.md 
%}#configure-jvm-heap)等方法来增大 JVM 堆空间。
 
 <span class="label label-info">提示</span>
-也可以增大 TaskManager 的[框架堆内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#framework-memory)。
+也可以增大 TaskManager 的[框架堆内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#framework-memory)。
 这是一个进阶配置,只有在确认是 Flink 框架自身需要更多内存时才应该去调整。
 
 ## OutOfMemoryError: Direct buffer memory
@@ -44,18 +44,18 @@ under the License.
 该异常通常说明 JVM 的*直接内存*限制过小,或者存在*直接内存泄漏(Direct Memory Leak)*。
 请确认用户代码及外部依赖中是否使用了 JVM *直接内存*,以及如果使用了直接内存,是否配置了足够的内存空间。
 可以通过调整堆外内存来增大直接内存限制。
-有关堆外内存的配置方法,请参考 [TaskManager]({% link ops/memory/mem_setup_tm.zh.md 
%}#configure-off-heap-memory-direct-or-native)、[JobManager]({% link 
ops/memory/mem_setup_jobmanager.zh.md %}#configure-off-heap-memory) 以及 [JVM 
参数]({% link ops/memory/mem_setup.zh.md %}#jvm-parameters)的相关文档。
+有关堆外内存的配置方法,请参考 [TaskManager]({% link deployment/memory/mem_setup_tm.zh.md 
%}#configure-off-heap-memory-direct-or-native)、[JobManager]({% link 
deployment/memory/mem_setup_jobmanager.zh.md %}#configure-off-heap-memory) 以及 
[JVM 参数]({% link deployment/memory/mem_setup.zh.md %}#jvm-parameters)的相关文档。
 
 ## OutOfMemoryError: Metaspace
 
-该异常说明 [JVM Metaspace 限制]({% link ops/memory/mem_setup.zh.md 
%}#jvm-parameters)过小。
+该异常说明 [JVM Metaspace 限制]({% link deployment/memory/mem_setup.zh.md 
%}#jvm-parameters)过小。
 可以尝试调整 [TaskManager]({% link ops/config.zh.md 
%}#taskmanager-memory-jvm-metaspace-size)、[JobManager]({% link ops/config.zh.md 
%}#jobmanager-memory-jvm-metaspace-size) 的 JVM Metaspace。
 
 ## IOException: Insufficient number of network buffers
 
 该异常仅与 TaskManager 相关。
 
-该异常通常说明[网络内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#detailed-memory-model)过小。
+该异常通常说明[网络内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#detailed-memory-model)过小。
 可以通过调整以下配置参数增大*网络内存*:
 * [`taskmanager.memory.network.min`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-min)
 * [`taskmanager.memory.network.max`]({% link ops/config.zh.md 
%}#taskmanager-memory-network-max)
@@ -70,8 +70,8 @@ under the License.
 
 对于 *JobManager* 进程,你还可以尝试启用 *JVM 
直接内存限制*([`jobmanager.memory.enable-jvm-direct-memory-limit`]({% link 
ops/config.zh.md %}#jobmanager-memory-enable-jvm-direct-memory-limit)),以排除 *JVM 
直接内存泄漏*的可能性。
 
-如果使用了 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md 
%}#rocksdbstatebackend) 且没有开启内存控制,也可以尝试增大 TaskManager 的[托管内存]({% link 
ops/memory/mem_setup.zh.md %}#managed-memory)。
+如果使用了 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md 
%}#rocksdbstatebackend) 且没有开启内存控制,也可以尝试增大 TaskManager 的[托管内存]({% link 
deployment/memory/mem_setup.zh.md %}#managed-memory)。
 
-此外,还可以尝试增大 [JVM 开销]({% link ops/memory/mem_setup.zh.md 
%}#capped-fractionated-components)。
+此外,还可以尝试增大 [JVM 开销]({% link deployment/memory/mem_setup.zh.md 
%}#capped-fractionated-components)。
 
-请参考[如何配置容器内存]({% link ops/memory/mem_tuning.zh.md 
%}#configure-memory-for-containers)。
+请参考[如何配置容器内存]({% link deployment/memory/mem_tuning.zh.md 
%}#configure-memory-for-containers)。
diff --git a/docs/ops/memory/mem_tuning.md 
b/docs/deployment/memory/mem_tuning.md
similarity index 81%
rename from docs/ops/memory/mem_tuning.md
rename to docs/deployment/memory/mem_tuning.md
index bf2a88f..048e115 100644
--- a/docs/ops/memory/mem_tuning.md
+++ b/docs/deployment/memory/mem_tuning.md
@@ -22,7 +22,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-In addition to the [main memory setup guide]({% link ops/memory/mem_setup.md 
%}), this section explains how to set up memory
+In addition to the [main memory setup guide]({% link 
deployment/memory/mem_setup.md %}), this section explains how to set up memory
 depending on the use case and which options are important for each case.
 
 * toc
@@ -30,17 +30,17 @@ depending on the use case and which options are important 
for each case.
 
 ## Configure memory for standalone deployment
 
-It is recommended to configure [total Flink memory]({% link 
ops/memory/mem_setup.md %}#configure-total-memory)
+It is recommended to configure [total Flink memory]({% link 
deployment/memory/mem_setup.md %}#configure-total-memory)
 ([`taskmanager.memory.flink.size`]({% link ops/config.md 
%}#taskmanager-memory-flink-size) or [`jobmanager.memory.flink.size`]({% link 
ops/config.md %}#jobmanager-memory-flink-size))
 or its components for [standalone deployment]({% link 
deployment/resource-providers/cluster_setup.md %}) where you want to declare 
how much memory
-is given to Flink itself. Additionally, you can adjust *JVM metaspace* if it 
causes [problems]({% link ops/memory/mem_trouble.md 
%}#outofmemoryerror-metaspace).
+is given to Flink itself. Additionally, you can adjust *JVM metaspace* if it 
causes [problems]({% link deployment/memory/mem_trouble.md 
%}#outofmemoryerror-metaspace).
 
 The *total Process memory* is not relevant because *JVM overhead* is not 
controlled by Flink or the deployment environment,
 only physical resources of the executing machine matter in this case.
 
 ## Configure memory for containers
 
-It is recommended to configure [total process memory]({% link 
ops/memory/mem_setup.md %}#configure-total-memory)
+It is recommended to configure [total process memory]({% link 
deployment/memory/mem_setup.md %}#configure-total-memory)
 ([`taskmanager.memory.process.size`]({% link ops/config.md 
%}#taskmanager-memory-process-size) or [`jobmanager.memory.process.size`]({% 
link ops/config.md %}#jobmanager-memory-process-size))
 for the containerized deployments ([Kubernetes]({% link 
deployment/resource-providers/kubernetes.md %}), [Yarn]({% link 
deployment/resource-providers/yarn_setup.md %}) or [Mesos]({% link 
deployment/resource-providers/mesos.md %})).
 It declares how much memory in total should be assigned to the Flink *JVM 
process* and corresponds to the size of the requested container.
@@ -52,7 +52,7 @@ to derive the *total process memory* and request a container 
with the memory of
   <strong>Warning:</strong> If Flink or user code allocates unmanaged off-heap 
(native) memory beyond the container size
   the job can fail because the deployment environment can kill the offending 
containers.
 </div>
-See also description of [container memory exceeded]({% link 
ops/memory/mem_trouble.md %}#container-memory-exceeded) failure.
+See also description of [container memory exceeded]({% link 
deployment/memory/mem_trouble.md %}#container-memory-exceeded) failure.
 
 ## Configure memory for state backends
 
@@ -64,16 +64,16 @@ will dictate the optimal memory configurations of your 
cluster.
 ### Heap state backend
 
 When running a stateless job or using a heap state backend 
([MemoryStateBackend]({% link ops/state/state_backends.md 
%}#the-memorystatebackend)
-or [FsStateBackend]({% link ops/state/state_backends.md 
%}#the-fsstatebackend)), set [managed memory]({% link 
ops/memory/mem_setup_tm.md %}#managed-memory) to zero.
+or [FsStateBackend]({% link ops/state/state_backends.md 
%}#the-fsstatebackend)), set [managed memory]({% link 
deployment/memory/mem_setup_tm.md %}#managed-memory) to zero.
 This will ensure that the maximum amount of heap memory is allocated for user 
code on the JVM.
 
 ### RocksDB state backend
 
 The [RocksDBStateBackend]({% link ops/state/state_backends.md 
%}#the-rocksdbstatebackend) uses native memory. By default,
-RocksDB is set up to limit native memory allocation to the size of the 
[managed memory]({% link ops/memory/mem_setup_tm.md %}#managed-memory).
+RocksDB is set up to limit native memory allocation to the size of the 
[managed memory]({% link deployment/memory/mem_setup_tm.md %}#managed-memory).
 Therefore, it is important to reserve enough *managed memory* for your state. 
If you disable the default RocksDB memory control,
 TaskManagers can be killed in containerized deployments if RocksDB allocates 
memory above the limit of the requested container size
-(the [total process memory]({% link ops/memory/mem_setup.md 
%}#configure-total-memory)).
+(the [total process memory]({% link deployment/memory/mem_setup.md 
%}#configure-total-memory)).
 See also [how to tune RocksDB memory]({% link ops/state/large_state_tuning.md 
%}#tuning-rocksdb-memory)
 and [state.backend.rocksdb.memory.managed]({% link ops/config.md 
%}#state-backend-rocksdb-memory-managed).
 
@@ -81,12 +81,12 @@ and [state.backend.rocksdb.memory.managed]({% link 
ops/config.md %}#state-backen
 
 This is only relevant for TaskManagers.
 
-Flink's batch operators leverage [managed memory]({% link 
ops/memory/mem_setup_tm.md %}#managed-memory) to run more efficiently.
+Flink's batch operators leverage [managed memory]({% link 
deployment/memory/mem_setup_tm.md %}#managed-memory) to run more efficiently.
 In doing so, some operations can be performed directly on raw data without 
having to be deserialized into Java objects.
-This means that [managed memory]({% link ops/memory/mem_setup_tm.md 
%}#managed-memory) configurations have practical effects
-on the performance of your applications. Flink will attempt to allocate and 
use as much [managed memory]({% link ops/memory/mem_setup_tm.md 
%}#managed-memory)
+This means that [managed memory]({% link deployment/memory/mem_setup_tm.md 
%}#managed-memory) configurations have practical effects
+on the performance of your applications. Flink will attempt to allocate and 
use as much [managed memory]({% link deployment/memory/mem_setup_tm.md 
%}#managed-memory)
 as configured for batch jobs but not go beyond its limits. This prevents 
`OutOfMemoryError`'s because Flink knows precisely
-how much memory it has to leverage. If the [managed memory]({% link 
ops/memory/mem_setup_tm.md %}#managed-memory) is not sufficient,
+how much memory it has to leverage. If the [managed memory]({% link 
deployment/memory/mem_setup_tm.md %}#managed-memory) is not sufficient,
 Flink will gracefully spill to disk.
 
 ## Configure memory for sort-merge blocking shuffle
diff --git a/docs/ops/memory/mem_tuning.zh.md 
b/docs/deployment/memory/mem_tuning.zh.md
similarity index 72%
rename from docs/ops/memory/mem_tuning.zh.md
rename to docs/deployment/memory/mem_tuning.zh.md
index 9b47be4..4734e45 100644
--- a/docs/ops/memory/mem_tuning.zh.md
+++ b/docs/deployment/memory/mem_tuning.zh.md
@@ -22,7 +22,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-本文在的基本的[配置指南]({% link ops/memory/mem_setup.zh.md 
%})的基础上,介绍如何根据具体的使用场景调整内存配置,以及在不同使用场景下分别需要重点关注哪些配置参数。
+本文在基本的[配置指南]({% link deployment/memory/mem_setup.zh.md %})的基础上,介绍如何根据具体的使用场景调整内存配置,以及在不同使用场景下分别需要重点关注哪些配置参数。
 
 * toc
 {:toc}
@@ -32,8 +32,8 @@ under the License.
 ## 独立部署模式(Standalone Deployment)下的内存配置
 
 [独立部署模式]({% link deployment/resource-providers/cluster_setup.zh.md 
%})下,我们通常更关注 Flink 应用本身使用的内存大小。
-建议配置 [Flink 总内存]({% link ops/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.flink.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-flink-size) 或者 
[`jobmanager.memory.flink.size`]({% link ops/config.zh.md 
%}#jobmanager-memory-flink-size.zh.md %}))或其组成部分。
-此外,如果出现 [Metaspace 不足的问题]({% link ops/memory/mem_trouble.zh.md 
%}#outofmemoryerror-metaspace),可以调整 *JVM Metaspace* 的大小。
+建议配置 [Flink 总内存]({% link deployment/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.flink.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-flink-size) 或者 
[`jobmanager.memory.flink.size`]({% link ops/config.zh.md 
%}#jobmanager-memory-flink-size))或其组成部分。
+此外,如果出现 [Metaspace 不足的问题]({% link deployment/memory/mem_trouble.zh.md 
%}#outofmemoryerror-metaspace),可以调整 *JVM Metaspace* 的大小。
 
 这种情况下通常无需配置*进程总内存*,因为不管是 Flink 还是部署环境都不会对 *JVM 开销* 进行限制,它只与机器的物理资源相关。
 
@@ -41,7 +41,7 @@ under the License.
 
 ## 容器(Container)的内存配置
 
-在容器化部署模式(Containerized Deployment)下([Kubernetes]({% link 
deployment/resource-providers/kubernetes.zh.md %})、[Yarn]({% link 
deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})),建议配置[进程总内存]({% link 
ops/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.process.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-process-size) 或者 
[`jobmanager.memory.process.size`]({% link ops/config.zh.md %}#jobmanager-memor 
[...]
+在容器化部署模式(Containerized Deployment)下([Kubernetes]({% link 
deployment/resource-providers/kubernetes.zh.md %})、[Yarn]({% link 
deployment/resource-providers/yarn_setup.zh.md %}) 或 [Mesos]({% link 
deployment/resource-providers/mesos.zh.md %})),建议配置[进程总内存]({% link 
deployment/memory/mem_setup.zh.md 
%}#configure-total-memory)([`taskmanager.memory.process.size`]({% link 
ops/config.zh.md %}#taskmanager-memory-process-size) 或者 
[`jobmanager.memory.process.size`]({% link ops/config.zh.md %}#jobmanage [...]
 该配置参数用于指定分配给 Flink *JVM 进程*的总内存,也就是需要申请的容器大小。
 
 <span class="label label-info">提示</span>
@@ -51,7 +51,7 @@ under the License.
   <strong>注意:</strong> 如果 Flink 
或者用户代码分配超过容器大小的非托管的堆外(本地)内存,部署环境可能会杀掉超用内存的容器,造成作业执行失败。
 </div>
 
-请参考[容器内存超用]({% link ops/memory/mem_trouble.zh.md 
%}#container-memory-exceeded)中的相关描述。
+请参考[容器内存超用]({% link deployment/memory/mem_trouble.zh.md 
%}#container-memory-exceeded)中的相关描述。
 
 <a name="configure-memory-for-state-backends" />
 
@@ -64,27 +64,27 @@ under the License.
 ### Heap State Backend
 
 执行无状态作业或者使用 Heap State Backend([MemoryStateBackend]({% link 
ops/state/state_backends.zh.md %}#memorystatebackend)
-或 [FsStateBackend]({% link ops/state/state_backends.zh.md 
%}#fsstatebackend))时,建议将[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)设置为 0。
+或 [FsStateBackend]({% link ops/state/state_backends.zh.md 
%}#fsstatebackend))时,建议将[托管内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory)设置为 0。
 这样能够最大化分配给 JVM 上用户代码的内存。
 
 ### RocksDB State Backend
 
 [RocksDBStateBackend]({% link ops/state/state_backends.zh.md 
%}#rocksdbstatebackend) 使用本地内存。
-默认情况下,RocksDB 会限制其内存用量不超过用户配置的[*托管内存*]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)。
+默认情况下,RocksDB 会限制其内存用量不超过用户配置的[*托管内存*]({% link 
deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
 因此,使用这种方式存储状态时,配置足够多的*托管内存*是十分重要的。
-如果你关闭了 RocksDB 的内存控制,那么在容器化部署模式下如果 RocksDB 分配的内存超出了申请容器的大小([进程总内存]({% link 
ops/memory/mem_setup.zh.md %}#configure-total-memory)),可能会造成 TaskExecutor 
被部署环境杀掉。
+如果你关闭了 RocksDB 的内存控制,那么在容器化部署模式下如果 RocksDB 分配的内存超出了申请容器的大小([进程总内存]({% link 
deployment/memory/mem_setup.zh.md %}#configure-total-memory)),可能会造成 
TaskExecutor 被部署环境杀掉。
 请同时参考[如何调整 RocksDB 内存]({% link ops/state/large_state_tuning.zh.md 
%}#tuning-rocksdb-memory)以及 [state.backend.rocksdb.memory.managed]({% link 
ops/config.zh.md %}#state-backend-rocksdb-memory-managed)。
 
 <a name="configure-memory-for-batch-jobs" />
 
 ## 批处理作业的内存配置
 
-Flink 批处理算子使用[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)来提高处理效率。
+Flink 批处理算子使用[托管内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory)来提高处理效率。
 算子运行时,部分操作可以直接在原始数据上进行,而无需将数据反序列化成 Java 对象。
-这意味着[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)对应用的性能具有实质上的影响。
-因此 Flink 会在不超过其配置限额的前提下,尽可能分配更多的[托管内存]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory)。
+这意味着[托管内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory)对应用的性能具有实质上的影响。
+因此 Flink 会在不超过其配置限额的前提下,尽可能分配更多的[托管内存]({% link 
deployment/memory/mem_setup_tm.zh.md %}#managed-memory)。
 Flink 明确知道可以使用的内存大小,因此可以有效避免 `OutOfMemoryError` 的发生。
-当[托管内存]({% link ops/memory/mem_setup_tm.zh.md %}#managed-memory)不足时,Flink 
会优雅地将数据落盘。
+当[托管内存]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory)不足时,Flink 会优雅地将数据落盘。
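The managed-memory recommendations in the hunk above (size it generously for RocksDB, rely on it for batch operators) can be sketched as a flink-conf.yaml fragment; the sizes below are illustrative placeholders, not recommendations:

```yaml
# Illustrative TaskManager settings for a containerized deployment
# whose state lives in RocksDB (values are examples only).
taskmanager.memory.process.size: 4096m      # total memory = requested container size
taskmanager.memory.managed.fraction: 0.4    # share of Flink memory reserved as managed memory
state.backend.rocksdb.memory.managed: true  # keep RocksDB's usage within managed memory
```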
 
 ## SortMerge数据Shuffle内存配置
 
diff --git a/docs/deployment/resource-providers/cluster_setup.md 
b/docs/deployment/resource-providers/cluster_setup.md
index 9f2f444..1e57b2d 100644
--- a/docs/deployment/resource-providers/cluster_setup.md
+++ b/docs/deployment/resource-providers/cluster_setup.md
@@ -107,7 +107,7 @@ Please see the [configuration page]({% link ops/config.md 
%}) for details and ad
 In particular,
 
  * the amount of available memory per JobManager 
(`jobmanager.memory.process.size`),
- * the amount of available memory per TaskManager 
(`taskmanager.memory.process.size` and check [memory setup guide]({% link 
ops/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment)),
+ * the amount of available memory per TaskManager 
(`taskmanager.memory.process.size` and check [memory setup guide]({% link 
deployment/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment)),
  * the number of available CPUs per machine (`taskmanager.numberOfTaskSlots`),
  * the total number of CPUs in the cluster (`parallelism.default`) and
  * the temporary directories (`io.tmp.dirs`)
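The options listed above could be combined into a minimal flink-conf.yaml for one cluster node; the values are hypothetical and should be adjusted to the actual hardware:

```yaml
# Assumed example values for a standalone cluster node.
jobmanager.memory.process.size: 1600m   # memory per JobManager
taskmanager.memory.process.size: 4096m  # memory per TaskManager
taskmanager.numberOfTaskSlots: 4        # slots ~ available CPUs per machine
parallelism.default: 8                  # default job parallelism
io.tmp.dirs: /tmp/flink                 # temporary directories
```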
diff --git a/docs/deployment/resource-providers/cluster_setup.zh.md 
b/docs/deployment/resource-providers/cluster_setup.zh.md
index 5d05471..bab6a7d 100644
--- a/docs/deployment/resource-providers/cluster_setup.zh.md
+++ b/docs/deployment/resource-providers/cluster_setup.zh.md
@@ -112,7 +112,7 @@ Flink 目录必须放在所有 worker 节点的相同目录下。你可以使用
 特别地,
 
 * 每个 JobManager 的可用内存值(`jobmanager.memory.process.size`),
-* 每个 TaskManager 的可用内存值 (`taskmanager.memory.process.size`,并检查 [内存调优指南]({% 
link ops/memory/mem_tuning.zh.md 
%}#configure-memory-for-standalone-deployment)),
+* 每个 TaskManager 的可用内存值 (`taskmanager.memory.process.size`,并检查 [内存调优指南]({% 
link deployment/memory/mem_tuning.zh.md 
%}#configure-memory-for-standalone-deployment)),
 * 每台机器的可用 CPU 数(`taskmanager.numberOfTaskSlots`),
 * 集群中所有 CPU 数(`parallelism.default`)和
 * 临时目录(`io.tmp.dirs`)
diff --git a/docs/ops/config.md b/docs/ops/config.md
index 277c8a7..2c5b1e1 100644
--- a/docs/ops/config.md
+++ b/docs/ops/config.md
@@ -157,8 +157,8 @@ Flink tries to shield users as much as possible from the 
complexity of configuri
 In most cases, users should only need to set the values 
`taskmanager.memory.process.size` or `taskmanager.memory.flink.size` (depending 
on the setup), and possibly adjust the ratio of JVM heap and Managed 
Memory via `taskmanager.memory.managed.fraction`. The other options below can 
be used for performance tuning and for fixing memory-related errors.
 
 For a detailed explanation of how these options interact,
-see the documentation on [TaskManager]({% link ops/memory/mem_setup_tm.md %}) 
and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.md %} ) memory 
configurations.
+see the documentation on [TaskManager]({% link 
deployment/memory/mem_setup_tm.md %}) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.md %}) memory 
configurations.
 
 {% include generated/common_memory_section.html %}
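The paragraph above names two mutually alternative top-level options; a sketch of how that typically looks in flink-conf.yaml (example sizes, set exactly one of the two):

```yaml
# Set either process.size (total JVM process, incl. metaspace and overhead) ...
taskmanager.memory.process.size: 4096m
# ... or flink.size (Flink memory only, excl. JVM metaspace and overhead):
# taskmanager.memory.flink.size: 3200m
taskmanager.memory.managed.fraction: 0.4  # ratio of managed memory to total Flink memory
```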
 
diff --git a/docs/ops/config.zh.md b/docs/ops/config.zh.md
index 0b51c05..505d113 100644
--- a/docs/ops/config.zh.md
+++ b/docs/ops/config.zh.md
@@ -157,8 +157,8 @@ Flink tries to shield users as much as possible from the 
complexity of configuri
 In most cases, users should only need to set the values 
`taskmanager.memory.process.size` or `taskmanager.memory.flink.size` (depending 
on the setup), and possibly adjust the ratio of JVM heap and Managed 
Memory via `taskmanager.memory.managed.fraction`. The other options below can 
be used for performance tuning and for fixing memory-related errors.
 
 For a detailed explanation of how these options interact,
-see the documentation on [TaskManager]({% link ops/memory/mem_setup_tm.zh.md 
%}) and
-[JobManager]({% link ops/memory/mem_setup_jobmanager.zh.md %} ) memory 
configurations.
+see the documentation on [TaskManager]({% link 
deployment/memory/mem_setup_tm.zh.md %}) and
+[JobManager]({% link deployment/memory/mem_setup_jobmanager.zh.md %}) memory 
configurations.
 
 {% include generated/common_memory_section.html %}
 
diff --git a/docs/ops/state/state_backends.md b/docs/ops/state/state_backends.md
index 4d8e41a..45c99ce 100644
--- a/docs/ops/state/state_backends.md
+++ b/docs/ops/state/state_backends.md
@@ -74,7 +74,7 @@ The MemoryStateBackend is encouraged for:
   - Local development and debugging
   - Jobs that do hold little state, such as jobs that consist only of 
record-at-a-time functions (Map, FlatMap, Filter, ...). The Kafka Consumer 
requires very little state.
 
-It is also recommended to set [managed memory]({% link 
ops/memory/mem_setup_tm.md %}#managed-memory) to zero.
+It is also recommended to set [managed memory]({% link 
deployment/memory/mem_setup_tm.md %}#managed-memory) to zero.
 This will ensure that the maximum amount of memory is allocated for user code 
on the JVM.
 
 ### The FsStateBackend
@@ -94,7 +94,7 @@ The FsStateBackend is encouraged for:
   - Jobs with large state, long windows, large key/value states.
   - All high-availability setups.
 
-It is also recommended to set [managed memory]({% link 
ops/memory/mem_setup_tm.md %}#managed-memory) to zero.
+It is also recommended to set [managed memory]({% link 
deployment/memory/mem_setup_tm.md %}#managed-memory) to zero.
 This will ensure that the maximum amount of memory is allocated for user code 
on the JVM.
 
 ### The RocksDBStateBackend
@@ -124,7 +124,7 @@ This also means, however, that the maximum throughput that 
can be achieved will
 this state backend. All reads/writes from/to this backend have to go through 
de-/serialization to retrieve/store the state objects, which is also more 
expensive than always working with the
 on-heap representation as the heap-based backends are doing.
 
-Check also recommendations about the [task executor memory configuration]({% 
link ops/memory/mem_tuning.md %}#rocksdb-state-backend) for the 
RocksDBStateBackend.
+Check also recommendations about the [task executor memory configuration]({% 
link deployment/memory/mem_tuning.md %}#rocksdb-state-backend) for the 
RocksDBStateBackend.
 
 RocksDBStateBackend is currently the only backend that offers incremental 
checkpoints (see [here]({% link ops/state/large_state_tuning.md %})). 
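The state-backend recommendations above (managed memory set to zero for the heap-based backends, kept for RocksDB) can be illustrated with a hedged flink-conf.yaml sketch; the values are assumptions, not prescriptions:

```yaml
# For MemoryStateBackend / FsStateBackend: give managed memory back to the heap.
state.backend: filesystem
taskmanager.memory.managed.size: 0

# For RocksDBStateBackend, a sizable managed-memory budget is kept instead, e.g.:
# state.backend: rocksdb
# taskmanager.memory.managed.fraction: 0.4
```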
 
diff --git a/docs/ops/state/state_backends.zh.md 
b/docs/ops/state/state_backends.zh.md
index 4994c40..e421f7a 100644
--- a/docs/ops/state/state_backends.zh.md
+++ b/docs/ops/state/state_backends.zh.md
@@ -71,7 +71,7 @@ MemoryStateBackend 适用场景:
   - 本地开发和调试。
   - 状态很小的 Job,例如:由每次只处理一条记录的函数(Map、FlatMap、Filter 等)构成的 Job。Kafka Consumer 
仅仅需要非常小的状态。
 
-建议同时将 [managed memory]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory) 设为0,以保证将最大限度的内存分配给 JVM 上的用户代码。
+建议同时将 [managed memory]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory) 设为0,以保证将最大限度的内存分配给 JVM 上的用户代码。
 
 ### FsStateBackend
 
@@ -92,7 +92,7 @@ FsStateBackend 适用场景:
   - 状态比较大、窗口比较长、key/value 状态比较大的 Job。
   - 所有高可用的场景。
 
-建议同时将 [managed memory]({% link ops/memory/mem_setup_tm.zh.md 
%}#managed-memory) 设为0,以保证将最大限度的内存分配给 JVM 上的用户代码。
+建议同时将 [managed memory]({% link deployment/memory/mem_setup_tm.zh.md 
%}#managed-memory) 设为0,以保证将最大限度的内存分配给 JVM 上的用户代码。
 
 <a name="the-rocksdbstatebackend" />
 
@@ -120,7 +120,7 @@ RocksDBStateBackend 的适用场景:
 然而,这也意味着使用 RocksDBStateBackend 将会使应用程序的最大吞吐量降低。
 所有的读写都必须序列化、反序列化操作,这个比基于堆内存的 state backend 的效率要低很多。
 
-请同时参考 [Task Executor 内存配置]({% link ops/memory/mem_tuning.zh.md 
%}#rocksdb-state-backend) 中关于 RocksDBStateBackend 的建议。
+请同时参考 [Task Executor 内存配置]({% link deployment/memory/mem_tuning.zh.md 
%}#rocksdb-state-backend) 中关于 RocksDBStateBackend 的建议。
 
 RocksDBStateBackend 是目前唯一支持增量 CheckPoint 的 State Backend (见 [这里]({% link 
ops/state/large_state_tuning.zh.md %}))。
 
diff --git a/docs/release-notes/flink-1.10.md b/docs/release-notes/flink-1.10.md
index 0a98033..feb4af4 100644
--- a/docs/release-notes/flink-1.10.md
+++ b/docs/release-notes/flink-1.10.md
@@ -157,7 +157,7 @@ If you try to reuse your previous Flink configuration 
without any adjustments,
 the new memory model can result in differently computed memory parameters for
 the JVM and, thus, performance changes.
 
-Please, check [the user documentation](../ops/memory/mem_setup.html) for more 
details.
+Please, check [the user documentation](../deployment/memory/mem_setup.html) 
for more details.
 
 ##### Deprecation and breaking changes
 The following options have been removed and have no effect anymore:
diff --git a/docs/release-notes/flink-1.10.zh.md 
b/docs/release-notes/flink-1.10.zh.md
index 0a98033..feb4af4 100644
--- a/docs/release-notes/flink-1.10.zh.md
+++ b/docs/release-notes/flink-1.10.zh.md
@@ -157,7 +157,7 @@ If you try to reuse your previous Flink configuration 
without any adjustments,
 the new memory model can result in differently computed memory parameters for
 the JVM and, thus, performance changes.
 
-Please, check [the user documentation](../ops/memory/mem_setup.html) for more 
details.
+Please, check [the user documentation](../deployment/memory/mem_setup.html) 
for more details.
 
 ##### Deprecation and breaking changes
 The following options have been removed and have no effect anymore:
diff --git a/docs/release-notes/flink-1.11.md b/docs/release-notes/flink-1.11.md
index 6c9f2af..8febdb0 100644
--- a/docs/release-notes/flink-1.11.md
+++ b/docs/release-notes/flink-1.11.md
@@ -79,11 +79,11 @@ Check the updated user documentation for [Flink Docker 
integration](https://ci.a
 ##### Overview
 With 
[FLIP-116](https://cwiki.apache.org/confluence/display/FLINK/FLIP-116%3A+Unified+Memory+Configuration+for+Job+Managers),
 a new memory model has been introduced for the JobManager. New configuration 
options have been introduced to control the memory consumption of the 
JobManager process. This affects all types of deployments: standalone, YARN, 
Mesos, and the new active Kubernetes integration.
 
-Please, check the user documentation for [more 
details](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html).
+Please, check the user documentation for [more 
details](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html).
 
 If you try to reuse your previous Flink configuration without any adjustments, 
the new memory model can result in differently computed memory parameters for 
the JVM and, thus, performance changes or even failures.
 In order to start the JobManager process, you have to specify at least one of 
the following options 
[`jobmanager.memory.flink.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-flink-size),
 
[`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size)
 or 
[`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-me
 [...]
-See also [the migration 
guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration)
 for more information.
+See also [the migration 
guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#migrate-job-manager-memory-configuration)
 for more information.
 
 ##### Deprecation and breaking changes
 The following options are deprecated:
@@ -91,23 +91,23 @@ The following options are deprecated:
  * `jobmanager.heap.mb`
 
 If these deprecated options are still used, they will be interpreted as one of 
the following new options in order to maintain backwards compatibility:
- * [JVM 
Heap](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html#configure-jvm-heap)
 
([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-heap-size))
 for standalone and Mesos deployments
- * [Total Process 
Memory](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html#configure-total-memory)
 
([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size))
 for containerized deployments (Kubernetes and Yarn)
+ * [JVM 
Heap](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-jvm-heap)
 
([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-heap-size))
 for standalone and Mesos deployments
+ * [Total Process 
Memory](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-total-memory)
 
([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size))
 for containerized deployments (Kubernetes and Yarn)
 
 The following options have been removed and have no effect anymore:
  * `containerized.heap-cutoff-ratio`
  * `containerized.heap-cutoff-min`
 
-There is [no container 
cut-off](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#container-cut-off-memory)
 anymore.
+There is [no container 
cut-off](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#container-cut-off-memory)
 anymore.
 
 ##### JVM arguments
 The `direct` and `metaspace` memory of the JobManager's JVM process are now 
limited by configurable values:
  * 
[`jobmanager.memory.off-heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size)
  * 
[`jobmanager.memory.jvm-metaspace.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-jvm-metaspace-size)
 
-See also [JVM 
Parameters](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html#jvm-parameters).
+See also [JVM 
Parameters](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup.html#jvm-parameters).
 
-<span class="label label-warning">Attention</span> These new limits can 
produce the respective `OutOfMemoryError` exceptions if they are not configured 
properly or there is a respective memory leak. See also [the troubleshooting 
guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
+<span class="label label-warning">Attention</span> These new limits can 
produce the respective `OutOfMemoryError` exceptions if they are not configured 
properly or there is a respective memory leak. See also [the troubleshooting 
guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
 
 #### Removal of deprecated mesos.resourcemanager.tasks.mem 
([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
 The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of 
`taskmanager.memory.process.size`, has been completely removed and will have no 
effect anymore in 1.11+.
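The new JobManager limits described in the release notes above can be sketched as a configuration fragment; the sizes are illustrative only:

```yaml
# FLIP-116 JobManager memory model (example values).
jobmanager.memory.process.size: 1600m       # total process memory
jobmanager.memory.off-heap.size: 128m       # caps direct memory (-XX:MaxDirectMemorySize)
jobmanager.memory.jvm-metaspace.size: 256m  # caps metaspace (-XX:MaxMetaspaceSize)
```

If either limit is set too low for the actual workload, the corresponding `OutOfMemoryError` described in the troubleshooting guide is the expected symptom.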
diff --git a/docs/release-notes/flink-1.11.zh.md 
b/docs/release-notes/flink-1.11.zh.md
index c6fc7b5..0344fa1 100644
--- a/docs/release-notes/flink-1.11.zh.md
+++ b/docs/release-notes/flink-1.11.zh.md
@@ -79,11 +79,11 @@ Check the updated user documentation for [Flink Docker 
integration](https://ci.a
 ##### Overview
 With 
[FLIP-116](https://cwiki.apache.org/confluence/display/FLINK/FLIP-116%3A+Unified+Memory+Configuration+for+Job+Managers),
 a new memory model has been introduced for the JobManager. New configuration 
options have been introduced to control the memory consumption of the 
JobManager process. This affects all types of deployments: standalone, YARN, 
Mesos, and the new active Kubernetes integration.
 
-Please, check the user documentation for [more 
details](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html).
+Please, check the user documentation for [more 
details](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html).
 
 If you try to reuse your previous Flink configuration without any adjustments, 
the new memory model can result in differently computed memory parameters for 
the JVM and, thus, performance changes or even failures.
 In order to start the JobManager process, you have to specify at least one of 
the following options 
[`jobmanager.memory.flink.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-flink-size),
 
[`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size)
 or 
[`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-me
 [...]
-See also [the migration 
guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration)
 for more information.
+See also [the migration 
guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#migrate-job-manager-memory-configuration)
 for more information.
 
 ##### Deprecation and breaking changes
 The following options are deprecated:
@@ -91,23 +91,23 @@ The following options are deprecated:
  * `jobmanager.heap.mb`
 
 If these deprecated options are still used, they will be interpreted as one of 
the following new options in order to maintain backwards compatibility:
- * [JVM 
Heap](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html#configure-jvm-heap)
 
([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-heap-size))
 for standalone and Mesos deployments
- * [Total Process 
Memory](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup_jobmanager.html#configure-total-memory)
 
([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size))
 for containerized deployments (Kubernetes and Yarn)
+ * [JVM 
Heap](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-jvm-heap)
 
([`jobmanager.memory.heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-heap-size))
 for standalone and Mesos deployments
+ * [Total Process 
Memory](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup_jobmanager.html#configure-total-memory)
 
([`jobmanager.memory.process.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-process-size))
 for containerized deployments (Kubernetes and Yarn)
 
 The following options have been removed and have no effect anymore:
  * `containerized.heap-cutoff-ratio`
  * `containerized.heap-cutoff-min`
 
-There is [no container 
cut-off](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#container-cut-off-memory)
 anymore.
+There is [no container 
cut-off](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_migration.html#container-cut-off-memory)
 anymore.
 
 ##### JVM arguments
 The `direct` and `metaspace` memory of the JobManager's JVM process are now 
limited by configurable values:
  * 
[`jobmanager.memory.off-heap.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-off-heap-size)
  * 
[`jobmanager.memory.jvm-metaspace.size`](https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#jobmanager-memory-jvm-metaspace-size)
 
-See also [JVM 
Parameters](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_setup.html#jvm-parameters).
+See also [JVM 
Parameters](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_setup.html#jvm-parameters).
 
-<span class="label label-warning">Attention</span> These new limits can 
produce the respective `OutOfMemoryError` exceptions if they are not configured 
properly or there is a respective memory leak. See also [the troubleshooting 
guide](https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
+<span class="label label-warning">Attention</span> These new limits can 
produce the respective `OutOfMemoryError` exceptions if they are not configured 
properly or there is a respective memory leak. See also [the troubleshooting 
guide](https://ci.apache.org/projects/flink/flink-docs-master/deployment/memory/mem_trouble.html#outofmemoryerror-direct-buffer-memory).
 
 #### Removal of deprecated mesos.resourcemanager.tasks.mem 
([FLINK-15198](https://issues.apache.org/jira/browse/FLINK-15198))
 The `mesos.resourcemanager.tasks.mem` option, deprecated in 1.10 in favour of 
`taskmanager.memory.process.size`, has been completely removed and will have no 
effect anymore in 1.11+.
