This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.14
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.14 by this push:
     new 2a873eb  [FLINK-24120] Add docs for using environment variable MALLOC_ARENA_MAX to avoid unlimited memory increasing
2a873eb is described below

commit 2a873eb05263b7550f98160a3314f5c39feec1a0
Author: Jiayi Liao <[email protected]>
AuthorDate: Fri Sep 3 13:44:56 2021 +0800

    [FLINK-24120] Add docs for using environment variable MALLOC_ARENA_MAX to avoid unlimited memory increasing
    
    This closes #17132.
---
 docs/content.zh/docs/deployment/memory/mem_trouble.md            | 5 ++++-
 .../docs/deployment/resource-providers/standalone/docker.md      | 9 +++++++++
 docs/content/docs/deployment/memory/mem_trouble.md               | 6 ++++--
 .../docs/deployment/resource-providers/standalone/docker.md      | 9 +++++++++
 4 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/docs/content.zh/docs/deployment/memory/mem_trouble.md b/docs/content.zh/docs/deployment/memory/mem_trouble.md
index 7f8bc58..6edbfc5 100644
--- a/docs/content.zh/docs/deployment/memory/mem_trouble.md
+++ b/docs/content.zh/docs/deployment/memory/mem_trouble.md
@@ -72,7 +72,10 @@ under the License.
 
 对于 *JobManager* 进程,你还可以尝试启用 *JVM 直接内存限制*([`jobmanager.memory.enable-jvm-direct-memory-limit`]({{< ref "docs/deployment/config" >}}#jobmanager-memory-enable-jvm-direct-memory-limit)),以排除 *JVM 直接内存泄漏*的可能性。
 
-如果使用了 [RocksDBStateBackend]({{< ref "docs/ops/state/state_backends" >}}#rocksdbstatebackend) 且没有开启内存控制,也可以尝试增大 TaskManager 的[托管内存]({{< ref "docs/deployment/memory/mem_setup" >}}#managed-memory)。
+If [RocksDBStateBackend]({{< ref "docs/ops/state/state_backends" >}}#the-rocksdbstatebackend) is used:
+* and memory control is disabled: you can try to increase the TaskManager's [managed memory]({{< ref "docs/deployment/memory/mem_setup" >}}#managed-memory).
+* and memory control is enabled and non-heap memory grows during savepoints or full checkpoints: this may be caused by the `glibc` memory allocator (see [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=15321)).
+  You can try to set the [environment variable]({{< ref "docs/deployment/config" >}}#forwarding-environment-variables) `MALLOC_ARENA_MAX=1` for the TaskManagers.
 
 此外,还可以尝试增大 [JVM 开销]({{< ref "docs/deployment/memory/mem_setup" >}}#capped-fractionated-components)。
 
diff --git a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
index 244731d..3164485 100644
--- a/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content.zh/docs/deployment/resource-providers/standalone/docker.md
@@ -349,6 +349,15 @@ You could switch back to use `glibc` as the memory allocator to restore the old
       flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
 ```
 
+For users who are still using the `glibc` memory allocator, the [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=15321) can easily be triggered, especially when savepoints or full checkpoints are created with the RocksDBStateBackend.
+Setting the environment variable `MALLOC_ARENA_MAX=1` can prevent unbounded memory growth:
+
+```sh
+    $ docker run \
+      --env MALLOC_ARENA_MAX=1 \
+      flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+```
+
 ### Advanced customization
 
 There are several ways in which you can further customize the Flink image:
diff --git a/docs/content/docs/deployment/memory/mem_trouble.md b/docs/content/docs/deployment/memory/mem_trouble.md
index a623f4b..e0f40d0 100644
--- a/docs/content/docs/deployment/memory/mem_trouble.md
+++ b/docs/content/docs/deployment/memory/mem_trouble.md
@@ -80,8 +80,10 @@ If you encounter this problem in the *JobManager* process, you can also enable t
 [`jobmanager.memory.enable-jvm-direct-memory-limit`]({{< ref "docs/deployment/config" >}}#jobmanager-memory-enable-jvm-direct-memory-limit) option
 to exclude possible *JVM Direct Memory* leak.
 
-If [RocksDBStateBackend]({{< ref "docs/ops/state/state_backends" >}}#the-rocksdbstatebackend) is used, and the memory controlling is disabled,
-you can try to increase the TaskManager's [managed memory]({{< ref "docs/deployment/memory/mem_setup" >}}#managed-memory).
+If [RocksDBStateBackend]({{< ref "docs/ops/state/state_backends" >}}#the-rocksdbstatebackend) is used:
+* and memory control is disabled: you can try to increase the TaskManager's [managed memory]({{< ref "docs/deployment/memory/mem_setup" >}}#managed-memory).
+* and memory control is enabled and non-heap memory grows during savepoints or full checkpoints: this may be caused by the `glibc` memory allocator (see [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=15321)).
+  You can try to set the [environment variable]({{< ref "docs/deployment/config" >}}#forwarding-environment-variables) `MALLOC_ARENA_MAX=1` for the TaskManagers.
 
 Alternatively, you can increase the [JVM Overhead]({{< ref "docs/deployment/memory/mem_setup" >}}#capped-fractionated-components).
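The environment-variable forwarding that the added text links to uses Flink's `containerized.*` configuration options; as a hedged sketch (option names should be verified against your Flink version's configuration docs), the TaskManager setting could look like:

```yaml
# flink-conf.yaml fragment (sketch): forward MALLOC_ARENA_MAX to the
# TaskManager containers of a containerized (e.g. YARN) deployment.
containerized.taskmanager.env.MALLOC_ARENA_MAX: "1"
```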
 
diff --git a/docs/content/docs/deployment/resource-providers/standalone/docker.md b/docs/content/docs/deployment/resource-providers/standalone/docker.md
index 1577cc3..9db4b5f 100644
--- a/docs/content/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/content/docs/deployment/resource-providers/standalone/docker.md
@@ -349,6 +349,15 @@ You could switch back to use `glibc` as the memory allocator to restore the old
       flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
 ```
 
+For users who are still using the `glibc` memory allocator, the [glibc bug](https://sourceware.org/bugzilla/show_bug.cgi?id=15321) can easily be triggered, especially when savepoints or full checkpoints are created with the RocksDBStateBackend.
+Setting the environment variable `MALLOC_ARENA_MAX=1` can prevent unbounded memory growth:
+
+```sh
+    $ docker run \
+      --env MALLOC_ARENA_MAX=1 \
+      flink:{{< stable >}}{{< version >}}-scala{{< scala_version >}}{{< /stable >}}{{< unstable >}}latest{{< /unstable >}} <jobmanager|standalone-job|taskmanager>
+```
+
 ### Advanced customization
 
 There are several ways in which you can further customize the Flink image:
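For Docker deployments managed with Docker Compose rather than plain `docker run`, the same variable can be set in the service definition; a minimal hypothetical fragment (the service name and image tag are assumptions, not from the patch above):

```yaml
# docker-compose.yml fragment (sketch; names are illustrative only)
services:
  taskmanager:
    image: flink:latest          # pick the tag matching your setup
    command: taskmanager
    environment:
      - MALLOC_ARENA_MAX=1       # cap glibc at a single malloc arena
```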
