pnowojski commented on a change in pull request #18351:
URL: https://github.com/apache/flink/pull/18351#discussion_r785910240
##########
File path: docs/content.zh/docs/deployment/memory/network_mem_tuning.md
##########
@@ -87,6 +83,12 @@ The buffer debloating mechanism newly introduced in Flink 1.14 tries, by automatically adjusting the amount of buffered data
In addition, if you want to reduce the amount of buffered data below what buffer debloating currently allows, you may need to set the number of buffers manually.
+#### High parallelism
+
+Currently, the buffer debloating mechanism can work correctly only by some
parallelism values limit. When the parallelism exceeds this value, then
increasing checkpoint time and decreasing throughput can be observed. The best
way to fix this situation is to increase the number of
buffers(`taskmanager.network.memory.buffers-per-channel`,
`taskmanager.network.memory.floating-buffers-per-gate`)
Review comment:
Maybe something like this?
```suggestion
Currently, the buffer debloating mechanism might not perform correctly with
high parallelism (above ~200) using the default configuration. If you observe
reduced throughput or higher than expected checkpointing times, we suggest
increasing the number of floating buffers
(`taskmanager.network.memory.floating-buffers-per-gate`) from the default value
to at least a number equal to the parallelism.
```
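For concreteness, a minimal `flink-conf.yaml` sketch of the tuning suggested above; the parallelism of 500 is a purely hypothetical value, not taken from this PR:
```yaml
# flink-conf.yaml sketch (assumes a job running with parallelism 500 — an
# illustrative value). Following the suggestion above, raise the floating
# buffers per gate from the default (8) to at least the parallelism.
taskmanager.network.memory.floating-buffers-per-gate: 500
```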
##########
File path: docs/content.zh/docs/deployment/memory/network_mem_tuning.md
##########
Review comment:
Plus, please break long lines so that resolving conflicts in the future
is easier.