TanYuxin-tyx commented on code in PR #21843: URL: https://github.com/apache/flink/pull/21843#discussion_r1101079804
########## docs/content/docs/deployment/memory/network_mem_tuning.md: ##########

```diff
@@ -97,20 +97,17 @@ The actual value of parallelism from which the problem occurs is various from jo
 ## Network buffer lifecycle

 Flink has several local buffer pools - one for the output stream and one for each input gate.
-Each of those pools is limited to at most
+The upper limit of the size of each buffer pool is called the buffer pool **Target**, which is calculated by the following formula.

 `#channels * taskmanager.network.memory.buffers-per-channel + taskmanager.network.memory.floating-buffers-per-gate`

 The size of the buffer can be configured by setting `taskmanager.memory.segment-size`.

 ### Input network buffers

-Buffers in the input channel are divided into exclusive and floating buffers. Exclusive buffers can be used by only one particular channel. A channel can request additional floating buffers from a buffer pool shared across all channels belonging to the given input gate. The remaining floating buffers are optional and are acquired only if there are enough resources available.
+Not all buffers in the buffer pool Target can be obtained eventually. A **Threshold** is introduced to divide the buffer pool Target into two parts. The part below the threshold is called required. The excess buffers, if any, are optional. A task will fail if the required buffers cannot be obtained at runtime. A task will not fail due to not obtaining optional buffers, but it may suffer a performance reduction. If not explicitly configured, the default value of the threshold is Integer.MAX_VALUE for streaming workloads, and 1000 for batch workloads.
```

Review Comment:
   Ok, Fixed.
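To make the Target formula in the diff concrete, here is a minimal sketch of the calculation. The function name `buffer_pool_target` is hypothetical, and the default values below (2 exclusive buffers per channel, 8 floating buffers per gate, 32 KiB segments) are assumed from the Flink documentation's documented defaults; check your actual `taskmanager.network.memory.*` and `taskmanager.memory.segment-size` settings.

```python
# Assumed defaults (verify against your Flink configuration):
BUFFERS_PER_CHANNEL = 2        # taskmanager.network.memory.buffers-per-channel
FLOATING_BUFFERS_PER_GATE = 8  # taskmanager.network.memory.floating-buffers-per-gate
SEGMENT_SIZE = 32 * 1024       # taskmanager.memory.segment-size (32 KiB)

def buffer_pool_target(num_channels: int,
                       buffers_per_channel: int = BUFFERS_PER_CHANNEL,
                       floating_buffers_per_gate: int = FLOATING_BUFFERS_PER_GATE) -> int:
    """Upper limit (the 'Target') on the number of buffers in one gate's pool."""
    return num_channels * buffers_per_channel + floating_buffers_per_gate

# Example: an input gate with 100 channels.
target = buffer_pool_target(100)
print(target)                 # 208 buffers
print(target * SEGMENT_SIZE)  # bytes of network memory at the Target
```

With the Threshold introduced in the diff, only the portion of this Target below the threshold is required; any excess is optional and is acquired only when memory is available.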
