1996fanrui commented on code in PR #20038: URL: https://github.com/apache/flink/pull/20038#discussion_r902614364
########## docs/content/docs/deployment/memory/network_mem_tuning.md: ##########

```diff
@@ -120,6 +120,19 @@ In order to avoid excessive data skew, the number of buffers for each subpartition
 Unlike the input buffer pool, the configured amount of exclusive buffers and floating buffers is only treated as recommended values. If there are not enough buffers available, Flink can make progress with only a single exclusive buffer per output subpartition and zero floating buffers.
+#### Overdraft buffers
+
+For each output subtask can also request up to `taskmanager.network.memory.max-overdraft-buffers-per-gate` (by default 5) extra overdraft buffers.
+Those buffers are only used, if despite presence of a backpressure, Flink can not stop producing more records to the output.
```

Review Comment:
   Thanks @pnowojski for the quick fix! The other comments look good to me, but I still have a question about this one. I think `but the subtask can not gracefully pause its current process.` is too technical. Some Flink users may wonder: what does `gracefully pause its current process` mean? If `it can be easily confused with regular processing of records from the input and producing a single output record.`, could we explain it clearly? Let's make it clear to users that overdraft buffers address scenarios where processing a single record may require multiple network buffers. Then we can list some of those situations, as in your `This can happen in situations like:`.
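For context, the option being documented in the diff above is a regular TaskManager configuration key. A minimal sketch of how a user would set it in `flink-conf.yaml` (the value shown is the default named in the diff; raising it is only an illustration, not a recommendation from this thread):

```yaml
# flink-conf.yaml (sketch)
# Maximum number of extra overdraft buffers each output gate may request
# when a single record needs more network buffers than are available.
# Default per the docs diff under review: 5
taskmanager.network.memory.max-overdraft-buffers-per-gate: 5
```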
