[ https://issues.apache.org/jira/browse/FLINK-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17371012#comment-17371012 ]

Jin Xing commented on FLINK-15031:
----------------------------------

Some details of the network memory allocation algorithm:

*For Input:*

Min: networkBuffersPerChannel * numInputChannels + numInputGates * 1

Max: networkBuffersPerChannel * numInputChannels + numInputGates * floatingNetworkBuffersPerGate

*For Output:*

Min: numSubpartitions + 1

Max: numSubpartitions * networkBuffersPerChannel + floatingNetworkBuffersPerGate
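
For readers following along, a minimal sketch of the formulas above (the class and method names are illustrative assumptions, not Flink's actual API):

{code:java}
// Illustrative sketch of the min/max buffer counts described above; not Flink's actual API.
public final class NetworkBufferEstimate {

    /** Minimum input-side buffers: exclusive buffers plus one floating buffer per gate. */
    static int minInputBuffers(int buffersPerChannel, int numInputChannels, int numInputGates) {
        return buffersPerChannel * numInputChannels + numInputGates * 1;
    }

    /** Maximum input-side buffers: exclusive buffers plus all floating buffers per gate. */
    static int maxInputBuffers(int buffersPerChannel, int numInputChannels, int numInputGates,
                               int floatingBuffersPerGate) {
        return buffersPerChannel * numInputChannels + numInputGates * floatingBuffersPerGate;
    }

    /** Minimum output-side buffers: one buffer per subpartition plus one extra. */
    static int minOutputBuffers(int numSubpartitions) {
        return numSubpartitions + 1;
    }

    /** Maximum output-side buffers: exclusive buffers per subpartition plus the floating buffers. */
    static int maxOutputBuffers(int numSubpartitions, int buffersPerChannel, int floatingBuffersPerGate) {
        return numSubpartitions * buffersPerChannel + floatingBuffersPerGate;
    }
}
{code}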

 

> Regarding whether to include floating buffers in the announced network memory, my main concern is that it could result in a doubled network memory requirement.
 

True, if we announce the network memory requirement of the output as 
"numSubpartitions * networkBuffersPerChannel + floatingNetworkBuffersPerGate", 
considerable extra memory could be wasted. Though we could fully resolve the 
issue of FLINK-12852 by always announcing the max network memory requirement, 
that would indeed be a pain point for resource-sensitive users.
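
For illustration, take a hypothetical result partition with numSubpartitions = 500, networkBuffersPerChannel = 2, and floatingNetworkBuffersPerGate = 8 (made-up figures, assuming the default 32 KB buffer size):

Min: 500 + 1 = 501 buffers, roughly 15.7 MB

Max: 500 * 2 + 8 = 1008 buffers, roughly 31.5 MB

Always announcing the max would roughly double the declared requirement for such a partition.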

 

+1 on the advice from [~zhuzh]: add a fraction-style configuration in the 
range of [0.0, 1.0], giving users a knob to trade off between min and max.

Any advice on naming the configuration? How about 
taskmanager.network.memory.announcing-fraction-for-floating ?
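
To make the proposal concrete, a minimal sketch of how such a fraction could interpolate between min and max (the option name is the proposal above and still under discussion; everything else here is an illustrative assumption):

{code:java}
// Announced buffers = min + fraction * (max - min).
// fraction = 0.0 announces only the minimum; fraction = 1.0 announces the full maximum.
public final class AnnouncedBuffers {

    static int announcedBuffers(int minBuffers, int maxBuffers, double fraction) {
        if (fraction < 0.0 || fraction > 1.0) {
            throw new IllegalArgumentException(
                "taskmanager.network.memory.announcing-fraction-for-floating must be in [0.0, 1.0]");
        }
        // Round up so a non-zero fraction never announces less than intended.
        return minBuffers + (int) Math.ceil(fraction * (maxBuffers - minBuffers));
    }
}
{code}

E.g. with the numbers from the earlier illustration (min = 501, max = 1008), fraction = 0.5 would announce 501 + ceil(0.5 * 507) = 755 buffers.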

 

> Automatically calculate required network memory for fine-grained jobs
> ---------------------------------------------------------------------
>
>                 Key: FLINK-15031
>                 URL: https://issues.apache.org/jira/browse/FLINK-15031
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Coordination
>    Affects Versions: 1.10.0
>            Reporter: Zhu Zhu
>            Assignee: Jin Xing
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 1.12.0
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In cases where resources are specified, we expect each operator to declare 
> the resources it requires before using them. In this way, no resource-related 
> error should happen as long as resources are not used beyond what was 
> declared. This ensures a deployed task will not fail due to insufficient 
> resources in the TM, which may cause unnecessary failures and may even cause 
> a job to hang forever, failing repeatedly on deploying tasks to a TM with 
> insufficient resources.
> Shuffle memory is the last missing piece for this goal at the moment. Minimum 
> network buffers are required for tasks to work. Currently a task may be 
> deployed to a TM with insufficient network buffers and then fail on launching.
> To avoid that, we should calculate required network memory for a 
> task/SlotSharingGroup before allocating a slot for it.
> The required shuffle memory can be derived from the number of required 
> network buffers. The number of buffers required by a task (ExecutionVertex) is
> {code:java}
> exclusive buffers for input channels (i.e. numInputChannels * buffersPerChannel)
>   + required buffers for the result partition buffer pool (currently numberOfSubpartitions + 1)
> {code}
> Note that this is for the {{NettyShuffleService}} case. For custom shuffle 
> services, currently there is no way to get the required shuffle memory of a 
> task.
> To keep it simple under dynamic slot sharing, the required shuffle memory for 
> a task should be the max required shuffle memory over all {{ExecutionVertex}}es 
> of the same {{ExecutionJobVertex}}, and the required shuffle memory for a 
> slot sharing group should be the sum of the shuffle memory of each 
> {{ExecutionJobVertex}} instance within it.
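
A minimal sketch of the per-task formula and the max/sum aggregation described in the quoted description (the class, method, and parameter names are illustrative assumptions, not the actual Flink scheduler API):

{code:java}
import java.util.List;

// Illustrative sketch of the aggregation described above; not Flink's actual API.
public final class ShuffleMemoryEstimator {

    /** Buffers required by one task: exclusive input buffers plus the result partition pool. */
    static int buffersForTask(int numInputChannels, int buffersPerChannel, int numSubpartitions) {
        return numInputChannels * buffersPerChannel + (numSubpartitions + 1);
    }

    /** Per-JobVertex requirement under dynamic slot sharing: max over all parallel subtasks. */
    static int buffersForJobVertex(List<Integer> perSubtaskBuffers) {
        return perSubtaskBuffers.stream().mapToInt(Integer::intValue).max().orElse(0);
    }

    /** Requirement for a slot sharing group: sum over the job vertices sharing the slot. */
    static int buffersForSlotSharingGroup(List<Integer> perJobVertexBuffers) {
        return perJobVertexBuffers.stream().mapToInt(Integer::intValue).sum();
    }
}
{code}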


