wsry opened a new pull request #18473:
URL: https://github.com/apache/flink/pull/18473


   ## What is the purpose of the change
   
   In the current sort-shuffle implementation, the maximum amount of buffer 
memory that can be used per result partition for shuffle data reads is fixed at 
32M. However, for jobs with large parallelism, 32M is not enough, while for 
jobs with small parallelism, 32M may waste buffers. This change lets the 
maximum number of buffers usable per result partition adjust according to 
parallelism; the selected values are empirical ones based on TPC-DS test 
results.
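   The policy above can be sketched as a small clamp function. This is only an 
illustrative sketch, not Flink's actual implementation: the class name, the 
per-subpartition factor, and the minimum bound are hypothetical; only the 
existing fixed 32M cap comes from the PR text.

   ```java
   // Hypothetical sketch of a parallelism-aware read-buffer cap.
   // All names and tuning constants are illustrative, not Flink's real code.
   public class ReadBufferCap {
       static final int BUFFER_SIZE_BYTES = 32 * 1024;   // assumed 32 KiB network buffer
       static final long MIN_BYTES = 4L * 1024 * 1024;   // assumed lower bound, 4 MiB
       static final long OLD_CAP_BYTES = 32L * 1024 * 1024; // the fixed 32M cap from the PR text

       /** Scale the per-partition read-buffer budget with parallelism
        *  instead of using a single fixed 32M value. */
       static int maxBuffersPerPartition(int numSubpartitions) {
           // e.g. budget two buffers per downstream subpartition (illustrative factor)
           long target = (long) numSubpartitions * 2 * BUFFER_SIZE_BYTES;
           // clamp: small jobs get a floor, large jobs may exceed the old cap
           long clamped = Math.max(MIN_BYTES, Math.min(OLD_CAP_BYTES * 4, target));
           return (int) (clamped / BUFFER_SIZE_BYTES);
       }

       public static void main(String[] args) {
           System.out.println(maxBuffersPerPartition(10));    // small job: floor applies -> 128 buffers
           System.out.println(maxBuffersPerPartition(10000)); // large job: hits the raised ceiling -> 4096 buffers
       }
   }
   ```

   With such a clamp, a 10-way job no longer reserves the full 32M, and a 
10000-way job is no longer starved by it.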
   
   ## Brief change log
   
     - *Make the maximum number of buffers that can be used per result 
partition for shuffle data reads adjust according to parallelism, instead of 
the fixed 32M*
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
