dawidwys opened a new pull request #18883:
URL: https://github.com/apache/flink/pull/18883


   ## What is the purpose of the change
   Tests that extend UnalignedCheckpointTestBase create a lot of
   MiniClusters. E.g. the rescaling IT case creates 72 tests * 2 clusters
   (pre- & post-rescale). Direct buffers allocated by Netty are only freed
   during GC.
   
   At the same time, Flink uses a pooled allocator (NettyBufferPool, backed
   by Netty's PooledByteBufAllocator), which returns used buffers to the
   pool early so we do not need to wait for GC to kick in. The idea for
   making the tests more stable is to reuse a single NettyBufferPool for
   all clusters started in those tests. That way we can reuse previously
   allocated buffers and do not need to wait until they are freed.
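   To illustrate the underlying idea (not Flink's actual NettyBufferPool,
   which wraps Netty's PooledByteBufAllocator), here is a minimal,
   hypothetical sketch of a direct-buffer pool: a released buffer becomes
   immediately available to the next acquirer, instead of its native memory
   staying allocated until GC collects the ByteBuffer. The class and method
   names below are illustrative only.

   ```java
   import java.nio.ByteBuffer;
   import java.util.ArrayDeque;

   /**
    * Hypothetical illustration of buffer pooling: released direct buffers
    * are handed back to later acquirers right away, so native memory is
    * reused instead of waiting for GC to reclaim it.
    */
   final class DirectBufferPool {
       private final int bufferSize;
       private final ArrayDeque<ByteBuffer> free = new ArrayDeque<>();

       DirectBufferPool(int bufferSize) {
           this.bufferSize = bufferSize;
       }

       synchronized ByteBuffer acquire() {
           ByteBuffer buf = free.poll();
           // Only allocate new native memory when nothing can be reused.
           return buf != null ? buf : ByteBuffer.allocateDirect(bufferSize);
       }

       synchronized void release(ByteBuffer buf) {
           buf.clear();
           free.push(buf); // available immediately, no GC required
       }
   }
   ```

   Sharing one such pool across all "clusters" in a test JVM caps the total
   direct memory at the peak working set, rather than at the sum of what
   every cluster ever allocated.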
   
   Lastly, as a note: this should not be an issue in production setups,
   since we do not start multiple shuffle environments in a single JVM
   process (TM).
   
   
   ## Verifying this change
   
   ???
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
     - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

