xianjingfeng opened a new issue, #1727:
URL: https://github.com/apache/incubator-uniffle/issues/1727

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the [issues](https://github.com/apache/incubator-uniffle/issues?q=is%3Aissue) and found no similar issues.
   
   
   ### What would you like to be improved?
   
   Currently we put the shuffle data into off-heap memory in the shuffle server, but I found that it still occupies a lot of heap memory.
   The following is the output of `jmap -histo`:
   ```
      1:     189601376    16684921088  io.netty.buffer.UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeDirectByteBuf
      2:     189860728    15188858240  java.nio.DirectByteBuffer ([email protected])
      3:     189605871    13651622712  jdk.internal.ref.Cleaner ([email protected])
      4:     189018520    10585037120  org.apache.uniffle.common.ShufflePartitionedBlock
      5:     189605871     7584234840  java.nio.DirectByteBuffer$Deallocator ([email protected])
   ```
   From the output above, the main cause of the high heap usage is the sheer number of blocks: the `ByteBuf` wrapper, `DirectByteBuffer`, `Cleaner`, and `Deallocator` counts all track the roughly 190 million `ShufflePartitionedBlock` instances, and together these five classes account for about 64 GB of heap. There are so many blocks because each individual block is very small.
   
   ### How should we improve?
   
   Introduce a local allocation buffer, similar to `MSLAB` in HBase, so that many small blocks share a few large off-heap chunks instead of each block owning its own direct buffer; a sketch is given below.
   Refer: https://hbase.apache.org/book.html#gcpause
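   
   A minimal sketch of what such a local allocation buffer could look like on top of Netty. The class name, chunk size, and `allocate` API below are illustrative assumptions, not existing Uniffle code:
   
   ```java
   import io.netty.buffer.ByteBuf;
   import io.netty.buffer.ByteBufAllocator;
   
   /**
    * Sketch of an MSLAB-style local allocation buffer. Instead of one direct
    * ByteBuf (plus its DirectByteBuffer, Cleaner, and Deallocator) per shuffle
    * block, small blocks are carved out of a shared chunk, so the per-block
    * heap overhead collapses to a single slice object.
    */
   public class LocalAllocationBuffer {
     // Hypothetical chunk size; HBase's MSLAB uses 2 MB chunks by default.
     private static final int CHUNK_SIZE = 2 * 1024 * 1024;
   
     private final ByteBufAllocator allocator;
     private ByteBuf currentChunk;
   
     public LocalAllocationBuffer(ByteBufAllocator allocator) {
       this.allocator = allocator;
     }
   
     /**
      * Returns a retained slice of the current chunk; the caller copies the
      * block bytes into it (MSLAB-style). Not thread-safe: one instance per
      * writing thread is assumed. Oversized blocks get a dedicated buffer.
      */
     public ByteBuf allocate(int size) {
       if (size > CHUNK_SIZE) {
         return allocator.directBuffer(size);   // rare large block
       }
       if (currentChunk == null || currentChunk.writableBytes() < size) {
         if (currentChunk != null) {
           currentChunk.release();              // drop our ref; live slices keep theirs
         }
         currentChunk = allocator.directBuffer(CHUNK_SIZE);
       }
       int start = currentChunk.writerIndex();
       currentChunk.writerIndex(start + size);  // reserve the region
       // retainedSlice bumps the chunk's refCnt, so the chunk's off-heap
       // memory is only freed once every block slice has been released.
       return currentChunk.retainedSlice(start, size);
     }
   }
   ```
   
   The trade-off is the same as in HBase: one extra copy of each block into the chunk, in exchange for roughly 190 million fewer `DirectByteBuffer`/`Cleaner`/`Deallocator` objects on the heap.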
   
   ### Are you willing to submit PR?
   
   - [X] Yes, I am willing to submit a PR!

