[ https://issues.apache.org/jira/browse/HDFS-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545001#comment-14545001 ]
Kai Zheng commented on HDFS-8019:
---------------------------------

In my understanding, coding buffer allocation and management would be coordinated by a higher layer. Some factors I can think of for now:

* What kind of buffer to use, heap or direct. This is determined by the configured codec and coder: if it's a Java coder, a heap buffer is good enough; if it's a native one, a direct buffer would be better.
* How many coding tasks are allowed to run concurrently, which might be determined by other configuration, or by facts such as how powerful the CPU is and how much memory is available.

As a basic facility provided for the higher layer, I thought it would be good enough to have an API for the upper layer to adjust these values. If in the future we find such items should be configurable, we can do it then. What concerns me about adding configuration items now is that once we add them, it becomes hard to deprecate, remove, or change them later if we find they're not actually useful, or that they conflict with other items.

> Erasure Coding: erasure coding chunk buffer allocation and management
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8019
>                 URL: https://issues.apache.org/jira/browse/HDFS-8019
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Kai Zheng
>            Assignee: Vinayakumar B
>         Attachments: HDFS-8019-HDFS-7285-01.patch, HDFS-8019-HDFS-7285-02.patch
>
>
> As a task of HDFS-7344, this is to come up with a chunk buffer pool for allocating and managing coding chunk buffers, either on-heap or off-heap. Note this assumes some DataNodes are powerful enough in computing to perform EC coding work, so it is better to have this dedicated buffer pool and management.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
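The comment above describes the two decisions the higher layer would coordinate: heap vs. direct buffers (driven by whether the coder is Java or native) and a pool bound tied to how many coding tasks may run concurrently. A minimal sketch of such a chunk buffer pool is below; the class and method names are illustrative assumptions, not the actual HDFS-8019 API.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical chunk buffer pool sketch: the caller (the higher layer)
// decides heap vs. direct based on the configured coder, and bounds the
// pool size according to the allowed number of concurrent coding tasks.
class ChunkBufferPool {
  private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
  private final boolean direct;   // true when a native coder is configured
  private final int chunkSize;    // size of each coding chunk, in bytes
  private final int maxBuffers;   // bound derived from concurrent coding tasks

  ChunkBufferPool(boolean direct, int chunkSize, int maxBuffers) {
    this.direct = direct;
    this.chunkSize = chunkSize;
    this.maxBuffers = maxBuffers;
  }

  /** Reuse a pooled buffer if one is available, otherwise allocate a new one. */
  ByteBuffer getBuffer() {
    ByteBuffer buf = pool.poll();
    if (buf == null) {
      buf = direct ? ByteBuffer.allocateDirect(chunkSize)
                   : ByteBuffer.allocate(chunkSize);
    }
    buf.clear();
    return buf;
  }

  /** Return a buffer to the pool; drop mismatched buffers or overflow. */
  void putBuffer(ByteBuffer buf) {
    if (buf.isDirect() == direct
        && buf.capacity() == chunkSize
        && pool.size() < maxBuffers) {
      pool.offer(buf);
    }
  }
}
```

With this shape, the heap-vs-direct choice is made once at construction time from the coder configuration, rather than exposed as a user-facing configuration item, matching the comment's preference for an internal API over new config keys.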