[ https://issues.apache.org/jira/browse/HDFS-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16220394#comment-16220394 ]
Yiqun Lin commented on HDFS-12443:
----------------------------------

Hi [~cheersyang],

bq. But this approach makes it a bit difficult to prepare a reasonable size of TXs for DNs, please take a look at this issue and let me know if you have any idea to resolve it.

I think this is the same type of problem we discussed in HDFS-12691. Maybe, as you said, we should do some scale testing and then settle on an appropriate value.

Getting back to this JIRA: today I took some time to change the block deletion throttling algorithm. Attaching the initial patch (with some additional log flooding also fixed). [~cheersyang], please have a review.

> Ozone: Improve SCM block deletion throttling algorithm
> -------------------------------------------------------
>
>                 Key: HDFS-12443
>                 URL: https://issues.apache.org/jira/browse/HDFS-12443
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone, scm
>            Reporter: Weiwei Yang
>            Assignee: Yiqun Lin
>              Labels: OzonePostMerge
>         Attachments: HDFS-12443-HDFS-7240.001.patch
>
>
> Currently SCM periodically scans the delLog to send deletion transactions to datanodes. The throttling algorithm is simple: it scans at most {{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} (by default 50) TXs at a time. This is non-optimal; in the worst case it might cache 50 TXs for 50 different DNs, so each DN gets only 1 TX to proceed with per interval, which makes deletion slow. An improvement is to throttle per datanode, e.g. 50 TXs per datanode per interval.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
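As a rough illustration of the per-datanode throttling described above (not the actual SCM implementation from the patch), the selection step could group pending transactions by datanode and cap each group independently. Here transaction IDs are modeled as plain integers and datanodes as string IDs; both are hypothetical simplifications:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: cap the number of deletion TXs handed to each
// datanode per scan interval, instead of one global limit.
public class PerDatanodeThrottler {
  private final int txLimitPerDatanode;

  public PerDatanodeThrottler(int txLimitPerDatanode) {
    this.txLimitPerDatanode = txLimitPerDatanode;
  }

  // For each datanode, take at most txLimitPerDatanode pending TXs,
  // so a busy DN cannot starve the others in the same interval.
  public Map<String, List<Integer>> select(
      Map<String, List<Integer>> pendingTxsByDn) {
    Map<String, List<Integer>> selected = new HashMap<>();
    for (Map.Entry<String, List<Integer>> e : pendingTxsByDn.entrySet()) {
      List<Integer> txs = e.getValue();
      int n = Math.min(txs.size(), txLimitPerDatanode);
      selected.put(e.getKey(), new ArrayList<>(txs.subList(0, n)));
    }
    return selected;
  }
}
```

With a limit of 50, a DN with 200 pending TXs would receive 50 per interval, while a DN with 3 pending TXs still receives all 3 in the same interval.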