[ https://issues.apache.org/jira/browse/HDFS-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16216760#comment-16216760 ]

Weiwei Yang commented on HDFS-12443:
------------------------------------

Thanks for taking over, [~linyiqun]. Some more background: currently the code 
uses _push_ mode, meaning SCM collects TXs and caches them for DNs, instead of 
_pull_ mode (where a DN fetches TXs from SCM while doing HB processing). This 
was done to keep HB processing as lightweight as possible (I/O-less), but it 
makes it a bit difficult to prepare a reasonably sized batch of TXs for DNs. 
Please take a look at this issue and let me know if you have any ideas to 
resolve it. Thanks!

> Ozone: Improve SCM block deletion throttling algorithm 
> -------------------------------------------------------
>
>                 Key: HDFS-12443
>                 URL: https://issues.apache.org/jira/browse/HDFS-12443
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone, scm
>            Reporter: Weiwei Yang
>            Assignee: Yiqun Lin
>              Labels: OzonePostMerge
>
> Currently SCM periodically scans delLog to send deletion transactions to 
> datanodes. The throttling algorithm is simple: it scans at most 
> {{BLOCK_DELETE_TX_PER_REQUEST_LIMIT}} transactions (by default 50) at a time. 
> This is non-optimal; in the worst case it might cache 50 TXs for 50 different 
> DNs, so each DN gets only 1 TX to process per interval, which makes deletion 
> slow. An improvement is to throttle per datanode, e.g. 50 TXs per datanode 
> per interval (a rough sketch of this idea follows below).
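
A minimal sketch of the per-datanode throttling idea (not the actual SCM code; 
the names {{BLOCK_DELETE_TX_PER_DATANODE_LIMIT}}, {{DeletedBlockTransaction}} 
and {{PerDatanodeThrottler}} are illustrative): instead of capping the total 
number of TXs scanned per interval, cap the number of TXs queued per datanode, 
so one busy DN cannot starve the others.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

public class PerDatanodeThrottler {
  // Hypothetical per-datanode cap, analogous to the existing per-request limit.
  private static final int BLOCK_DELETE_TX_PER_DATANODE_LIMIT = 50;

  /** Placeholder for a deletion transaction scanned from delLog. */
  public static class DeletedBlockTransaction {
    final long txId;
    final UUID datanodeId;
    DeletedBlockTransaction(long txId, UUID datanodeId) {
      this.txId = txId;
      this.datanodeId = datanodeId;
    }
  }

  /**
   * Group scanned transactions by datanode, keeping at most
   * BLOCK_DELETE_TX_PER_DATANODE_LIMIT per datanode for this interval.
   */
  public Map<UUID, List<DeletedBlockTransaction>> throttle(
      List<DeletedBlockTransaction> scannedTxs) {
    Map<UUID, List<DeletedBlockTransaction>> txsByDatanode = new HashMap<>();
    for (DeletedBlockTransaction tx : scannedTxs) {
      List<DeletedBlockTransaction> queued =
          txsByDatanode.computeIfAbsent(tx.datanodeId, k -> new ArrayList<>());
      if (queued.size() < BLOCK_DELETE_TX_PER_DATANODE_LIMIT) {
        queued.add(tx);
      }
      // TXs over the per-DN limit stay in delLog and are picked up in a
      // later interval.
    }
    return txsByDatanode;
  }
}
{code}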


