[ https://issues.apache.org/jira/browse/HDFS-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16184501#comment-16184501 ]

Andrew Wang commented on HDFS-12412:
------------------------------------

I've cherry-picked this to branch-3.0 for beta1.

> Change ErasureCodingWorker.stripedReadPool to cached thread pool
> ----------------------------------------------------------------
>
>                 Key: HDFS-12412
>                 URL: https://issues.apache.org/jira/browse/HDFS-12412
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: erasure-coding
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>              Labels: hdfs-ec-3.0-nice-to-have
>             Fix For: 3.0.0-beta1
>
>         Attachments: HDFS-12412.00.patch, HDFS-12412.01.patch
>
>
> {{ErasureCodingWorker}} uses {{stripedReconstructionPool}} to schedule the 
> EC recovery tasks, and {{stripedReadPool}} for the reader threads within 
> each recovery task.  We only need one of them to throttle the speed of the 
> recovery process, because each EC recovery task has a fixed number of source 
> readers (e.g., 3 for RS(3,2)). And given the findings in HDFS-12044, the 
> speed of EC recovery can be throttled by {{stripedReconstructionPool}} with 
> {{xmitsInProgress}}. 
> Moreover, keeping {{stripedReadPool}} makes it difficult for customers to 
> understand and calculate the right balance between 
> {{dfs.datanode.ec.reconstruction.stripedread.threads}}, 
> {{dfs.datanode.ec.reconstruction.stripedblock.threads.size}} and 
> {{maxReplicationStreams}}.  For example, a small {{stripedread.threads}} 
> value (relative to what {{stripedblock.threads.size}} implies) will 
> unnecessarily limit the speed of recovery, leading to a larger MTTR. 
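For context, a minimal sketch of what a cached-style read pool could look like. The class name, constructor arguments, and the RS(6,3) figure below are illustrative assumptions, not the exact patch: the idea is a pool with no core threads and an on-demand maximum, so the number of reader threads simply tracks the active reconstruction tasks instead of a separate {{stripedread.threads}} setting, and throttling stays with the reconstruction pool and {{xmitsInProgress}}.

{code:java}
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class StripedReadPoolSketch {
  /**
   * Cached-style pool: no core threads kept when idle, effectively
   * unbounded maximum, idle reader threads reclaimed after 60 seconds.
   * Throttling is left to the reconstruction pool / xmitsInProgress
   * rather than to this pool's size.
   */
  static ThreadPoolExecutor newStripedReadPool() {
    return new ThreadPoolExecutor(
        0,                         // no core threads when idle
        Integer.MAX_VALUE,         // grow on demand per reconstruction task
        60, TimeUnit.SECONDS,      // reclaim idle reader threads
        new SynchronousQueue<>()); // hand tasks straight to a thread
  }

  public static void main(String[] args) throws Exception {
    ThreadPoolExecutor pool = newStripedReadPool();
    // e.g. one RS(6,3) reconstruction task submitting 6 concurrent source reads
    for (int i = 0; i < 6; i++) {
      final int idx = i;
      pool.submit(() -> System.out.println("reading source block " + idx));
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }
}
{code}

With a {{SynchronousQueue}} the pool starts a new thread for each submitted read instead of queueing it, which is what lets the reconstruction pool remain the single throttle point.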



