[ 
https://issues.apache.org/jira/browse/HDFS-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599928#comment-15599928
 ] 

Daniel Templeton commented on HDFS-11045:
-----------------------------------------

Looking back at the way the test is constructed, it checks the ratio of 
running time to waiting time.  One big source of variability is that the last 
round may not include any waiting.  For example, if the throttle limit is 
800ms, we might throttle perfectly in the first round (800/200) and then spend 
800ms finishing the scan in the second round with no wait at all, giving a 
final ratio of 1600/200 even though the throttling was exactly correct.
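
To make the arithmetic concrete, here's a tiny sketch of the numbers above 
(the class and variable names are just for illustration, nothing from the 
actual test):

{code:java}
public class ThrottleRatioExample {
  public static void main(String[] args) {
    // With an 800ms limit, a full round should run 800ms and wait 200ms.
    long run = 800 + 800;   // round 1 ran 800ms, round 2 ran 800ms to finish
    long wait = 200 + 0;    // round 1 waited 200ms, round 2 never had to wait
    System.out.println("measured ratio = " + ((double) run / wait));  // 8.0
    System.out.println("expected ratio = " + (800.0 / 200.0));        // 4.0
  }
}
{code}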

An obvious solution would be to not just count total wait and run times, but 
also track the rounds and calculate the ratio per round, maybe keeping a 
running average to produce the final ratio.  It would also have to ignore the 
final, incomplete round.
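
A rough sketch of that idea, assuming the scanner could expose per-round 
run/wait times (the RoundTiming type and averageRatio method are made up for 
illustration, not existing test code):

{code:java}
import java.util.List;

public class PerRoundRatio {
  /** Hypothetical per-round timing record; not an existing test type. */
  public static class RoundTiming {
    final long runMs;
    final long waitMs;
    public RoundTiming(long runMs, long waitMs) {
      this.runMs = runMs;
      this.waitMs = waitMs;
    }
  }

  /**
   * Average the run/wait ratio over all rounds except the last one,
   * which may have finished the scan without needing to wait.
   */
  public static double averageRatio(List<RoundTiming> rounds) {
    double sum = 0;
    int counted = 0;
    for (int i = 0; i < rounds.size() - 1; i++) {
      RoundTiming r = rounds.get(i);
      if (r.waitMs > 0) {
        sum += (double) r.runMs / r.waitMs;
        counted++;
      }
    }
    return counted == 0 ? 0 : sum / counted;
  }
}
{code}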

Another improvement that would probably also go a long way would be to start 
by empirically figuring out how many blocks are needed to get a complete round 
of scanning.
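
Something along these lines could do that probing before the throttled runs 
start; createBlocks and runUnthrottledScan are hypothetical stand-ins for the 
test's own helpers:

{code:java}
import java.util.function.IntConsumer;

public class BlockCountProbe {
  /**
   * Double the block count until one unthrottled scan takes at least a full
   * throttle period (1000ms), so a throttled run is guaranteed to include at
   * least one complete run/wait round.
   */
  public static int findBlockCountForFullRound(IntConsumer createBlocks,
                                               Runnable runUnthrottledScan) {
    int blocks = 100;
    while (true) {
      createBlocks.accept(blocks);                 // populate the DataNode
      long start = System.nanoTime();
      runUnthrottledScan.run();                    // time one unthrottled scan
      long scanMs = (System.nanoTime() - start) / 1_000_000;
      if (scanMs >= 1000) {
        return blocks;
      }
      blocks *= 2;                                 // not enough work yet
    }
  }
}
{code}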

> TestDirectoryScanner#testThrottling fails: Throttle is too permissive
> ---------------------------------------------------------------------
>
>                 Key: HDFS-11045
>                 URL: https://issues.apache.org/jira/browse/HDFS-11045
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0-alpha2
>            Reporter: John Zhuge
>            Priority: Minor
>
>   TestDirectoryScanner.testThrottling:709 Throttle is too permissive
> https://builds.apache.org/job/PreCommit-HDFS-Build/17259/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt


