[
https://issues.apache.org/jira/browse/HDFS-15159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058356#comment-17058356
]
Ayush Saxena commented on HDFS-15159:
-------------------------------------
Why is this written twice:
{code:java}
+ BlockManagerTestUtil.computeAllPendingWork(bm);
+ BlockManagerTestUtil.computeAllPendingWork(bm);
{code}
Could you add a comment explaining why these configurations are being set? The test
passed for me even without them, so it would be good to know why they are needed:
{code:java}
+ conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 1024);
+ conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
+ conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
{code}
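If they are really needed, even short inline comments would make the intent clear,
something like the following (the reasons here are only my guess from typical test
setups, please put the actual ones):
{code:java}
// small block size so the test file spans multiple blocks quickly (assumed reason)
conf.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 1024);
// frequent heartbeats so the NameNode sees DN reports without long waits (assumed reason)
conf.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
// run the redundancy monitor every second so reconstruction work is scheduled fast (assumed reason)
conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
{code}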
There is no need for 4 datanodes. To speed up the test, you can cover your scenario
with just 2 datanodes: create a file with replication 1 and increase the replication
to 2 later, roughly as in the sketch below. Give it a check whether that works for you.
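Something along these lines (an untested sketch, not taken from the patch; the variable
names, file path and the final assertion are placeholders):
{code:java}
// 2 DNs are enough: write at replication 1, then raise it to 2 so that
// one reconstruction gets queued in PendingReconstructionBlocks.
MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
try {
  cluster.waitActive();
  DistributedFileSystem fs = cluster.getFileSystem();
  Path file = new Path("/testPendingReconstruction");
  DFSTestUtil.createFile(fs, file, 1024, (short) 1, 0L);
  fs.setReplication(file, (short) 2); // triggers the pending reconstruction work
  BlockManager bm = cluster.getNamesystem().getBlockManager();
  BlockManagerTestUtil.computeAllPendingWork(bm);
  // ... assert on the pending reconstruction state here ...
} finally {
  cluster.shutdown();
}
{code}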
> Prevent adding same DN multiple times in PendingReconstructionBlocks
> --------------------------------------------------------------------
>
> Key: HDFS-15159
> URL: https://issues.apache.org/jira/browse/HDFS-15159
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: hemanthboyina
> Assignee: hemanthboyina
> Priority: Major
> Attachments: HDFS-15159.001.patch, HDFS-15159.002.patch
>
>