[ https://issues.apache.org/jira/browse/HDFS-14029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adam Antal updated HDFS-14029:
------------------------------
    Description: 
The TestLazyPersistFiles#testFileShouldNotDiscardedIfNNRestarted test should be
improved.

The test sleeps for 6000 ms in a single Thread.sleep call; it should instead wait
in a loop, checking on each iteration whether the corrupt block has been reported
(a loop-based version is sketched after the snippet below).
{code:java}
    cluster.shutdownDataNodes();

    cluster.restartNameNodes();

    // wait for the redundancy monitor to mark the file as corrupt.
    Thread.sleep(2 * DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_DEFAULT * 1000);

    Long corruptBlkCount = (long) Iterators.size(cluster.getNameNode()
        .getNamesystem().getBlockManager().getCorruptReplicaBlockIterator());
{code}
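A loop-based wait could look roughly like the sketch below, e.g. built on GenericTestUtils#waitFor from org.apache.hadoop.test; the 100 ms poll interval and the expected corrupt-replica count of 1 are illustrative assumptions, not values taken from the existing test.
{code:java}
    cluster.shutdownDataNodes();

    cluster.restartNameNodes();

    // Poll the BlockManager instead of sleeping for the full redundancy
    // interval in one go; waitFor re-checks every 100 ms and throws a
    // TimeoutException if the condition never holds within the same
    // overall bound as the original sleep.
    // Requires: import org.apache.hadoop.test.GenericTestUtils;
    GenericTestUtils.waitFor(() -> {
      long corruptBlkCount = Iterators.size(cluster.getNameNode()
          .getNamesystem().getBlockManager().getCorruptReplicaBlockIterator());
      return corruptBlkCount == 1; // assumed expected count, for illustration
    }, 100, 2 * DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_DEFAULT * 1000);
{code}
This keeps the same overall upper bound on the wait but lets the test proceed as soon as the corrupt replica is actually reported.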

Thanks [~knanasi] for the suggestion.

> Sleep in TestLazyPersistFiles should be put into a loop
> -------------------------------------------------------
>
>                 Key: HDFS-14029
>                 URL: https://issues.apache.org/jira/browse/HDFS-14029
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Adam Antal
>            Priority: Trivial
>              Labels: newbie
>
> The TestLazyPersistFiles#testFileShouldNotDiscardedIfNNRestarted test should be
> improved.
> The test sleeps for 6000 ms in a single Thread.sleep call; it should instead wait
> in a loop, checking on each iteration whether the corrupt block has been reported.
> {code:java}
>     cluster.shutdownDataNodes();
>     cluster.restartNameNodes();
>     // wait for the redundancy monitor to mark the file as corrupt.
>     Thread.sleep(2 * DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_DEFAULT * 1000);
>     Long corruptBlkCount = (long) Iterators.size(cluster.getNameNode()
>         .getNamesystem().getBlockManager().getCorruptReplicaBlockIterator());
> {code}
> Thanks [~knanasi] for the suggestion.


