[ https://issues.apache.org/jira/browse/HDFS-1414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915087#action_12915087 ]

Tanping Wang commented on HDFS-1414:
------------------------------------

I removed the hadoop-14-dfs-dir.tgz that I generated with added VERSION files, 
because after adding clusterID and blockpoolID we need to generate new 
FSImages for use in TestUpgradeFromImage.  However, as I commented in 
HADOOP-2797, since there is no test program to generate these FSImages, I need 
to write one that covers the following categories:

    * zero length files
    * file with replication set higher than number of datanodes
    * file with no .crc file
    * file with corrupt .crc file
    * file with multiple blocks (will need to set dfs.block.size to a small value)
    * file with multiple checksum blocks
    * empty directory
    * all of the above again but with a different io.bytes.per.checksum setting

Except for the very intuitive ones, such as zero-length files or an empty 
directory, I need to spend some time figuring out how to create:

    * file with replication set higher than number of datanodes
    * file with no .crc file
    * file with corrupt .crc file
    * file with multiple blocks (will need to set dfs.block.size to a small value)
    * file with multiple checksum blocks

Anyhow, I think this should make the test easier to maintain than it is now.

> HDFS federation : fix unit test cases
> -------------------------------------
>
>                 Key: HDFS-1414
>                 URL: https://issues.apache.org/jira/browse/HDFS-1414
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Tanping Wang
>         Attachments: HDFS1414-branchHDFS1052.1.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
