[ https://issues.apache.org/jira/browse/HDFS-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su updated HDFS-8607:
----------------------------
    Attachment: HDFS-8607.01.patch

{noformat}
2015-06-15 17:02:35,962 WARN  hdfs.DFSClient (DFSInputStream.java:actualGetFromOneDataNode(1218)) - Connection failure: Failed to connect to /127.0.0.1:58508 for file /srcdat/nine/eight/5842207179401855738 for block BP-1622355698-9.96.1.34-1434358952058:blk_1073741833_1009:java.io.IOException: Got error, status message opReadBlock BP-1622355698-9.96.1.34-1434358952058:blk_1073741833_1009 received exception java.io.IOException: BlockId 1073741833 is not valid., for OP_READ_BLOCK, self=/127.0.0.1:43788, remote=/127.0.0.1:58508, for file /srcdat/nine/eight/5842207179401855738, for pool BP-1622355698-9.96.1.34-1434358952058 block 1073741833_1009
java.io.IOException: Got error, status message opReadBlock BP-1622355698-9.96.1.34-1434358952058:blk_1073741833_1009 received exception java.io.IOException: BlockId 1073741833 is not valid., for OP_READ_BLOCK, self=/127.0.0.1:43788, remote=/127.0.0.1:58508, for file /srcdat/nine/eight/5842207179401855738, for pool BP-1622355698-9.96.1.34-1434358952058 block 1073741833_1009
    at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:453)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:421)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:819)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:698)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:358)
    at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:655)
    at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1178)
    at org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1141)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1099)
...
{noformat}
With the patch, the test now gets the expected IOException, so we know the block corruption is actually exercised and properly handled.
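For context, the core change is to walk the data directory recursively instead of relying on a single {{File.listFiles()}} call. A minimal sketch of such a helper (illustrative only; the helper name and exact shape are not necessarily what is in the patch):
{code}
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.protocol.Block;

// Sketch, not the exact code in HDFS-8607.01.patch: recursively collect
// block files, since File.listFiles() returns only the immediate children
// of a directory (here just "subdir0", not finalized/subdir0/subdir0/blk_*).
private static List<File> collectBlockFiles(File dir) {
  List<File> blockFiles = new ArrayList<>();
  File[] entries = dir.listFiles();
  if (entries == null) {
    return blockFiles;
  }
  for (File entry : entries) {
    if (entry.isDirectory()) {
      blockFiles.addAll(collectBlockFiles(entry)); // descend into subdir0/subdir0/...
    } else if (entry.getName().startsWith(Block.BLOCK_FILE_PREFIX)) {
      blockFiles.add(entry); // an actual blk_* file
    }
  }
  return blockFiles;
}
{code}
Once the block files are really deleted, the test can assert the failure it was always meant to exercise, along these lines (again a sketch; {{fs}} and {{file}} come from the test setup):
{code}
try {
  DFSTestUtil.readFile(fs, file); // read the file whose block was removed
  fail("Expected an IOException reading a file with its block deleted");
} catch (IOException expected) {
  // "BlockId ... is not valid" -- the corruption surfaces, as in the
  // log above, instead of the test passing without deleting anything.
}
{code}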

> TestFileCorruption doesn't work as expected
> -------------------------------------------
>
>                 Key: HDFS-8607
>                 URL: https://issues.apache.org/jira/browse/HDFS-8607
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Walter Su
>            Assignee: Walter Su
>         Attachments: HDFS-8607.01.patch
>
>
> Although the test passes, it is effectively useless:
> {code}
>  77  File[] blocks = data_dir.listFiles();
>  78  assertTrue("Blocks do not exist in data-dir", (blocks != null) && (blocks.length > 0));
>  79  for (int idx = 0; idx < blocks.length; idx++) {
>  80    if (!blocks[idx].getName().startsWith(Block.BLOCK_FILE_PREFIX)) {
>  81      continue;
>  82    }
>  83    System.out.println("Deliberately removing file " + blocks[idx].getName());
>  84    assertTrue("Cannot remove file.", blocks[idx].delete());
>  85  }
> {code}
> The block files live under finalized/subdir0/subdir0, but line 77 only returns "subdir0" because {{File.listFiles()}} is not recursive. So lines 83~84 are never executed, and nothing is ever deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
