[ https://issues.apache.org/jira/browse/HDFS-1523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12965569#action_12965569 ]

Konstantin Boudnik commented on HDFS-1523:
------------------------------------------

Here's the scenario of the test:
- read the file in chunks of 134217728 bytes (128 MB)
- after the last full read there are 513 bytes left to be read
- 512 of those bytes have to be read from the first block
- 1 byte is read from the last block (the second one)
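A quick sketch of the arithmetic behind that breakdown (the constants come from the log below; the class name is just for illustration):

```java
public class ChunkMath {
    public static void main(String[] args) {
        final long FILE_LEN   = 2147484161L;  // fileLength from the LocatedBlocks dump
        final long BLOCK_SIZE = 2147484160L;  // size of the first (large) block
        final long CHUNK      = 134217728L;   // 128 MB read buffer

        long fullReads = FILE_LEN / CHUNK;               // full 128 MB reads
        long tail      = FILE_LEN - fullReads * CHUNK;   // bytes left after them
        long fromFirst = BLOCK_SIZE - fullReads * CHUNK; // tail bytes in block 1
        long fromLast  = tail - fromFirst;               // tail bytes in block 2

        // prints: 16 513 512 1
        System.out.println(fullReads + " " + tail + " " + fromFirst + " " + fromLast);
    }
}
```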

When the test passes, the call to FSNamesystem.getBlockLocationsInternal made before reading the last 513 bytes returns the last block of the file (size=1):

{noformat}
2010-11-30 19:44:49,439 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
2010-11-30 19:44:49,440 DEBUG namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(881)) - last = 
blk_-3599865432887782445_1001
2010-11-30 19:44:49,457 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open       src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat        dst=null        perm=null
2010-11-30 19:44:49,459 DEBUG hdfs.DFSClient 
(DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
  fileLength=2147484161
  underConstruction=false
  blocks=[LocatedBlock{blk_-6779333650185181528_1001; 
getBlockSize()=2147484160; corrupt=false; offset=0; locs=[127.0.0.1:35608]}]
  lastLocatedBlock=LocatedBlock{blk_-3599865432887782445_1001; 
getBlockSize()=1; corrupt=false; offset=2147484160; locs=[127.0.0.1:35608]}
  isLastBlockComplete=true}
...
2010-11-30 19:45:23,880 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: /127.0.0.1:51640, bytes: 2164261380, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: blk_-6779333650185181528_1001, duration: 34361753463
2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_-6779333650185181528_1001, blk_-3599865432887782445_1001]
2010-11-30 19:45:24,030 DEBUG namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(881)) - last = 
blk_-3599865432887782445_1001
2010-11-30 19:45:24,031 INFO  FSNamesystem.audit 
(FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open 
       src=/home/cos/work/H0.23/git/hdfs/build/test/data/2147484160.dat        
dst=null        perm=null
2010-11-30 19:45:24,032 DEBUG datanode.DataNode (DataXceiver.java:<init>(86)) - 
Number of active connections is: 2
2010-11-30 19:45:24,099 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - 
DatanodeRegistration(127.0.0.1:35608, 
storageID=DS-212336177-192.168.102.126-35608-1291175019763, infoPort=46218, 
ipcPort=38099):Number of active connections is: 3
2010-11-30 19:45:24,099 DEBUG datanode.DataNode (BlockSender.java:<init>(140)) 
- block=blk_-3599865432887782445_1001, replica=FinalizedReplica, 
blk_-3599865432887782445_1001, FINALIZED
  getNumBytes()     = 1
  getBytesOnDisk()  = 1
  getVisibleLength()= 1
  getVolume()       = 
/home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
  getBlockFile()    = 
/home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
  unlinked=false
2010-11-30 19:45:24,101 DEBUG datanode.DataNode (BlockSender.java:<init>(231)) 
- replica=FinalizedReplica, blk_-3599865432887782445_1001, FINALIZED
  getNumBytes()     = 1
  getBytesOnDisk()  = 1
  getVisibleLength()= 1
  getVolume()       = 
/home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized
  getBlockFile()    = 
/home/cos/work/H0.23/git/hdfs/build/test/data/dfs/data/data2/current/finalized/blk_-3599865432887782445
  unlinked=false
2010-11-30 19:45:24,103 INFO  DataNode.clienttrace 
(BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35608, dest: 
/127.0.0.1:51644, bytes: 5, op: HDFS_READ, cliID: DFSClient_1273505070, offset: 
0, srvID: DS-212336177-192.168.102.126-35608-1291175019763, blockid: 
blk_-3599865432887782445_1001, duration: 1854472
{noformat}
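As a side note, the bytes: 2164261380 in the first clienttrace line above is larger than the 2147484160-byte block because BlockSender's count appears to include the checksum stream as well as the data. Assuming the default 4-byte CRC per 512 data bytes, the numbers line up exactly (a hedged sanity check, not part of the test itself):

```java
public class WireBytes {
    public static void main(String[] args) {
        final long BLOCK_SIZE = 2147484160L; // first block size, from the log
        final long BPC = 512;                // io.bytes.per.checksum (default)
        final long CRC = 4;                  // bytes per CRC32 checksum

        long chunks = (BLOCK_SIZE + BPC - 1) / BPC; // 4194305 checksum chunks
        long wire   = BLOCK_SIZE + chunks * CRC;    // data plus checksums

        System.out.println(wire); // prints 2164261380, matching the clienttrace line
    }
}
```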

In case of failure:
{noformat}
2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem (FSNamesystem.java:getBlockLocationsInternal(866)) - blocks = [blk_1170274882140601397_1001, blk_2891916181488413346_1001]
2010-11-30 19:35:49,426 DEBUG namenode.FSNamesystem 
(FSNamesystem.java:getBlockLocationsInternal(881)) - last = 
blk_2891916181488413346_1001
2010-11-30 19:35:49,427 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=cos       ip=/127.0.0.1   cmd=open       src=/home/cos/work/hadoop/git/hdfs/build/test/data/2147484160.dat       dst=null        perm=null
2010-11-30 19:35:49,428 DEBUG hdfs.DFSClient 
(DFSInputStream.java:openInfo(113)) - newInfo = LocatedBlocks{
  fileLength=2147484161
  underConstruction=false
  blocks=[LocatedBlock{blk_1170274882140601397_1001; getBlockSize()=2147484160; 
corrupt=false; offset=0; locs=[127.0.0.1:35644]}]
  lastLocatedBlock=LocatedBlock{blk_2891916181488413346_1001; getBlockSize()=1; 
corrupt=false; offset=2147484160; locs=[127.0.0.1:35644]}
  isLastBlockComplete=true}
...
2010-11-30 19:36:16,761 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35644, dest: /127.0.0.1:52290, bytes: 2164194816, op: HDFS_READ, cliID: DFSClient_635470834, offset: 0, srvID: DS-514949605-127.0.0.1-35644-1291174495289, blockid: blk_1170274882140601397_1001, duration: 27320364567
2010-11-30 19:36:16,761 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35644, storageID=DS-514949605-127.0.0.1-35644-1291174495289, infoPort=56924, ipcPort=60439):Number of active connections is: 2
2010-11-30 19:36:16,794 DEBUG datanode.DataNode (DataXceiver.java:<init>(86)) - 
Number of active connections is: 1
2010-11-30 19:36:16,794 DEBUG datanode.DataNode (BlockSender.java:<init>(140)) - block=blk_1170274882140601397_1001, replica=FinalizedReplica, blk_1170274882140601397_1001, FINALIZED
  getNumBytes()     = 2147484160
  getBytesOnDisk()  = 2147484160
  getVisibleLength()= 2147484160
  getVolume()       = 
/home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized
  getBlockFile()    = 
/home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized/blk_1170274882140601397
  unlinked=false
2010-11-30 19:36:16,795 DEBUG datanode.DataNode (BlockSender.java:<init>(231)) 
- replica=FinalizedReplica, blk_1170274882140601397_1001, FINALIZED
  getNumBytes()     = 2147484160
  getBytesOnDisk()  = 2147484160
  getVisibleLength()= 2147484160
  getVolume()       = 
/home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized
  getBlockFile()    = 
/home/cos/work/hadoop/git/hdfs/build/test/data/dfs/data/data1/current/finalized/blk_1170274882140601397
  unlinked=false
2010-11-30 19:36:17,276 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(491)) - src: /127.0.0.1:35644, dest: /127.0.0.1:52296, bytes: 135200256, op: HDFS_READ, cliID: DFSClient_635470834, offset: 2013265920, srvID: DS-514949605-127.0.0.1-35644-1291174495289, blockid: blk_1170274882140601397_1001, duration: 480762241
2010-11-30 19:36:17,276 DEBUG datanode.DataNode (DataXceiver.java:run(135)) - DatanodeRegistration(127.0.0.1:35644, storageID=DS-514949605-127.0.0.1-35644-1291174495289, infoPort=56924, ipcPort=60439):Number of active connections is: 2
2010-11-30 19:36:17,290 WARN  hdfs.DFSClient (DFSInputStream.java:readBuffer(486)) - Exception while reading from blk_1170274882140601397_1001 of /home/cos/work/hadoop/git/hdfs/build/test/data/2147484160.dat from 127.0.0.1:35644:
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
        at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
{noformat}
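Converting the clienttrace byte counts of the failing run back to data bytes (same assumption as above: 4 checksum bytes per 512 data bytes, and that each transfer ended on a checksum-chunk boundary) suggests both transfers of blk_1170274882140601397_1001 stopped at the same data offset, 66048 bytes short of the 2147484160-byte block end, which would explain the premature EOF:

```java
public class FailingReads {
    public static void main(String[] args) {
        final long BLOCK_SIZE = 2147484160L; // first block size, from the log
        final long PACKET = 512 + 4;         // data bytes + CRC bytes per checksum chunk

        // clienttrace "bytes" values from the failing log, converted back to data bytes
        long firstData  = 2164194816L / PACKET * 512;              // transfer from offset 0
        long secondData = 2013265920L + 135200256L / PACKET * 512; // retry from offset 2013265920

        System.out.println(firstData);               // prints 2147418112
        System.out.println(secondData);              // prints 2147418112 -- same endpoint
        System.out.println(BLOCK_SIZE - firstData);  // prints 66048, the undelivered tail
    }
}
```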

So it seems the test fails because the wrong block, or the wrong range of the block, is being read.

> TestLargeBlock is failing on trunk
> ----------------------------------
>
>                 Key: HDFS-1523
>                 URL: https://issues.apache.org/jira/browse/HDFS-1523
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 0.22.0
>            Reporter: Konstantin Boudnik
>
> TestLargeBlock has been failing for more than a week now on 0.22 and trunk with
> {noformat}
> java.io.IOException: Premeture EOF from inputStream
>       at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:118)
>       at org.apache.hadoop.hdfs.BlockReader.readChunk(BlockReader.java:275)
> {noformat}

-- 
This message is automatically generated by JIRA.