samar created HBASE-7402:
----------------------------

             Summary: java.io.IOException: Got error in response to OP_READ_BLOCK
                 Key: HBASE-7402
                 URL: https://issues.apache.org/jira/browse/HBASE-7402
             Project: HBase
          Issue Type: Bug
          Components: HFile
    Affects Versions: 0.94.0, 0.90.4
            Reporter: samar


We are getting this error on our HBase version 0.90.4-cdh3u3:


2012-12-18 02:35:39,082 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /x.x.x.x:xxxxx for file /hbase/table_x/37bea13d03ed9fa611941cc4aad6e8c2/scores/7355825801969613604 for block 3174705353677971357:java.io.IOException: Got error in response to OP_READ_BLOCK self=/x.x.x.x, remote=/x.x.x.x:xxxx for file /hbase/table_x/37bea13d03ed9fa611941cc4aad6e8c2/scores/7355825801969613604 for block 3174705353677971357_1028665
        at org.apache.hadoop.hdfs.DFSClient$RemoteBlockReader.newBlockReader(DFSClient.java:1673)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockReader(DFSClient.java:2383)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:2272)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2438)
        at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
        at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:141)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1094)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:1036)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1446)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1303)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:136)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:96)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:77)
        at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1405)
        at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2467)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1192)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1184)
        at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1168)
        at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:3215)

This causes the HBase RegionServer to hang and stop responding.

According to the NameNode log, the block had already been deleted earlier (as per the timestamps):

2012-12-18 02:25:19,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:xxxxx to delete  blk_3174705353677971357_1028665 blk_-9072685530813588257_1028824
2012-12-18 02:25:19,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:xxxxx to delete  blk_5651962510569886604_1028711
2012-12-18 02:25:22,027 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask x.x.x.x:xxxxx to delete  blk_3174705353677971357_1028665
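
As a cross-check, fsck can list which blocks HDFS currently maps to that store file (CDH3/0.20-era command; the path is the one from the log above):

hadoop fsck /hbase/table_x/37bea13d03ed9fa611941cc4aad6e8c2/scores/7355825801969613604 -files -blocks -locations

If blk_3174705353677971357 no longer shows up in that output, the reading client is holding on to a stale block list for the file.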


It looks like org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream is caching the block location, which is causing this issue.
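
For illustration only, below is a minimal sketch (in the spirit of, but not copied from, the 0.90 BoundedRangeFileInputStream) of a bounded window over a positional-read stream. The class and interface names are made up for the sketch; the point is that every read funnels back into the underlying stream, so if that layer keeps reusing a block location it resolved before the block was deleted, the OP_READ_BLOCK failure above is what surfaces.

import java.io.IOException;
import java.io.InputStream;

/**
 * Simplified sketch, NOT the actual HBase class: exposes a bounded
 * [startOffset, startOffset + length) window over an underlying
 * positional-read stream.
 */
class BoundedRangeStreamSketch extends InputStream {

  /** Minimal positional-read interface, standing in for FSDataInputStream. */
  interface PositionalRead {
    int read(long position, byte[] buffer, int offset, int length) throws IOException;
  }

  private final PositionalRead in;  // underlying stream; block locations may be cached inside it
  private final long endOffset;     // exclusive end of the window
  private long pos;                 // current position, starts at the window's first byte

  BoundedRangeStreamSketch(PositionalRead in, long startOffset, long length) {
    this.in = in;
    this.endOffset = startOffset + length;
    this.pos = startOffset;
  }

  @Override
  public int read() throws IOException {
    byte[] one = new byte[1];
    int n = read(one, 0, 1);
    return n == -1 ? -1 : (one[0] & 0xff);
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    if (pos >= endOffset) {
      return -1;                    // window exhausted
    }
    int toRead = (int) Math.min((long) len, endOffset - pos);
    // Every call goes back through the underlying positional read; if that
    // layer reuses a location resolved before the block was deleted, the
    // read fails with the exception reported above.
    int n = in.read(pos, buf, off, toRead);
    if (n > 0) {
      pos += n;
    }
    return n;
  }
}

The sketch only shows the bounded wrapping; where exactly the stale location is held, in this wrapper or in the DFSInputStream underneath, is the question this report raises.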

