Hi,

I am seeing a strange error in the log of HBase (0.90.2):

2011-11-28 15:12:53,049 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded col bloom filter metadata for hdfs://hadoop89:8020/hbase/TC/2833a916eba1c562cdc5533b6af3ddc0/PI/569772658705731893
2011-11-28 15:12:53,052 INFO org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.232.83.107:50010, add to deadNodes and continue
java.io.IOException: Got error in response to OP_READ_BLOCK self=/10.232.83.107:33694, remote=/10.232.83.107:50010 for file /hbase/TC/2833a916eba1c562cdc5533b6af3ddc0/PI/6153988896004900741 for block 2491155619874165730_1067272
        at org.apache.hadoop.hdfs.DFSClient$BlockReader.newBlockReader(DFSClient.java:1487)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1811)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1948)
        at java.io.DataInputStream.readFully(DataInputStream.java:178)
        at java.io.DataInputStream.readFully(DataInputStream.java:152)
        at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1521)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:885)
        at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:819)
        at org.apache.hadoop.hbase.regionserver.StoreFile$Reader.loadFileInfo(StoreFile.java:1002)
        at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:382)
        at org.apache.hadoop.hbase.regionserver.StoreFile.createReader(StoreFile.java:438)
        at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:266)
        at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:208)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2003)
        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:345)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2546)
        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2532)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:262)
        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:94)
        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
2011-11-28 15:12:53,053 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded col bloom filter metadata for hdfs://hadoop89:8020/hbase/TC/4dff91c02a4a7617457e9628318c1ff3/EVENT/7559787874595222989

Then, a few lines later, I can see:
2011-11-28 15:12:53,139 INFO org.apache.hadoop.hbase.regionserver.StoreFile$Reader: Loaded col bloom filter metadata for hdfs://hadoop89:8020/hbase/TC/2833a916eba1c562cdc5533b6af3ddc0/PI/6153988896004900741

So, can I deduce that even though the first read attempt failed everything is OK, or was something lost?

Second (and I certainly need to post this question to the HDFS folks as well, since this is DFS output): from the first line I see that the same node that is the source of the request adds itself as a dead node. Isn't that strange?

Regards,
Mikael.S
