Here is the rest of the stack trace.  Also, I got this when running the
rowcounter MapReduce job.  Which datanode should I go and check? There are
37 of them :(

    at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:104)
    at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:77)
    at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1341)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2269)
    at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1126)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1118)
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1102)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:1767)
    at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:570)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1039)
Caused by: java.io.IOException: Could not obtain block: blk_-424406918172069880_43644 file=/hbase/test/fd2613c69cf26f7948b6f123cb0d48cb/c1/5597046987759674559
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1977)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1784)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1932)
    at java.io.DataInputStream.read(DataInputStream.java:132)
    at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.util.zip.CheckedInputStream.read(CheckedInputStream.java:42)
    at java.util.zip.GZIPInputStream.readUByte(GZIPInputStream.java:205)
    at java.util.zip.GZIPInputStream.readUShort(GZIPInputStream.java:197)
    at java.util.zip.GZIPInputStream.readHeader(GZIPInputStream.java:136)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:58)
    at java.util.zip.GZIPInputStream.<init>(GZIPInputStream.java:68)
    at org.apache.hadoop.io.compress.GzipCodec$GzipInputStream$ResetableGZIPInputStream.<init>(GzipCodec.java:95)
    at org.apache.hadoop.io.compress.GzipCodec$GzipInputStream.<init>(GzipCodec.java:104)
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:173)
    at org.apache.hadoop.io.compress.GzipCodec.createInputStream(GzipCodec.java:183)
    at org.apache.hadoop.hbas
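
In the meantime I'll try what you suggested, fsck'ing the file and reading it
from the shell.  Something like this, I think (the file path and block id are
taken from the trace above; where the datanode log lives depends on the
install):

    # Show the blocks of that store file and which datanodes hold each replica
    hadoop fsck /hbase/test/fd2613c69cf26f7948b6f123cb0d48cb/c1/5597046987759674559 -files -blocks -locations

    # Try to read the whole file to confirm the block is really unreadable
    hadoop fs -cat /hbase/test/fd2613c69cf26f7948b6f123cb0d48cb/c1/5597046987759674559 > /dev/null

    # On the datanodes fsck reports, grep the datanode log for the block id
    grep 'blk_-424406918172069880' <path-to-datanode-log>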

Viv



On Mon, Mar 21, 2011 at 9:26 PM, Jean-Daniel Cryans <[email protected]> wrote:

> What's the rest of the error message? Is fsck ok? Using the hadoop
> shell, can you read one of those files that's erroring?
>
> And more importantly, did you check the datanode log around the same
> timestamp?
>
> Thx,
>
> J-D
>
> On Mon, Mar 21, 2011 at 6:07 PM, Vivek Krishna <[email protected]>
> wrote:
> > I keep getting this error very often.
> >
> > java.io.IOException: java.io.IOException: Could not seek
> > StoreFileScanner[HFileScanner for reader reader=hdfs://e
> >
> > I have a 30-node cluster with several writers writing data.  Once the
> > write is done and I run the rowcounter job to count the records, I face
> > the above error.  How do I fix this?
> >
> > Viv
> >
>
