That is interesting. It'd almost point to a shell issue. Enable DEBUG logging on the client so you can see what it is doing, then rerun the shell. Is it at least loading the right region? (Do the region's start and end keys span the asked-for key?) I took a look at your attached ".META." scan. All looks good there. The region specifications look right. If you want to bundle up the region that is failing -- the one the failing key comes out of -- I can take a look here. You could also try playing with the HFile tool: ./bin/hbase org.apache.hadoop.hbase.io.hfile.HFile. Run it with no arguments and it'll print usage. You should be able to get it to dump the content of the region (you need to supply flags like -v to see actual keys, else the tool just runs its check silently). Check for your key. Check things like the timestamp on it. Maybe it's 100 years in advance of now or something?
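For example, something along these lines would fetch the row directly and print the timestamp on each KeyValue so you can sanity-check it (a minimal sketch against the 0.20-era client API, using the table name and key from your mails; everything else here is assumption):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetCheck {
      public static void main(String[] args) throws Exception {
        // 0.20-era construction (assumed environment)
        HTable table = new HTable(new HBaseConfiguration(), "TestTable2");
        Result result = table.get(new Get(Bytes.toBytes("ffffef95bcbf2638")));
        // Print each KeyValue along with its timestamp so the timestamp
        // can be eyeballed for sanity (e.g. far in the future).
        for (KeyValue kv : result.raw()) {
          System.out.println(kv + " timestamp=" + kv.getTimestamp());
        }
      }
    }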
Yours,
St.Ack

On Fri, Oct 30, 2009 at 9:01 AM, Murali Krishna. P <[email protected]> wrote:

> Attached ".META"
>
> Interesting, I was able to get the row from HTable via java code. But from
> the shell, still getting the following:
>
> hbase(main):004:0> get 'TestTable2', 'ffffef95bcbf2638'
> 0 row(s) in 1.2250 seconds
>
> Thanks,
> Murali Krishna
>
> ------------------------------
> *From:* stack <[email protected]>
> *To:* [email protected]
> *Sent:* Fri, 30 October, 2009 8:39:46 PM
> *Subject:* Re: Issue with bulk loader tool
>
> Can you send a listing of ".META."?
>
> hbase> scan ".META."
>
> Also, can you bring a region down from hdfs, tar and gzip it, and then put
> it someplace I can pull so I can take a look?
>
> Thanks,
> St.Ack
>
> On Fri, Oct 30, 2009 at 3:31 AM, Murali Krishna. P <[email protected]> wrote:
>
> > Hi guys,
> > I created a table according to HBASE-48: a mapreduce job creates the
> > HFiles, and then the loadtable.rb script creates the table. Everything
> > worked fine and I was able to scan the table. But when I do a get for a
> > key displayed in the scan output, it does not retrieve the row; the
> > shell says 0 rows.
> >
> > I tried using one reducer to ensure total ordering, but still the same
> > issue.
> >
> > My mapper is like:
> >
> > context.write(
> >     new ImmutableBytesWritable(((Text) key).toString().getBytes()),
> >     new KeyValue(((Text) key).toString().getBytes(), "family1".getBytes(),
> >         "column1".getBytes(), getValueBytes()));
> >
> > Please help me investigate this.
> >
> > Thanks,
> > Murali Krishna
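For reference, the mapper described in the original post, in standalone form (a sketch, not the poster's exact code: the Text value type is an assumption, and the unshown getValueBytes() helper is replaced with the map value's bytes):

    import java.io.IOException;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits the row key plus one KeyValue for family1:column1, the shape
    // HFileOutputFormat consumes; the single-reducer experiment in the
    // thread was to guarantee total ordering of the output keys.
    public class BulkLoadMapper
        extends Mapper<Text, Text, ImmutableBytesWritable, KeyValue> {

      @Override
      protected void map(Text key, Text value, Context context)
          throws IOException, InterruptedException {
        byte[] row = key.toString().getBytes();
        context.write(
            new ImmutableBytesWritable(row),
            // The original post used an unshown getValueBytes() helper for
            // the value; the map value's bytes stand in for it here.
            new KeyValue(row, "family1".getBytes(), "column1".getBytes(),
                value.toString().getBytes()));
      }
    }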
