Yep.. the only custom code I have is filters, which are deployed along with HBase 0.90.
You can browse the CVS code at http://bizosyshsearch.sourceforge.net/

One more finding: 0.90 seems slower than 0.89.

Test result: I indexed around 1 million records of Freebase location
information using HSearch (HSearch uses HBase for storing indexes). A warmed
search for the keyword "Hill" returned around 6000 matching records and 10
teasers in around 250ms. On the same test bed with 0.90 it went up to 280ms
on average. Maybe the ugly session warnings are causing it!!

However, with 0.90's Get batching for the teasers, it came down to 235ms.

Regards
Abinash

-----Original Message-----
From: Ted Dunning [mailto:[email protected]] 
Sent: Friday, January 07, 2011 9:23 PM
To: [email protected]; [email protected]
Subject: Re: java.lang.NoSuchMethodException: hbase-0.90

This is on 0.90, right?  Were you using HDFS to store your region tables?

I just ran into the same thing and looked into the
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos
method.

That method does some truly hideous reflection things without checking that
the objects involved actually are the correct type.  It also pierces the
visibility constraints on fields internal to objects by manipulating their
visibility.

Is that code really necessary?  Is there a good way to make it less
sensitive to violation of its assumptions?
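
One way to make it less brittle would be to treat the reflective lookup as an
optional fast path and fall back when the wrapped stream's class doesn't have
the expected method. A minimal sketch (not the actual HBase code; the class
and method names below are illustrative stand-ins):

```java
import java.lang.reflect.Method;

// Sketch of guarding a reflective call the way getPos() could, instead of
// assuming the wrapped stream's class exposes a getFileLength() method.
public class GuardedReflection {

    // Stand-in for a wrapped stream; the real code wraps an
    // FSDataInputStream whose concrete class varies by file system.
    public static class PlainStream {
        public long getPos() { return 42L; }
        // Deliberately no getFileLength(), mimicking
        // ChecksumFSInputChecker on a non-HDFS file system.
    }

    // Try the reflective fast path; fall back to the supplied position
    // when the target class lacks the expected method.
    public static long safeLength(Object stream, long fallback) {
        try {
            Method m = stream.getClass().getMethod("getFileLength");
            return (Long) m.invoke(stream);
        } catch (NoSuchMethodException e) {
            // Expected on file systems without the accessor: degrade
            // gracefully instead of failing the region open.
            return fallback;
        } catch (ReflectiveOperationException e) {
            return fallback;
        }
    }

    public static void main(String[] args) {
        PlainStream s = new PlainStream();
        System.out.println(safeLength(s, s.getPos())); // prints 42
    }
}
```

The same idea applies to the field-visibility hacks: probe once, cache the
result, and log a single warning rather than emitting it on every read.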

My own situation is a bit unusual since I was testing hbase on a non-HDFS
file system, but Abinash's experience makes it seem that there is something
worse going on.

On Fri, Jan 7, 2011 at 2:32 AM, Abinash Karana (Bizosys) <
[email protected]> wrote:

> 11/01/07 14:46:11 WARN wal.SequenceFileLogReader: Error while trying to get accurate file length.  Truncation / data loss may occur if RegionServers die.
> java.lang.NoSuchMethodException: org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.getFileLength()
>        at java.lang.Class.getMethod(Unknown Source)
>        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos(SequenceFileLogReader.java:107)
>        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1434)
>        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1424)
>        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1419)
>        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:57)
>        at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:158)
>        at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:576)
>        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:1848)
>        at org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:1808)
>        at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:350)
>        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2505)
>        at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2491)
>        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:262)
>        at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:94)
>        at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:151)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>        at java.lang.Thread.run(Unknown Source)
