Take a look at HBASE-1777.
Not sure, but it might need to be addressed in 0.20.0.

This caused my whole cluster to go down: each server that picks up the
region reloads the bad column from the hlog replay, and then goes down on
the next memstore flush for that region, so the failure cascades from
server to server.

Billy




"stack" <[email protected]> wrote in message news:[email protected]...
Please file a bug Billy.

IMO this is not a critical issue since it's easy enough to work around.

Thanks,
St.Ack

On Tue, Aug 18, 2009 at 10:58 AM, Billy Pearson
<[email protected]>wrote:

Testing RC2, I found:
2009-08-18 12:54:16,572 FATAL org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Replay of hlog required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: webdata,http:\x2F\x2Fanaal-genomen.isporno.nl\x2F,1250569930062
      at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:950)
      at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:843)
      at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:241)
      at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:149)
Caused by: java.io.IOException: Key length 183108 > 65536
      at org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkKey(HFile.java:511)
      at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:479)
      at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:447)
      at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:525)
      at org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:489)
      at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:935)
      ... 3 more
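
(For reference, the root cause is a hard cap on the serialized key length
in HFile's writer, the checkKey frame in the trace above. A minimal sketch
of that kind of guard follows; the class name, constant name, and message
wording are my assumptions, only the 64 KiB limit itself comes from the log
message.)

import java.io.IOException;

/**
 * Rough sketch of the kind of guard behind "Key length 183108 > 65536".
 * The real check is HFile$Writer.checkKey (HFile.java:511 in the trace);
 * names and wording here are assumptions, not the 0.20 source.
 */
public class KeyLengthGuard {

  // 65536 bytes, per the IOException in the trace above
  static final int MAXIMUM_KEY_LENGTH = 64 * 1024;

  static void checkKey(final byte[] key) throws IOException {
    if (key.length > MAXIMUM_KEY_LENGTH) {
      throw new IOException("Key length " + key.length + " > "
          + MAXIMUM_KEY_LENGTH);
    }
  }

  public static void main(String[] args) throws IOException {
    checkKey(new byte[183108]); // same length as in the trace; this throws
  }
}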


Looks like from a MR job we allow keys larger than 65536 bytes to be
accepted, but when the memstore flush happens we kill the server.
Shouldn't the client fail before the long key ever gets into memory?
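
(A client-side guard in the MR job would be straightforward. Here is a
minimal sketch that refuses a Put whose serialized key, roughly row plus
family plus qualifier plus a little fixed KeyValue overhead, would exceed
the 64 KiB cap. The table and column names, the 32-byte overhead figure,
and the putChecked helper are made up for illustration; only the limit
itself comes from the error above.)

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SafePutExample {

  // 64 KiB cap, per "Key length 183108 > 65536" in the log above
  static final int MAX_HFILE_KEY_LENGTH = 64 * 1024;

  /**
   * Reject the cell on the client if its key would blow the HFile cap.
   * The HFile key is roughly row + family + qualifier plus a small fixed
   * KeyValue overhead (timestamp, type, length fields); the 32 bytes of
   * slack used here is an assumption, not a value from the HBase source.
   */
  static void putChecked(HTable table, byte[] row, byte[] family,
      byte[] qualifier, byte[] value) throws IOException {
    int approxKeyLen = row.length + family.length + qualifier.length + 32;
    if (approxKeyLen > MAX_HFILE_KEY_LENGTH) {
      throw new IOException("Refusing put: key would be ~" + approxKeyLen
          + " bytes, over the " + MAX_HFILE_KEY_LENGTH + " byte HFile limit");
    }
    Put put = new Put(row);
    put.add(family, qualifier, value);
    table.put(put);
  }

  public static void main(String[] args) throws IOException {
    // Table and column names are hypothetical, for illustration only.
    HTable table = new HTable(new HBaseConfiguration(), "webdata");
    putChecked(table, Bytes.toBytes("http://example.com/"),
        Bytes.toBytes("anchor"), Bytes.toBytes("some-qualifier"),
        Bytes.toBytes("a value"));
  }
}

In the reducer you'd call putChecked(...) instead of table.put(...)
directly, and decide there whether to skip or truncate the offending cell.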

Billy



"stack" <[email protected]> wrote in message
news:[email protected]...

The second hbase 0.20.0 release candidate is available for download:

http://people.apache.org/~stack/hbase-0.20.0-candidate-2/

Nearly 450 issues have been addressed since the 0.19 branch. Release notes
are available here: http://su.pr/18zcEO <http://tinyurl.com/8xmyx9>.

HBase 0.20.0 runs on Hadoop 0.20.0. A lot has changed since 0.19.x,
including configuration fundamentals. Be sure to read the 'Getting Started'
documentation for 0.20.0, available here: http://su.pr/8YQjHO

If you wish to bring your 0.19.x hbase data forward to 0.20.0, you will
need to run a migration. See http://wiki.apache.org/hadoop/Hbase/HowToMigrate.
First read the overview and then go to the section 'From 0.19.x to 0.20.x'.

Should we release this candidate as hbase 0.20.0?  Please vote +1/-1 by
Wednesday, August 25th.

Yours,
The HBasistas






