Can you tell us more about your off-heap cache solution? Have you seen https://issues.apache.org/jira/browse/HBASE-7404?
Thanks

On Jan 21, 2014, at 11:51 AM, Dean <[email protected]> wrote:

> Hi,
>
> We recently upgraded our production cluster from 0.92.1 to 0.94.6 (CDH
> 4.5.0). While running 0.92.1, we used the "experimental" off-heap block
> cache with good results, but it appears broken and unusable in 0.94.6. We
> seem to have bumped into HBASE-6048 or HBASE-7136, though we hit the error
> with increments rather than scans.
>
> Is it just that nobody uses the off-heap cache, or is there a particular RS
> configuration or table schema that causes issues? We found that the
> off-heap cache was really useful in allowing us to cache the blocks we
> needed without resorting to large heap sizes or thrashing the OS filesystem
> cache during scans.
>
> Cheers,
>
> Dean
>
> The stack trace is:
>
> Mon Jan 20 21:31:37 GMT 2014,
> org.apache.hadoop.hbase.client.HTable$7@37e687d1, java.io.IOException:
> java.io.IOException: java.lang.IllegalStateException: Schema metrics
> requested before table/CF name initialization:
> {"tableName":"null","cfName":"null"}
>   at org.apache.hadoop.hbase.regionserver.metrics.SchemaConfigured.getSchemaMetrics(SchemaConfigured.java:180)
>   at org.apache.hadoop.hbase.io.hfile.LruBlockCache.updateSizeMetrics(LruBlockCache.java:337)
>   at org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:292)
>   at org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:320)
>   at org.apache.hadoop.hbase.io.hfile.DoubleBlockCache.getBlock(DoubleBlockCache.java:102)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:303)
>   at org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:480)
>   at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:530)
>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:236)
>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:161)
>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:349)
>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:355)
>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:277)
>   at org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:543)
>   at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:411)
>   at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:143)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3867)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3939)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3810)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3791)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:3834)
>   at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:4760)
>   at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5202)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:3532)
>   at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1428)
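For anyone following along: the trace shows DoubleBlockCache in the read path, which is the wrapper HBase 0.92/0.94 used when the off-heap (slab) cache was turned on. As a rough sketch of how that mode was typically enabled (the value below is illustrative, not taken from Dean's cluster):

```xml
<!-- hbase-site.xml (0.92/0.94 era): a value > 0 activates the off-heap
     slab cache; it is interpreted as the fraction of the JVM's direct
     memory (-XX:MaxDirectMemorySize) given to the cache.
     0.8 here is just an example figure. -->
<property>
  <name>hbase.offheapcache.percentage</name>
  <value>0.8</value>
</property>
```

The direct-memory ceiling itself would be set on the RegionServer JVM in hbase-env.sh via the standard `-XX:MaxDirectMemorySize` flag. Note that this slab-cache path was later deprecated in favor of the BucketCache from HBASE-7404 mentioned above.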
