Yes, there was a concurrent compaction happening.  That was the cause of the
scanner reset, and so the flush finally ended up seeking/calling next() on the
encoded blocks of those files through the StoreFileScanner.
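
To make that concrete, below is a minimal, self-contained sketch of the pattern
in plain Java (FlushScanner, updateReaders and the scanner classes are made-up
stand-ins, not the real HBase types). A flush scanner that is also registered
for reader changes rebuilds its heap when a compaction swaps the store files,
and from then on its next() calls go through a store-file scanner.

import java.util.ArrayList;
import java.util.List;

interface SimpleScanner {
    String next();
}

class MemstoreOnlyScanner implements SimpleScanner {
    public String next() { return "kv-from-memstore"; }
}

class StoreFileBackedScanner implements SimpleScanner {
    // Stands in for a scanner that has to seek/next through encoded HFile blocks.
    public String next() { return "kv-from-hfile-block"; }
}

// Stand-in for the flush-time StoreScanner that also listens for reader changes.
class FlushScanner implements SimpleScanner {
    private final List<SimpleScanner> heap = new ArrayList<SimpleScanner>();

    FlushScanner() {
        heap.add(new MemstoreOnlyScanner());   // intended: read the memstore snapshot only
    }

    // Called when the store's set of HFiles changes, e.g. after a compaction.
    void updateReaders(List<SimpleScanner> fileScanners) {
        heap.clear();
        heap.add(new MemstoreOnlyScanner());
        heap.addAll(fileScanners);             // the flush now also reads HFiles
    }

    public String next() {
        return heap.get(heap.size() - 1).next();
    }
}

public class FlushScannerResetSketch {
    public static void main(String[] args) {
        FlushScanner flush = new FlushScanner();
        System.out.println(flush.next());      // kv-from-memstore

        // A concurrent compaction swaps the store files and notifies observers.
        List<SimpleScanner> afterCompaction = new ArrayList<SimpleScanner>();
        afterCompaction.add(new StoreFileBackedScanner());
        flush.updateReaders(afterCompaction);

        System.out.println(flush.next());      // kv-from-hfile-block
    }
}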

Adding the trace below to show how a memstore flusher ended up trying to read
an HFile.

org.apache.hadoop.hbase.DroppedSnapshotException: region: usertable,user5152654437639860133,1391056599393.654e89edf63813d2120e9d287afff889.
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1694)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1556)
        at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1471)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:456)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:430)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:66)
        at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:248)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.IndexOutOfBoundsException: index (16161) must be less than size (7)
        at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:305)
        at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:284)
        at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.get(LRUDictionary.java:139)
        at org.apache.hadoop.hbase.io.util.LRUDictionary$BidirectionalLRUMap.access$000(LRUDictionary.java:76)
        at org.apache.hadoop.hbase.io.util.LRUDictionary.getEntry(LRUDictionary.java:43)
        at org.apache.hadoop.hbase.io.TagCompressionContext.uncompressTags(TagCompressionContext.java:159)
        at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.decodeTags(BufferedDataBlockEncoder.java:273)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decode(FastDiffDeltaEncoder.java:522)
        at org.apache.hadoop.hbase.io.encoding.FastDiffDeltaEncoder$1.decodeNext(FastDiffDeltaEncoder.java:540)
        at org.apache.hadoop.hbase.io.encoding.BufferedDataBlockEncoder$BufferedEncodedSeeker.next(BufferedDataBlockEncoder.java:262)
        at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.next(HFileReaderV2.java:1063)
        at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:137)
        at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:108)
        at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:509)
        at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:128)
        at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:73)
        at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:786)
        at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:1943)
        at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1669)
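
For reference, here is a minimal sketch of what the bottom of that trace
amounts to (plain Java plus Guava; MiniDictionary is a made-up stand-in, not
HBase's LRUDictionary). The index decoded from the tag stream is far larger
than the handful of entries the dictionary actually holds, so the
element-index precondition fails with exactly this kind of message.

import com.google.common.base.Preconditions;

public class StaleDictionaryIndexSketch {

    // Made-up stand-in for a tiny tag dictionary; only the bounds check mirrors
    // what the trace shows (LRUDictionary delegating to checkElementIndex).
    static class MiniDictionary {
        private final byte[][] entries;

        MiniDictionary(int size) {
            this.entries = new byte[size][];
        }

        byte[] get(int index) {
            // Produces the style of message seen above:
            // "index (16161) must be less than size (7)"
            Preconditions.checkElementIndex(index, entries.length);
            return entries[index];
        }
    }

    public static void main(String[] args) {
        MiniDictionary dict = new MiniDictionary(7);   // only 7 entries, as in the trace
        int decodedIndex = 16161;                      // index read from the encoded tag stream
        dict.get(decodedIndex);                        // throws IndexOutOfBoundsException
    }
}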



On Fri, Jan 31, 2014 at 11:17 AM, lars hofhansl <la...@apache.org> wrote:

> Interesting. Did you see the cause for the scanner reset? Was it a
> concurrent compaction?
>
>
>
> ----- Original Message -----
> From: ramkrishna vasudevan <ramkrishna.s.vasude...@gmail.com>
> To: "dev@hbase.apache.org" <dev@hbase.apache.org>; lars hofhansl <
> la...@apache.org>
> Cc:
> Sent: Thursday, January 30, 2014 9:41 PM
> Subject: Re: StoreScanner created for memstore flush should be bothered
> about updated readers?
>
> >> The scanner stack is only reset if the set of HFiles for this store
> >> changes, i.e. a compaction or a concurrent flush (when using multithreaded
> >> flushing). It seems that would be relatively rare.
> In our test scenario this happens.  While trying to find the root cause of
> HBASE-10443, I hit this issue. It is not directly related to the flush
> scenario, but I found it while debugging that.
> I was not trying to improve performance here, but the fact that we are
> updating the kv heap does make the flush read those HFiles on a
> StoreScanner.next() call, and that is expensive.
>
> Regards
> Ram
>
>
>
>
>
>
> On Fri, Jan 31, 2014 at 11:02 AM, lars hofhansl <la...@apache.org> wrote:
>
> > What I found is that the main performance detriment comes from the fact
> > that we need to take a lock for each next/peek call of the StoreScanner.
> > Even when those locks are uncontended (which they are in 99.9% of the
> > cases), the memory read/write barriers are expensive.
> > I doubt you'll see much improvement from this. The scanner stack is only
> > reset if the set of HFiles for this store changes, i.e. a compaction or a
> > concurrent flush (when using multithreaded flushing). It seems that would
> > be relatively rare.
> >
> > If anything we could add a class like StoreScanner that does not need to
> > synchronize any of its calls, but even there, the flush is asynchronous to
> > any user action (unless we're blocked on the number of store files, in
> > which case there is a bigger problem anyway).
> >
> >
> > Did you see a specific issue?
> >
> > -- Lars
> >
> >
> >
> > ----- Original Message -----
> > From: ramkrishna vasudevan <ramkrishna.s.vasude...@gmail.com>
> > To: "dev@hbase.apache.org" <dev@hbase.apache.org>
> > Cc:
> > Sent: Thursday, January 30, 2014 11:48 AM
> > Subject: StoreScanner created for memstore flush should be bothered about
> > updated readers?
> >
> > Hi All
> >
> > In the case of a flush we create a memstore flusher, which in turn creates
> > a StoreScanner backed by a singleton MemstoreScanner.
> >
> > But this scanner also registers for any updates to the readers in the
> > HStore.  Is this needed?
> > If this happens, then any update to the readers may nullify the current
> > heap and the entire scanner stack is reset, but this time with the other
> > scanners for all the files that satisfy the last top key.  So the flush
> > that happens on the memstore also holds the storefile scanners in the
> > heap that was recreated, though originally the intention was to create a
> > scanner on the memstore alone.
> >
> > Am I missing something here?  Or is what I observed right?  If so, then I
> > feel that this step can be avoided.
> >
> > Regards
> > Ram
> >
> >
>
>
