[
https://issues.apache.org/jira/browse/HBASE-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13658581#comment-13658581
]
Enis Soztutar commented on HBASE-8547:
--------------------------------------
bq. what is the intention of this line?
That is for joining with the threads from the executor so that the JUnit
resource checker does not complain about hanging threads.
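Roughly, the cleanup in question (a minimal sketch; the class and helper names
and the timeout are made up here, not the actual test code):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class ExecutorCleanup {
  // Join the executor's worker threads after the test so the JUnit resource
  // checker does not flag them as still hanging around.
  static void shutdownAndJoin(ExecutorService executor) throws InterruptedException {
    executor.shutdown();                                  // stop accepting new tasks
    if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
      executor.shutdownNow();                             // interrupt any stragglers
    }
  }
}
{code}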
bq. should we call updateSizeMetrics(cb, true) on the existing CachedBlock
first ?
The one we are replacing is supposed to be the same block, so they should have
the same size.
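In other words, replacing the entry leaves the accounted size unchanged. A toy
model of that replace-instead-of-throw behaviour (not the real LruBlockCache
code, just the shape of the idea; names are illustrative):
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy model only: if an equivalent block is already cached, replace it
// quietly instead of throwing "Cached an already cached block". The two
// copies are the same half block, so the size accounting is unaffected.
class ToyBlockCache {
  private final ConcurrentMap<String, byte[]> map =
      new ConcurrentHashMap<String, byte[]>();

  void cacheBlock(String key, byte[] block) {
    if (map.putIfAbsent(key, block) != null) {
      map.put(key, block);  // same block, same size: no second metrics update
    }
  }
}
{code}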
> Fix java.lang.RuntimeException: Cached an already cached block
> --------------------------------------------------------------
>
> Key: HBASE-8547
> URL: https://issues.apache.org/jira/browse/HBASE-8547
> Project: HBase
> Issue Type: Bug
> Components: io, regionserver
> Reporter: Enis Soztutar
> Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.94.8, 0.95.1
>
> Attachments: hbase-8547_v1-0.94.patch, hbase-8547_v1-0.94.patch,
> hbase-8547_v1.patch, hbase-8547_v2-0.94-reduced.patch,
> hbase-8547_v2-trunk.patch
>
>
> In one test, one of the region servers received the following on 0.94.
> Note HalfStoreFileReader in the stack trace. I think the root cause is that
> after the region is split, the mid point can fall in the middle of a block
> (for store files from which the mid point is not chosen). Each half store file
> reader then tries to load that half block and put it in the block cache. Since
> an IdLock is instantiated per store file reader, the two half readers do not
> share the same IdLock instance and thus do not effectively lock against each
> other (a schematic sketch follows the stack trace below).
> {code}
> 2013-05-12 01:30:37,733 ERROR
> org.apache.hadoop.hbase.regionserver.HRegionServer:
> java.lang.RuntimeException: Cached an already cached block
> at
> org.apache.hadoop.hbase.io.hfile.LruBlockCache.cacheBlock(LruBlockCache.java:279)
> at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:353)
> at
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$BlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:254)
> at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:480)
> at
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekTo(HFileReaderV2.java:501)
> at
> org.apache.hadoop.hbase.io.HalfStoreFileReader$1.seekTo(HalfStoreFileReader.java:237)
> at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:226)
> at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:145)
> at
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.enforceSeek(StoreFileScanner.java:351)
> at
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.pollRealKV(KeyValueHeap.java:354)
> at
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
> at
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:277)
> at
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:543)
> at
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:411)
> at
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:143)
> at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:3829)
> at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:3896)
> at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3778)
> at
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3770)
> at
> org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2643)
> at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java:308)
> at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
> {code}
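> For illustration, the problematic locking pattern, schematically (getLockEntry
> and releaseLockEntry are the real IdLock methods; the reader class below is
> simplified, not actual HBase code):
> {code}
> import java.io.IOException;
> import org.apache.hadoop.hbase.util.IdLock;
>
> class HalfReaderSketch {
>   // Each reader creates its own IdLock, so two half readers locking the same
>   // block offset acquire different locks and do not exclude each other.
>   private final IdLock offsetLock = new IdLock();
>
>   void readBlock(long offset) throws IOException {
>     IdLock.Entry lockEntry = offsetLock.getLockEntry(offset);
>     try {
>       // check the block cache, read the half block, cache it -- this
>       // check-then-cache sequence races across the two readers
>     } finally {
>       offsetLock.releaseLockEntry(lockEntry);
>     }
>   }
> }
> {code}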
> I can see two possible fixes:
> # Allow this kind of rare case in LruBlockCache by not throwing an
> exception.
> # Move the lock instances to an upper layer (possibly CacheConfig), and let
> the half hfile readers share the same IdLock instance (see the sketch below).
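> A rough sketch of option 2 (the class, field, and accessor below are
> assumptions for illustration, not the committed patch):
> {code}
> import org.apache.hadoop.hbase.util.IdLock;
>
> // The single IdLock lives in an object both half readers already share,
> // e.g. CacheConfig, so locking an offset in one reader excludes the other.
> class SharedOffsetLock {
>   private final IdLock offsetLock = new IdLock();
>
>   IdLock getOffsetLock() {
>     return offsetLock;  // every reader of the same file locks on this instance
>   }
> }
> {code}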
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira