[
https://issues.apache.org/jira/browse/HBASE-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434444#comment-15434444
]
Anoop Sam John commented on HBASE-16460:
----------------------------------------
{code}
Iterator<BlockCacheKey> iterator = map.keySet().iterator();
while (iterator.hasNext()) {
  BlockCacheKey key = iterator.next();
  BucketEntry entry = map.get(key);
{code}
This is not an efficient way. You can iterate over the entry set instead, so that
you avoid the map.get() call for every key; FindBugs will also flag this pattern.
Also, rather than logging in many places and so possibly producing many log lines,
we can collect all such failed keys into a List and log them once at the end
(if the list is non-empty).
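A minimal sketch of the suggested pattern, using hypothetical String/Integer stand-ins and a hypothetical restore() check in place of the real BlockCacheKey/BucketEntry retrieval logic:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EntrySetScan {
    // Hypothetical stand-in for restoring one cache entry; returns false on failure.
    static boolean restore(String key, Integer entry) {
        return entry >= 0;
    }

    // Single pass over entrySet(): no per-key map.get() lookup (the pattern
    // FindBugs reports as WMI_WRONG_MAP_ITERATOR). Failed keys are collected
    // and logged once at the end instead of one log line per key.
    public static List<String> rebuild(Map<String, Integer> map) {
        List<String> failedKeys = new ArrayList<>();
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            if (!restore(e.getKey(), e.getValue())) {
                failedKeys.add(e.getKey());
            }
        }
        if (!failedKeys.isEmpty()) {
            System.out.println("Failed to restore entries for keys: " + failedKeys);
        }
        return failedKeys;
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("blockA", 1);
        map.put("blockB", -1); // fails the hypothetical restore check
        map.put("blockC", 2);
        rebuild(map);
    }
}
```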
> Can't rebuild the BucketAllocator's data structures when BucketCache uses FileIOEngine
> -------------------------------------------------------------------------------------
>
> Key: HBASE-16460
> URL: https://issues.apache.org/jira/browse/HBASE-16460
> Project: HBase
> Issue Type: Bug
> Components: BucketCache
> Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3, 0.98.22
> Reporter: Guanghao Zhang
> Assignee: Guanghao Zhang
> Attachments: HBASE-16460-v1.patch, HBASE-16460-v2.patch,
> HBASE-16460-v2.patch, HBASE-16460-v3.patch, HBASE-16460.patch
>
>
> When the bucket cache uses FileIOEngine, it rebuilds the bucket allocator's
> data structures from a persisted map. So it should first read the map from the
> persistence file and then use that map to construct a new BucketAllocator. But
> the code currently performs these steps in the wrong order in the
> retrieveFromFile() method of BucketCache.java.
> {code}
> BucketAllocator allocator = new BucketAllocator(cacheCapacity,
> bucketSizes, backingMap, realCacheSize);
> backingMap = (ConcurrentHashMap<BlockCacheKey, BucketEntry>)
> ois.readObject();
> {code}
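> The fix is to swap the two statements so the map is deserialized before the
> allocator is constructed from it (a sketch of the intended order, not the
> actual patch):
> {code}
> backingMap = (ConcurrentHashMap<BlockCacheKey, BucketEntry>) ois.readObject();
> BucketAllocator allocator = new BucketAllocator(cacheCapacity, bucketSizes,
>     backingMap, realCacheSize);
> {code}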
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)