[ https://issues.apache.org/jira/browse/HBASE-16460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430138#comment-15430138 ]

Anoop Sam John commented on HBASE-16460:
----------------------------------------

Wait a moment before committing. So until now, persistence of the cache was not 
actually taking effect even when we used a persistent file for the cache. We 
need to evaluate all the implications down the line because of this change. 
So how did it behave previously, before this fix? We would already have the 
cache file and we would continue to use it, overwriting the data there? So the 
issue was that when the RS restarts and the same regions that were on this RS 
come back to it, we still have to read from HDFS to reload the file cache 
(even though the data might already be there).
There is code like the below in the constructor:
{code}
for (int i = 0; i < bucketSizes.length; ++i) {
  if (foundLen <= bucketSizes[i]) {
    bucketSizeIndex = i;
    break;
  }
}
if (bucketSizeIndex == -1) {
  throw new BucketAllocatorException(
      "Can't match bucket size for the block with size " + foundLen);
}
{code}
This means that across an RS restart, the bucket sizes (which are configurable 
by users) must not change. If they do, there is a chance that we will not find 
a suitable bucket for a block, and so the L2 cache init will fail: we throw an 
Exception!
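To make the restart concern concrete, here is a minimal, self-contained sketch of that matching loop. The class and method names (`BucketSizeMatch`, `matchBucketSize`) and the bucket sizes are hypothetical, not HBase code or defaults; it only illustrates how a block persisted under one bucket-size config can fail to match after the config shrinks.

```java
// Hypothetical stand-in for the matching loop in BucketAllocator's constructor.
public class BucketSizeMatch {
  // Returns the index of the smallest configured bucket size that can hold
  // foundLen, or -1 when no bucket fits (which makes cache init throw).
  static int matchBucketSize(int[] bucketSizes, int foundLen) {
    for (int i = 0; i < bucketSizes.length; ++i) {
      if (foundLen <= bucketSizes[i]) {
        return i;
      }
    }
    return -1;
  }

  public static void main(String[] args) {
    // A 16KB block persisted under this (hypothetical) config matches index 2.
    int[] oldSizes = {5 * 1024, 9 * 1024, 17 * 1024};
    System.out.println(matchBucketSize(oldSizes, 16 * 1024)); // 2
    // If the operator drops the largest bucket across a restart, the same
    // persisted block no longer fits, so retrieval would throw.
    int[] newSizes = {5 * 1024, 9 * 1024};
    System.out.println(matchBucketSize(newSizes, 16 * 1024)); // -1
  }
}
```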

> Can't rebuild the BucketAllocator's data structures when BucketCache use 
> FileIOEngine
> -------------------------------------------------------------------------------------
>
>                 Key: HBASE-16460
>                 URL: https://issues.apache.org/jira/browse/HBASE-16460
>             Project: HBase
>          Issue Type: Bug
>          Components: BucketCache
>    Affects Versions: 2.0.0, 1.1.6, 1.3.1, 1.2.3, 0.98.22
>            Reporter: Guanghao Zhang
>            Assignee: Guanghao Zhang
>         Attachments: HBASE-16460-v1.patch, HBASE-16460.patch
>
>
> When the bucket cache uses FileIOEngine, it rebuilds the bucket allocator's 
> data structures from a persisted map. So it should first read the map from 
> the persistence file and then use that map to construct a new 
> BucketAllocator. But currently the sequence is wrong in the 
> retrieveFromFile() method of BucketCache.java.
> {code}
> BucketAllocator allocator = new BucketAllocator(cacheCapacity, bucketSizes,
>     backingMap, realCacheSize);
> backingMap = (ConcurrentHashMap<BlockCacheKey, BucketEntry>) ois.readObject();
> {code}
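As a runnable illustration of why that ordering matters: the allocator's state is derived from the persisted map, so the map must be deserialized before the allocator is constructed. The sketch below uses simplified stand-in classes (`RetrieveOrder`, `Allocator`), not the real HBase types, to show the corrected order.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

// Stand-in demo: state must be read back BEFORE the dependent structure
// is built from it, mirroring the fix in retrieveFromFile().
public class RetrieveOrder {
  // Stand-in for BucketAllocator: its state is derived from the map passed in.
  static class Allocator {
    final int rebuiltEntries;
    Allocator(Map<String, Integer> backingMap) {
      this.rebuiltEntries = backingMap.size();
    }
  }

  public static void main(String[] args) throws Exception {
    // Persist a map, as BucketCache does on shutdown.
    Map<String, Integer> persisted = new HashMap<>();
    persisted.put("block-1", 4096);
    persisted.put("block-2", 8192);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(persisted);
    }

    // Corrected order: deserialize the map first, then build the allocator
    // from it. Reversing these two steps hands the allocator an empty map.
    try (ObjectInputStream ois =
             new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
      @SuppressWarnings("unchecked")
      Map<String, Integer> backingMap = (Map<String, Integer>) ois.readObject();
      Allocator allocator = new Allocator(backingMap);
      System.out.println(allocator.rebuiltEntries); // 2
    }
  }
}
```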



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
