[ https://issues.apache.org/jira/browse/HBASE-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715230#action_12715230 ]

stack commented on HBASE-1460:
------------------------------

bq. setBuffer is currently used when an already cached block is cached again. Behavior is defined in the javadocs. It's assumed this will never happen though.

I don't get the above.  If it should never happen, let's throw an assertion if it ever
does?  Looking here:

{code}
+  public void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory) {
+    CachedBlock cb = map.get(blockName);
+    if(cb != null) {
+      cb.setBuffer(buf);
+      return;
+    }
{code}

This seems odd.  Either the block is cached or it's not?  If it's already cached, why
are we doing setBuffer?
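
Something like this maybe -- just a sketch against the fields in your patch, and the
RuntimeException is only my placeholder, not a claim about the right exception type:

{code}
  public void cacheBlock(String blockName, ByteBuffer buf, boolean inMemory) {
    CachedBlock cb = map.get(blockName);
    if(cb != null) {
      // Re-caching a block that is already cached should never happen,
      // so fail loudly instead of quietly swapping in the new buffer.
      throw new RuntimeException("Cached an already cached block: " + blockName);
    }
    // ... existing insertion path ...
  }
{code}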

ConcurrentLRUBlockCacheTest is code you don't want run as part of the unit tests?
Is that why 'Test' is on the end?  This class needs a class comment saying what
it's for?  Same for CachedBlockQueueTest.

Great tests by the way.

This is not always going to work:

{code}
+    // A single eviction run should have occurred
+    assertEquals(cache.getEvictionCount(), 1);
{code}

Worse, I bet it'll work on everyone's desktop but not up on the Solaris Hudson box.  Can
it be changed so it's not time based?
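
For instance, the test could poll for the eviction instead of assuming it already
happened (sketch only; it leans on the getEvictionCount() accessor from the patch
and would sit in a test method that throws Exception):

{code}
    // Wait for the eviction pass to show up rather than assuming it has
    // already completed; bounded so a broken cache still fails the test.
    long deadline = System.currentTimeMillis() + 10000;
    while (cache.getEvictionCount() < 1 && System.currentTimeMillis() < deadline) {
      Thread.sleep(10);
    }
    assertEquals(1, cache.getEvictionCount());
{code}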

These could be final:

{code}
+  private String blockName;
+  private long size;
{code}

CachedBlock could do with a bit of javadoc.

This could be final:

+  private EvictionThread evictionThread;

Why this:

+      evictionInProgress = true;

...when you have this:

+    if(!evictionLock.tryLock()) return;
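
The tryLock already guarantees a single evicter, so the flag looks redundant unless
something else reads it.  What I'd expect is roughly this -- a sketch assuming
evictionLock is a java.util.concurrent ReentrantLock, which I'm guessing from the
tryLock call:

{code}
  void evict() {
    // tryLock() already ensures only one eviction runs at a time;
    // other callers simply return instead of blocking or setting a flag.
    if(!evictionLock.tryLock()) return;
    try {
      // ... do the eviction pass ...
    } finally {
      evictionLock.unlock();
    }
  }
{code}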

Can your stats be hooked up to metrics, as Ryan did in the first commit of the cache?

Otherwise the patch looks good to me.  When committing, should I remove the current LRU?





> Concurrent LRU Block Cache
> --------------------------
>
>                 Key: HBASE-1460
>                 URL: https://issues.apache.org/jira/browse/HBASE-1460
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Jonathan Gray
>            Assignee: Jonathan Gray
>             Fix For: 0.20.0
>
>         Attachments: HBASE-1460-v1.patch, HBASE-1460-v2.patch
>
>
> The LRU-based block cache that will be committed in HBASE-1192 is thread-safe 
> but contains a big lock on the hash map.  Under high load, the block cache 
> will be hit very heavily from a number of threads, so it needs to be built to 
> handle massive concurrency.
> This issue aims to implement a new block cache with LRU eviction, but backed 
> by a ConcurrentHashMap and a separate eviction thread.  Influence will be 
> drawn from Solr's ConcurrentLRUCache; however, there are major differences 
> because Solr treats all cached elements as equal size, whereas we depend 
> on our HeapSize interface for realistic (though approximate) heap 
> usage.
