[ https://issues.apache.org/jira/browse/CASSANDRA-678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12798744#action_12798744 ]

Ryan King commented on CASSANDRA-678:
-------------------------------------

It would be nice if the capacity configuration were specified as an absolute 
size rather than as a fraction of the CF size. In other words, I'd like to be 
able to configure Cassandra to "use 1GB of row cache per CF". Obviously, as 
your data set gets bigger, your cache hit ratio will go down (assuming your 
traffic distribution stays relatively uniform).
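To illustrate the suggestion, here is a hypothetical sketch of a byte-bounded (absolute-size) row cache. Nothing in it comes from the attached patches; the class name, the size-tracking fields, and the per-row byte estimates are all illustrative.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: an LRU row cache bounded by a total byte budget
// rather than by a fraction of the CF size.  Names are illustrative only.
class ByteBoundedRowCache<K, V> {
    // a cached value together with its estimated in-memory size
    static class Sized<V> {
        final V value;
        final long bytes;
        Sized(V value, long bytes) { this.value = value; this.bytes = bytes; }
    }

    private final long capacityBytes;
    private long usedBytes = 0;
    // accessOrder=true makes iteration order least-recently-used first
    private final LinkedHashMap<K, Sized<V>> map = new LinkedHashMap<>(16, 0.75f, true);

    ByteBoundedRowCache(long capacityBytes) { this.capacityBytes = capacityBytes; }

    synchronized void put(K key, V value, long bytes) {
        Sized<V> old = map.remove(key);
        if (old != null) usedBytes -= old.bytes;
        map.put(key, new Sized<>(value, bytes));
        usedBytes += bytes;
        // evict least-recently-used rows until back under the byte budget
        Iterator<Map.Entry<K, Sized<V>>> it = map.entrySet().iterator();
        while (usedBytes > capacityBytes && it.hasNext()) {
            usedBytes -= it.next().getValue().bytes;
            it.remove();
        }
    }

    synchronized V get(K key) {
        Sized<V> s = map.get(key);
        return s == null ? null : s.value;
    }
}
```

With a fixed byte budget like this, cache memory use stays constant as the data set grows; only the hit ratio degrades, which matches the behavior described above.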

Otherwise, I think this looks awesome; this will really help us. 

> row-level cache
> ---------------
>
>                 Key: CASSANDRA-678
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-678
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Jonathan Ellis
>            Priority: Minor
>             Fix For: 0.9
>
>         Attachments: 0001-basic-caching.txt, 
> 0002-clean-up-onstart-to-fix-cache-size-calculation.txt, 
> 0003-instrumentation.txt, 
> 0004-make-rowcache-keycache-configurable-per-CF.txt, 
> 0005-combine-SSTable-keycaches-into-one-at-CFS-level.-do-no.txt, 
> 0005-update-cached-rows-in-place-instead-of-invalidating-.patch
>
>
> We have a key cache but that doesn't help mitigate the expensive 
> deserialization of the actual data to return.
> Adding a row-level cache should be fairly simple using a 
> ConcurrentLinkedHashMap<String [key], ColumnFamily> structure.  (We will only 
> cache whole rows at a time, since we already know how to query those 
> in-memory.  This limits us to CFs full of narrow rows, but that is a common 
> enough use case to be worth tackling if it can be done simply enough.)
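The structure described above can be sketched minimally as follows. This is not the attached patch: it substitutes the JDK's `LinkedHashMap` (access-ordered, with `removeEldestEntry`) for the third-party `ConcurrentLinkedHashMap`, and the row-count capacity is an illustrative placeholder.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal stand-in for the proposed row cache: key -> whole deserialized row.
// The real proposal uses ConcurrentLinkedHashMap; here a synchronized,
// access-ordered LinkedHashMap shows the same LRU idea.
class RowCacheSketch {
    static <K, V> Map<K, V> newRowCache(int capacityRows) {
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // evict the least-recently-used row once over capacity
                return size() > capacityRows;
            }
        });
    }
}
```

On a read, the CF would first consult this map; on a miss it deserializes the entire row once and caches it, which is why only whole (narrow) rows are cached.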

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
