Yes, it would be nice if you could add a parameter in storage-conf.xml to
enable write-through for the row cache. There are many cases that require
new keys to be immediately available for reads. In my case I'm thinking of
caching 30-50% of all records in memory to reduce read latency.
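Something like this, say (RowsCached is the existing attribute; the
RowCacheWriteThrough name is just a suggestion, not an existing option):

```xml
<!-- Hypothetical addition: RowsCached already exists in trunk;
     RowCacheWriteThrough is the proposed new flag. -->
<ColumnFamily Name="Standard1"
              RowsCached="40%"
              RowCacheWriteThrough="true"/>
```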

Thanks,

-Weijun

On Tue, Feb 16, 2010 at 5:17 PM, Jonathan Ellis <jbel...@gmail.com> wrote:

> On Tue, Feb 16, 2010 at 7:11 PM, Weijun Li <weiju...@gmail.com> wrote:
> > Just started to play with the row cache feature in trunk: it seems to be
> > working fine so far, except that for the RowsCached parameter you need to
> > specify a number of rows rather than a percentage (e.g., "20%" doesn't
> > work).
>
> 20% works, but it's 20% of the rows at server startup.  So on a fresh
> start that is zero.
>
> Maybe we should just get rid of the % feature...
>
> > The problem is: when you write to Cassandra it doesn't seem to put the
> > new keys in the row cache (it is said to update, rather than invalidate,
> > the entry if it is already in cache). Is it easy to implement this
> > feature?
>
> It's deliberately not done.  For many (most?) workloads you don't want
> fresh writes blowing away your read cache.  The code is in
> Table.apply:
>
>     ColumnFamily cachedRow = cfs.getRawCachedRow(mutation.key());
>     if (cachedRow != null)
>         cachedRow.addAll(columnFamily);
>
> I think it would be okay to have a WriteThrough option for what you're
> asking, though.
>
> -Jonathan
>
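The Table.apply snippet quoted above only merges a write into a row that is
already cached; a write-through option would also populate the cache on a
miss. A minimal sketch of that semantics, using a plain HashMap as a toy row
cache (all names here are illustrative, not Cassandra's actual API):

```java
import java.util.HashMap;
import java.util.Map;

public class RowCacheSketch {
    // Toy stand-in: a "row" is a map of column name -> value.
    static final Map<String, Map<String, String>> rowCache = new HashMap<>();

    static void apply(String key, Map<String, String> columnFamily,
                      boolean writeThrough) {
        Map<String, String> cachedRow = rowCache.get(key);
        if (cachedRow != null) {
            // Existing behavior: merge the mutation into the cached row.
            cachedRow.putAll(columnFamily);
        } else if (writeThrough) {
            // Proposed behavior: cache fresh writes on a miss.
            rowCache.put(key, new HashMap<>(columnFamily));
        }
        // Otherwise: a miss leaves the cache untouched, so fresh writes
        // cannot evict hot read entries.
    }

    public static void main(String[] args) {
        Map<String, String> cols = new HashMap<>();
        cols.put("name", "weijun");
        apply("key1", cols, true);
        System.out.println(rowCache.containsKey("key1")); // cached via write-through
        apply("key2", cols, false);
        System.out.println(rowCache.containsKey("key2")); // miss stays uncached
    }
}
```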
