You should be using the off-heap row cache option. That way you avoid GC
overhead, and the rows are stored in a compact serialized form, which means
you fit more cache entries in RAM. The trade-off is slightly more CPU for
serialization and deserialization.
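As a sketch of what that looks like in a 0.8-era cassandra-cli session (the
`Users` column family name is hypothetical, and attribute names vary slightly
between versions):

```
update column family Users with
  rows_cached = 100000 and
  row_cache_provider = 'SerializingCacheProvider';
```

SerializingCacheProvider stores cached rows off-heap in serialized form,
which is what keeps them out of the reach of the garbage collector.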

Adrian

On Sunday, September 11, 2011, aaron morton <aa...@thelastpickle.com> wrote:
> If the row cache is enabled and the row is in the cache, the read path will
> not touch the SSTables at all. Depending on the workload, I would then look
> at setting *low* memtable flush settings so that as much memory as possible
> is left for the row cache.
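A sketch of the low-flush-threshold idea, again assuming 0.8-era per-CF
cassandra-cli attributes (names changed across versions, so check `help
update column family;` in your release; `Users` is a hypothetical CF):

```
update column family Users with
  memtable_throughput = 16 and
  memtable_operations = 0.1 and
  memtable_flush_after = 60;
```

Small throughput/operations thresholds make memtables flush early, keeping
the heap free for the row cache rather than for write buffering.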
>
> Then set the row cache save settings per CF to ensure the cache is warmed
> when the node starts.
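For example, a periodic cache save could be configured roughly like this
(hedged: the exact attribute name differs by version, and `Users` is a
hypothetical CF):

```
update column family Users with
  row_cache_save_period = 3600;
```

With this set, the node periodically writes the cached row keys to disk and
reloads them on startup, so the cache is pre-warmed before serving reads.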
>
> The write path will still use the commit log (WAL), so you may want to
> disable it via the durable_writes setting on the keyspace.
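A minimal sketch in cassandra-cli, assuming a hypothetical keyspace name:

```
update keyspace MyKeyspace with durable_writes = false;
```

Note the obvious caveat: with durable_writes off, any data still in
memtables (not yet flushed) is lost if the node crashes, which is acceptable
only for pure-cache workloads like the one discussed here.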
>
> Hope that helps.
>
> -----------------
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 10/09/2011, at 4:38 AM, kapil nayar wrote:
>
>> Hi,
>>
>> Can we configure some column families (or keyspaces) in Cassandra to
>> perform as a pure in-memory cache?
>>
>> The feature should let the memtables always stay in memory (never flushed
>> to disk as SSTables).
>> The memtable flush thresholds for time/memory/operations could be set to
>> their maximum values to achieve this.
>>
>> However, it seems an uneven distribution of keys across the nodes in the
>> cluster could lead to a Java OutOfMemoryError. To prevent this error, can
>> we overflow some entries to disk?
>>
>> Thanks,
>> Kapil
>
>
