Thanks for the patch, Sylvain! I recall that during the build Cassandra
re-generates the Thrift Java code (in src/) using a libthrift jar; is this
correct?

Here's my use case: 

1) Write/read ratio is close to 1:1
2) High volume of traffic and I want low read latency (e.g., < 40ms). That's
why I'm testing a build with row-level cache and mmap (I think Jonathan is
right that mmap does help with performance).
3) A row should expire if its last-modified time is too old, so we don't
have to scan all keys to clean up old items. So yes, if you write to a row,
its last-modified time should be updated as well.
4) (Nice to have) support for range scans (key iteration) with RP
(RandomPartitioner).

So ideally a row should have a "last modified time" field. Alternatively, I
can use one column to record the last-modified time (this means each write
to a row will be followed by another write to update the last-modified
column, which is kind of ugly; see the sketch just below). For the simplest
case: suppose each row has just one ExpiringColumn. Will the row be deleted
automatically once it has no columns left? Does it make sense for Cassandra
to keep a row without any columns?
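To make sure I'm describing the workaround clearly, here is roughly what the
"extra column" approach would look like from a client. This is only a sketch
against the 0.5-era Thrift interface; the keyspace/column family/column
names are made up, and the generated package/class names may differ in your
build:

  import org.apache.cassandra.service.Cassandra;        // 0.5 generated code
  import org.apache.cassandra.service.ColumnPath;
  import org.apache.cassandra.service.ConsistencyLevel;
  import org.apache.thrift.protocol.TBinaryProtocol;
  import org.apache.thrift.transport.TSocket;
  import org.apache.thrift.transport.TTransport;

  public class LastModifiedWrite {
      public static void main(String[] args) throws Exception {
          TTransport transport = new TSocket("localhost", 9160);
          Cassandra.Client client =
              new Cassandra.Client(new TBinaryProtocol(transport));
          transport.open();

          String keyspace = "Keyspace1";      // placeholder names
          String rowKey   = "some-row-key";
          long ts = System.currentTimeMillis();

          // 1) the real write
          ColumnPath dataPath = new ColumnPath();
          dataPath.column_family = "Standard1";
          dataPath.column = "data".getBytes("UTF-8");
          client.insert(keyspace, rowKey, dataPath,
                        "payload".getBytes("UTF-8"), ts,
                        ConsistencyLevel.QUORUM);

          // 2) the bookkeeping write recording when the row was last touched
          ColumnPath lmPath = new ColumnPath();
          lmPath.column_family = "Standard1";
          lmPath.column = "last_modified".getBytes("UTF-8");
          client.insert(keyspace, rowKey, lmPath,
                        Long.toString(ts).getBytes("UTF-8"), ts,
                        ConsistencyLevel.QUORUM);

          transport.close();
      }
  }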

Please let me know whether the following plan will work:

1) Manually apply your patch to the trunk build that I use (which has
row-level cache and mmap). It would be nice if you could say a few words
about the design of your ExpiringColumn :-)
2) Find the API entry point for deleting a row, and modify the expiration
handler of ExpiringColumn (assuming there is one) to call the row-delete
method when the key has no other columns left (if that doesn't happen
already); a sketch of the row-delete call is just after this list. How do
you trigger the expiration check for an ExpiringColumn? On a read of the
column, or with a timer that scans all columns for expiration?
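For step 2, I'm guessing the row-delete entry point looks something like
this through the same Thrift client as in the sketch above (again only a
guess on my side; I haven't verified that a ColumnPath naming only the
column family removes the whole row in this build):

  // Reusing 'client', keyspace and rowKey from the earlier sketch.
  // A ColumnPath with only column_family set should address the whole row.
  ColumnPath wholeRow = new ColumnPath();
  wholeRow.column_family = "Standard1";   // no super_column / column set
  client.remove(keyspace, rowKey, wholeRow,
                System.currentTimeMillis(), ConsistencyLevel.QUORUM);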

Thanks,

-Weijun




-----Original Message-----
From: Sylvain Lebresne [mailto:sylv...@yakaz.com] 
Sent: Thursday, February 25, 2010 2:23 AM
To: Weijun Li
Cc: cassandra-user@incubator.apache.org
Subject: Re: Strategy to delete/expire keys in cassandra

Hi,

> Should I just run command (in Cassandra 0.5 source folder?) like:
> patch -p1 -i 0001-Add-new-ExpiringColumn-class.patch
> for all of the five patches in your ticket?

Well, actually I lied. The patches were made for a version a little after
0.5. If you really want to try, I've attached a version of those patches
that (should) work with 0.5 (there are only the first three patches; the
fourth one is for tests, so it is not necessary per se). Apply them with
your patch command.
Still, to compile that you will have to regenerate the Thrift Java
interface (with ant gen-thrift-java), and for that you will have to install
the right svn revision of Thrift (which is libthrift-r820831 for 0.5). And
if you manage to make it work, you will have to dig into cassandra.thrift,
as the patch makes changes to it.

In the end, remember that this is not an official patch yet and it *will
not* make it into Cassandra in its current form. All I can tell you is that
I need those expiring columns for quite a few of my own use cases, and I
will do what I can to get this feature included if and when possible.

> Also, what's your opinion on extending ExpiringColumn to expire a key
> completely? Otherwise it will be difficult to track which rows in
> Cassandra are expired or old.

I'm not sure how to make full rows (or even full SuperColumns, for that
matter) expire. What if you set a row to expire after some time and then
add new columns before that expiration? Should the expiration of the row be
updated? That amounts to saying that a row expires when its last column
expires, which is almost what you already get with expiring columns.
The one thing you may want, though, is that when all the columns of a row
expire (or, to be precise, get physically deleted), the row itself is
deleted. Looking at the code, I'm not convinced this happens, and I'm not
sure why.
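To be concrete about what I mean, here is a small self-contained sketch of
the rule I would expect compaction to apply before dropping a row. The
names are mine and this is not the actual Cassandra code, just the idea:

  class RowGc {
      static final class ColumnState {
          final boolean deleted;        // is this column a tombstone?
          final long deletedAtSeconds;  // when the tombstone was written
          ColumnState(boolean deleted, long deletedAtSeconds) {
              this.deleted = deleted;
              this.deletedAtSeconds = deletedAtSeconds;
          }
      }

      // A row is only worth dropping once every column in it is a tombstone
      // old enough (past GCGraceSeconds) to be purged.
      static boolean rowCanBeDropped(Iterable<ColumnState> columns,
                                     long gcBeforeSeconds) {
          for (ColumnState c : columns) {
              if (!c.deleted)
                  return false;                 // still a live column
              if (c.deletedAtSeconds >= gcBeforeSeconds)
                  return false;                 // tombstone too recent to purge
          }
          return true;  // nothing live left and every tombstone is old enough
      }
  }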

--
Sylvain

>
>
>
> From: Weijun Li [mailto:weiju...@gmail.com]
> Sent: Tuesday, February 23, 2010 6:18 PM
> To: cassandra-user@incubator.apache.org
> Subject: Re: Strategy to delete/expire keys in cassandra
>
>
>
> Thanks for the answer. A dumb question: how did you apply the patch file
> to the 0.5 source? The link you gave doesn't mention that the patch is
> for 0.5.
>
> Also, this ExpiringColumn feature doesn't seem to expire keys/rows,
> meaning the number of keys will keep growing (even if you drop all their
> columns) unless you delete them. In your case, how do you manage
> deleting/expiring keys from Cassandra? Do you keep a list of keys
> somewhere and go through them once in a while?
>
> Thanks,
>
> -Weijun
>
> On Tue, Feb 23, 2010 at 2:26 AM, Sylvain Lebresne <sylv...@yakaz.com> wrote:
>
> Hi,
>
> Maybe the following ticket/patch may be what you are looking for:
> https://issues.apache.org/jira/browse/CASSANDRA-699
>
> It's flagged for 0.7, but since it breaks the API (and, if I understand
> the release plan correctly), it may not make it into Cassandra before 0.8
> (and the patch will have to change to accommodate the changes that will
> be made to the internals in 0.7).
> 
> Anyway, what I can at least tell you is that I'm using the patch against
> 0.5 in a test cluster without problems so far.
>
>> 3) Once keys are deleted, do you have to wait until the next GC to clean
>> them from disk or memory (assuming you don't run cleanup manually)?
>> What's Cassandra's strategy for handling deleted items (notify other
>> replica nodes, clean up memory/disk, defrag/rebuild disk files, rebuild
>> bloom filters, etc.)? I'm asking because if the keys refresh very fast
>> (i.e., high-volume writes/reads and a fairly short expiration), how will
>> the data file grow and how does this impact system performance?
>
> Items are deleted only during compaction, and you may actually have to
> wait for GCGraceSeconds before deletion. This value is configurable in
> storage-conf.xml and is 10 days by default. You can decrease this value,
> but because of consistency (and the fact that you have to at least wait
> for a compaction to occur) you will always have a delay before the actual
> delete (all of this is also true for the patch I mentioned above, by the
> way). But once something is deleted, the items are simply skipped during
> compaction, so it's really cheap.
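> For reference, this is the knob in storage-conf.xml (864000 seconds is
> the 10-day default I mentioned); lower it only if you understand the
> consistency implications:
>
>   <GCGraceSeconds>864000</GCGraceSeconds>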
>
> --
> Sylvain
>
>
