I suspect that running a cluster-wide repair interferes with TTL-based expiration. I run repair every 7 days and use a TTL of 7 days as well. Data is never deleted. The amount of data stored in Cassandra keeps growing (I have been watching it for 3 months), but it should not. If I run a manual cleanup, some data is deleted, but only about 5%. Currently there are about 3-5 times more rows than I estimate there should be.

I suspect that running repair on data with a TTL can cause one of two things:

1. The expiration-time check is ignored, so already-expired records are streamed to another node and become live again; or
2. Streamed data is propagated with the full TTL instead of the remaining one. Say the TTL is 7 days and a row has already been stored for 5 days when repair runs: it should be sent to the other node with a TTL of 2 days, not 7. (One way to check this is sketched after the list.)

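One way I imagine checking point 2 (the keyspace, table and column names below are just placeholders, not my real schema): create a small table in a throwaway keyspace with replication factor > 1, insert a row with a known TTL, run a repair a few days later, and then ask each replica how much TTL it thinks is left:

    -- hypothetical test table, not my production schema
    CREATE TABLE test_ks.ttl_check (id int PRIMARY KEY, val text);

    -- insert with a 7-day TTL (604800 seconds)
    INSERT INTO test_ks.ttl_check (id, val) VALUES (1, 'x') USING TTL 604800;

    -- a few days later, run "nodetool repair test_ks", then on each node:
    SELECT id, TTL(val) FROM test_ks.ttl_check WHERE id = 1;

If a replica reports something close to the full 604800 seconds after the repair instead of the time that was actually left, that would point to the TTL being reset during streaming.
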
Can someone test this? I cannot experiment much with the production cluster.
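
For point 1, the rough recipe I have in mind (same placeholder table as above) is to let a row expire first and see whether repair brings it back:

    -- insert a row with a very short TTL
    INSERT INTO test_ks.ttl_check (id, val) VALUES (2, 'y') USING TTL 60;

    -- wait well past 60 seconds and confirm the row is gone:
    SELECT id, val FROM test_ks.ttl_check WHERE id = 2;

    -- then run "nodetool repair test_ks" on every node and repeat the
    -- SELECT on each of them

If the row comes back after the repair, expired data really is being resurrected.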
