Hi everyone,
We are running a 10-node Cassandra 2.0.9 cluster without vnodes. We are
running into an issue where reads are hitting too many tombstones, so we
are getting tons of WARN messages and some ERROR "query aborted" entries.

cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
localDeletion=2147483647}

cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
SliceQueryFilter.collectReducedColumns - Scanned over 100000 tombstones in
ABC.home_feed; query aborted (see tombstone_fail_threshold)
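
For reference, I believe these are the thresholds set in cassandra.yaml;
we are still on what I understand to be the 2.0 defaults:

  # cassandra.yaml
  tombstone_warn_threshold: 1000
  tombstone_failure_threshold: 100000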

As you can see, all of this is happening for the CF home_feed. This CF
basically maintains a feed, with a TTL of 2592000 seconds (30 days) on
every write. gc_grace_seconds for this CF is 864000 (10 days), and it uses
SizeTieredCompactionStrategy.
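
For context, the relevant parts of the schema look roughly like this
(column definitions elided; we apply the TTL on each write, which behaves
the same as the table's default_time_to_live would):

  CREATE TABLE home_feed ( ... )
    WITH gc_grace_seconds = 864000
    AND compaction = {'class': 'SizeTieredCompactionStrategy'};

  INSERT INTO home_feed ( ... ) VALUES ( ... ) USING TTL 2592000;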

Repairs have been running regularly and automatic compactions are occurring
normally too.

I could definitely use some help figuring out how to tackle this issue.

So far I have the following ideas:

1) Set gc_grace_seconds to 0, run a manual compaction on this CF, then
bump gc_grace_seconds back up (rough commands sketched after this list).

2) Set gc_grace_seconds to 0, run a manual compaction on this CF, and
leave gc_grace_seconds at zero. In this case we would have to be careful
with repairs, since tombstones could be collected before they reach all
replicas and a repair could then resurrect deleted data.

3) I am also considering moving to DateTieredCompactionStrategy (see the
sketch after this list).
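
For options 1 and 2, I imagine the commands would be roughly the following
(ALTERs in cqlsh, then nodetool on each node):

  ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 0;

  nodetool compact ABC home_feed

and, for option 1 only, afterwards:

  ALTER TABLE ABC.home_feed WITH gc_grace_seconds = 864000;
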
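
For option 3, the change itself would just be:

  ALTER TABLE ABC.home_feed
    WITH compaction = {'class': 'DateTieredCompactionStrategy'};

though if I understand correctly, DateTieredCompactionStrategy only
shipped in 2.0.11, so we would need to upgrade from 2.0.9 first.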

What would be the best approach here for my feed use case? Any help is
appreciated.

Thanks
