TTL'd data will only be removed after gc_grace_seconds has also passed. So your
data with a 30-day TTL will still be in Cassandra for 10 more days (40 in total).
Has your data been there for longer than that? Otherwise this is expected
behaviour, and you should probably change something in your data model to avoid
scanning tombstoned data.
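
To illustrate the arithmetic (a back-of-the-envelope sketch, not Cassandra code): a cell written with a TTL expires at write time + TTL and becomes a tombstone, and compaction may only purge that tombstone once gc_grace_seconds more have elapsed. Using the values from this thread:

```python
from datetime import datetime, timedelta

# Values from the thread: home_feed uses a 30-day TTL and the
# default gc_grace_seconds of 10 days.
TTL_SECONDS = 2_592_000       # 30 days
GC_GRACE_SECONDS = 864_000    # 10 days

def earliest_purge(write_time: datetime) -> datetime:
    """A cell expires at write_time + TTL, turning into a tombstone;
    compaction may only drop that tombstone gc_grace_seconds later."""
    expired_at = write_time + timedelta(seconds=TTL_SECONDS)
    return expired_at + timedelta(seconds=GC_GRACE_SECONDS)

written = datetime(2015, 5, 1)
purgeable = earliest_purge(written)
print(purgeable)                      # 2015-06-10 00:00:00
print((purgeable - written).days)     # 40
```

So data only becomes eligible for removal 40 days after the write, and even then an SSTable containing it must actually participate in a compaction before the tombstone is physically dropped.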

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Thu, Jun 4, 2015 at 8:31 PM, Aiman Parvaiz <ai...@flipagram.com> wrote:

> Yeah, we don't update old data. One thing I am curious about is why we are
> running into so many tombstones with compaction happening normally. Is
> compaction not removing tombstones?
>
>
> On Thu, Jun 4, 2015 at 11:25 AM, Jonathan Haddad <j...@jonhaddad.com>
> wrote:
>
>> DateTiered is fantastic if you've got time series, TTLed data.  That
>> means no updates to old data.
>>
>> On Thu, Jun 4, 2015 at 10:58 AM Aiman Parvaiz <ai...@flipagram.com>
>> wrote:
>>
>>> Hi everyone,
>>> We are running a 10 node Cassandra 2.0.9 cluster without vnodes. We are
>>> running into an issue where we are reading too many tombstones and hence
>>> getting tons of WARN messages and some ERROR query-aborted messages.
>>>
>>> cass-prod4 2015-06-04 14:38:34,307 WARN ReadStage:1998
>>> SliceQueryFilter.collectReducedColumns - Read 46 live and 1560 tombstoned
>>> cells in ABC.home_feed (see tombstone_warn_threshold). 100 columns was
>>> requested, slices=[-], delInfo={deletedAt=-9223372036854775808,
>>> localDeletion=2147483647}
>>>
>>> cass-prod2 2015-05-31 12:55:55,331 ERROR ReadStage:1953
>>> SliceQueryFilter.collectReducedColumns - Scanned over 100000 tombstones in
>>> ABC.home_feed; query aborted (see tombstone_fail_threshold)
>>>
>>> As you can see, all of this is happening for the CF home_feed. This CF
>>> basically maintains a feed with a TTL of 2592000 seconds (30 days).
>>> gc_grace_seconds for this CF is 864000, and it uses SizeTieredCompactionStrategy.
>>>
>>> Repairs have been running regularly and automatic compactions are
>>> occurring normally too.
>>>
>>> I can definitely use some help here in how to tackle this issue.
>>>
>>> Up till now I have the following ideas:
>>>
>>> 1) I can set gc_grace_seconds to 0, run a manual compaction on this CF,
>>> and then bump gc_grace back up.
>>>
>>> 2) Set gc_grace to 0, run a manual compaction on this CF, and leave
>>> gc_grace at zero. In this case I'd have to be careful about running repairs.
>>>
>>> 3) I am also considering moving to DateTieredCompactionStrategy.
>>>
>>> What would be the best approach here for my feed use case? Any help is
>>> appreciated.
>>>
>>> Thanks
>>>
>>>
