On Sun, Sep 27, 2015 at 11:59 PM, Erick Ramirez wrote:
> You should never run `nodetool compact` since this will result in a
> massive SSTable that will almost never get compacted out or take a very
> long time to get compacted out.
>
Respectfully disagree. There are various cases where `nodetool compact` is appropriate.
Date: Sunday, September 27, 2015 at 11:59 PM
To: "user@cassandra.apache.org", Dongfeng Lu
Subject: Re: How to remove huge files with all expired data sooner?
On Mon, Sep 28, 2015 at 2:59 AM, Erick Ramirez wrote:
> have many tables like this, and I'd like to reclaim those spaces sooner.
> What would be the best way to do it? Should I run "nodetool compact" when I
> see two large files that are 2 weeks old? Are there configuration parameters
> I can tune?
Hello,
You should never run `nodetool compact` since this will result in a massive
SSTable that will almost never get compacted out or take a very long time
to get compacted out.
You are correct that there needs to be 4 similar-sized SSTables for them to
get compacted. If you want the expired data to be purged sooner, consider
tuning the table's compaction sub-properties so that tombstone compactions
can run on individual SSTables.
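One way to act on this advice is through the table's compaction sub-properties. A hedged example follows; `my_ks.my_table` is a placeholder, and the threshold values are workload-dependent starting points, not recommendations from this thread:

```sql
-- Allow Cassandra to compact a single SSTable on its own once its
-- estimated droppable-tombstone ratio exceeds tombstone_threshold,
-- instead of waiting for 4 similar-sized SSTables to accumulate.
ALTER TABLE my_ks.my_table
WITH compaction = {
  'class': 'SizeTieredCompactionStrategy',
  'unchecked_tombstone_compaction': 'true',
  'tombstone_threshold': '0.2',
  'tombstone_compaction_interval': '86400'
};
```

With `unchecked_tombstone_compaction` enabled, the interval and threshold checks gate how aggressively single-SSTable tombstone compactions fire, so expired TTL data can be dropped without a major compaction.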
Hi, I have a table where I set TTL to only 7 days for all records, and we keep
pumping records in every day. In general, I would expect all data files for
that table to have timestamps less than, say, 8 or 9 days old, giving the system
some time to work its magic. However, I see some files more than 2 weeks old.
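The behaviour described above follows from how SizeTieredCompactionStrategy groups SSTables by size: a table is only compacted once its size bucket contains `min_threshold` (default 4) similar-sized peers. A rough sketch of that bucketing logic (not Cassandra's actual code; defaults assumed, and the real strategy has additional rules such as `min_sstable_size`):

```python
# Rough sketch of SizeTieredCompactionStrategy bucketing (assumed
# defaults: bucket_low=0.5, bucket_high=1.5, min_threshold=4).
BUCKET_LOW, BUCKET_HIGH, MIN_THRESHOLD = 0.5, 1.5, 4

def bucket_sstables(sizes):
    """Group SSTable sizes into buckets of similar-sized tables."""
    buckets = []  # each bucket: [running_average, [sizes...]]
    for size in sorted(sizes):
        for bucket in buckets:
            avg = bucket[0]
            if avg * BUCKET_LOW <= size <= avg * BUCKET_HIGH:
                bucket[1].append(size)
                bucket[0] = sum(bucket[1]) / len(bucket[1])
                break
        else:
            buckets.append([size, [size]])
    return [b[1] for b in buckets]

def compaction_candidates(sizes):
    """Only buckets with at least MIN_THRESHOLD members are eligible."""
    return [b for b in bucket_sstables(sizes) if len(b) >= MIN_THRESHOLD]

# One huge 200 GB SSTable plus a stream of small flushes: the big table
# sits alone in its bucket, so it is never selected for compaction and
# its expired data lingers on disk.
sizes_gb = [200, 1, 1, 1, 1]
print(compaction_candidates(sizes_gb))  # -> [[1, 1, 1, 1]]
```

This is why a huge SSTable (such as one produced by a major compaction) can hold expired data long past the TTL: nothing else on disk is ever close enough to its size to form a compaction bucket with it.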