Thanks for the answers!
Cem
On Wed, May 29, 2013 at 1:26 AM, Robert Coli rc...@eventbrite.com wrote:
On Tue, May 28, 2013 at 2:38 PM, Bryan Talbot btal...@aeriagames.com wrote:
I think what you're asking for (efficient removal of TTL'd write-once data) is already in the works but not until 2.0 it seems.
Hi Experts,
We have a general problem with cleaning up data from the disk. I need to free the disk space after the retention period, and the customer wants to dimension the disk space based on that.
After running multiple performance tests with a TTL of 1 day, we saw that compaction couldn't keep up.
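For reference, a write carrying the 1-day TTL used in these tests looks like this in CQL (the table and column names here are hypothetical; the thread never names the actual schema):

```sql
-- Hypothetical table; 86400 seconds = 1 day, matching the test retention.
INSERT INTO events (id, value) VALUES ('k1', 'v1') USING TTL 86400;
```

Once the TTL expires, the cell becomes a tombstone that must still survive gc_grace_seconds before compaction can purge it from disk.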
You need to change the gc_grace time of the column family. It defaults to 10 days, so by default the tombstones will not go away for 10 days.
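A sketch of the change being described, assuming a hypothetical table named events (gc_grace_seconds is expressed in seconds; the 10-day default is 864000):

```sql
-- Hypothetical table name; 864000 seconds = 10 days (the default).
ALTER TABLE events WITH gc_grace_seconds = 0;
```

Lowering gc_grace_seconds is generally only safe when a missed delete cannot resurrect data, e.g. a TTL-only workload with no explicit DELETEs.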
On Tue, May 28, 2013 at 2:46 PM, cem cayiro...@gmail.com wrote:
Hi Experts,
We have a general problem with cleaning up data from the disk. I need to free ...
Thanks for the answer, but it is already set to 0 since I don't do any deletes.
Cem
On Tue, May 28, 2013 at 9:03 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
You need to change the gc_grace time of the column family. It defaults to 10 days. By default the tombstones will not go away for 10 days.
Date: Tuesday, May 28, 2013 1:45 PM
To: user@cassandra.apache.org
Subject: Re: data clean up problem
Thanks for the answer.
Sorry for the misunderstanding. I tried to say I don't send deletes.
How do you determine the slow node, client-side response latency?
-Original Message-
From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
Sent: Tuesday, May 28, 2013 1:10 PM
To: user@cassandra.apache.org
Subject: Re: data clean up problem
How much disk used on each node? We run the suggested 300G per node, as above that compactions can have trouble keeping up.
Ps. We run compactions during peak hours just fine because our client reroutes to the 2 of 3 nodes not running compactions based on seeing ...
On Tue, May 28, 2013 at 2:38 PM, Bryan Talbot btal...@aeriagames.com wrote:
I think what you're asking for (efficient removal of TTL'd write-once data) is already in the works but not until 2.0 it seems.
If your entire dataset in a keyspace or column family is deleted every [small time period], ...
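One shape the above suggestion can take, sketched with hypothetical time-bucketed table names: each bucket covers one [small time period], and an aged-out bucket is dropped wholesale, sidestepping tombstones and compaction entirely:

```sql
-- Hypothetical per-day bucket; the names are illustrative only.
CREATE TABLE events_20130528 (id text PRIMARY KEY, value text);
-- ... once the bucket falls outside the retention period:
DROP TABLE events_20130528;
```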