Re: Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
due to "flighing in mutations" > during merkle tree calculation. > > 2017-02-26 20:41 GMT+01:00 Seth Edwards <s...@pubnub.com>: > >> Hello, >> >> We just ran a repair on a keyspace using TWCS and a mixture of TTLs .This >> caused a large proli

Understanding of proliferation of sstables during a repair

2017-02-26 Thread Seth Edwards
Hello, We just ran a repair on a keyspace using TWCS and a mixture of TTLs. This caused a large proliferation of sstables and compactions. There is likely a lot of entropy in this keyspace. I am trying to better understand why this is. I've also read that you may not want to run repairs on short…
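For reference: a common mitigation discussed for this situation is a partitioner-range repair, which repairs each range once rather than once per replica and so limits the duplicate SSTables created by streamed-in data. A minimal sketch, assuming a keyspace named my_keyspace:

    # Repair only the ranges this node owns as primary; run on each node in turn
    nodetool repair -pr my_keyspace

    # Watch the resulting compaction backlog drain
    nodetool compactionstats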

Re: Question about compaction strategy changes

2016-10-24 Thread Seth Edwards
… capacity, you may want to consider dropping concurrent compactors down so fewer compaction tasks run at the same time. That will translate proportionally to the amount of extra disk you have consumed by compaction in a TWCS setting. *From:* Seth Edwards …
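A sketch of the knob being described, with illustrative values (not taken from the thread), in cassandra.yaml:

    # Cap the number of compaction tasks that run simultaneously
    concurrent_compactors: 2

Compaction I/O can also be throttled at runtime without a restart, e.g. nodetool setcompactionthroughput 16 (in MB/s; 0 means unthrottled).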

Re: Question about compaction strategy changes

2016-10-23 Thread Seth Edwards
… could jump into the thousands and we end up being short of a few hundred GB of disk space. On Sun, Oct 23, 2016 at 5:49 PM, kurt Greaves <k...@instaclustr.com> wrote: > On 22 October 2016 at 03:37, Seth Edwards <s...@pubnub.com> wrote: >> We're using TWCS and we notice t…

Question about compaction strategy changes

2016-10-21 Thread Seth Edwards
Hello! We're using TWCS and we notice that if we make changes to the compaction window unit or size options, it seems to implicitly start recompacting all sstables. Is this indeed the case and, more importantly, does the same happen if we were to adjust the gc_grace_seconds for this table? Thanks!
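For context, the kind of change being asked about looks like this in CQL — a sketch assuming a hypothetical table my_keyspace.events and the TWCS class name as bundled since Cassandra 3.0.8/3.8 (earlier deployments used an externally packaged class):

    ALTER TABLE my_keyspace.events
      WITH compaction = {
        'class': 'TimeWindowCompactionStrategy',
        'compaction_window_unit': 'DAYS',
        'compaction_window_size': 1
      };

    -- gc_grace_seconds is a separate table property; changing it only affects
    -- when tombstones become purgeable, it does not rewrite existing SSTables.
    ALTER TABLE my_keyspace.events WITH gc_grace_seconds = 86400;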

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
… than nothing. Also, to maintain your read throughput during this whole thing, double-check the EBS volume's read_ahead_kb setting on the block volume and reduce it to something sane like 0 or 16. On Mon, 17 Oct 2016 at 13:42 Seth Edwards <s...@…
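A minimal sketch of that tuning (the device name xvdb is an assumption; substitute your actual EBS block device):

    # Check the current readahead, in KB, for the block device
    cat /sys/block/xvdb/queue/read_ahead_kb

    # Drop it to 16 KB as suggested above; takes effect immediately but
    # does not persist across reboots (add a udev rule to make it stick)
    echo 16 | sudo tee /sys/block/xvdb/queue/read_ahead_kb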

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
On Mon, 17 Oct 2016 14:45:00 -0400, Seth Edwards <s...@pubnub.com> wrote: > These are i2.2xlarge instances, so the disks are currently configured as dedicated ephemeral disks.

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
… assuming you are running Linux. On Monday, October 17, 2016, Seth Edwards <s...@pubnub.com> wrote: > We're running 2.0.16. We're migrating to a new data model but we've had an unexpected increase in write traffic that has caused us some capacity issues w…

Re: Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
… SSTable flushes will use the new disk to distribute both new and existing data. Best regards, Vladimir Yudovin, Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on Azure and SoftLayer. Launch your cluster …

Adding disk capacity to a running node

2016-10-17 Thread Seth Edwards
We have a few nodes that are running out of disk capacity at the moment and, instead of adding more nodes to the cluster, we would like to add another disk to the server and add it to the list of data directories. My question is: will Cassandra use the new disk for compactions on sstables that…
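For reference, the list being described is data_file_directories in cassandra.yaml; a sketch with assumed mount points:

    # Cassandra spreads newly written SSTables across all listed directories
    data_file_directories:
        - /var/lib/cassandra/data
        - /mnt/disk2/cassandra/data   # newly added volume

A restart is required for the new directory to be picked up.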

Re: Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I see what you are saying. So basically take whatever existing token I have and divide it by 2, give or take a couple of tokens? On Mon, Feb 9, 2015 at 5:17 PM, Robert Coli <rc...@eventbrite.com> wrote: > On Mon, Feb 9, 2015 at 4:59 PM, Seth Edwards <s...@pubnub.com> wrote: >> We are choosing …
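For context: when doubling a ring, each new node's initial_token should bisect an existing range — that is, sit midway between two neighboring tokens — rather than literally be half of a node's own token. A worked example with illustrative numbers, assuming RandomPartitioner (token space 0 to 2^127 - 1) and six evenly spaced nodes:

    existing tokens (6 nodes):   T_i = i * 2^127 / 6,   for i = 0..5
    new tokens (6 added nodes):  N_i = T_i + 2^127 / 12

Each new token lands exactly halfway between two existing tokens, so every original range is split in half and each new node takes over half of the range previously owned by the next node in the ring. With Murmur3Partitioner the token space is -2^63 to 2^63 - 1 instead, but the same midpoint logic applies.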

Question about adding nodes to a cluster

2015-02-09 Thread Seth Edwards
I am on Cassandra 1.2.19 and I am following the documentation for adding capacity to an existing cluster (http://www.datastax.com/docs/1.1/cluster_management#adding-capacity-to-an-existing-cluster). We are choosing to double our cluster from six to twelve. I ran the token generator. Based on what I…