From: vla...@winguzone.com
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, October 17, 2016 at 11:48 AM
To: user <user@cassandra.apache.org>
Subject: Re: Adding disk capacity to a running node

It's extremely unreliable to use ephemeral (local) disks. Even if you don't
stop the instance yourself, it can be restarted on a different server in case
of a hardware failure or an AWS-initiated update.
From: Seth Edwards <s...@pubnub.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Monday, October 17, 2016 at 2:06 PM
To: user <user@cassandra.apache.org>
Subject: Re: Adding disk capacity to a running node

Thanks for the detailed steps Be…
On Mon, 17 Oct 2016 at 12:43 Jeff Jirsa <jeff.ji...@crowdstrike.com> wrote:
>> Ephemeral is fine, you just need to have enough replicas (in enough AZs
>> and enough regions) to tolerate instances being terminated.
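The "enough replicas in enough AZs" point can be sanity-checked with quorum arithmetic. This is a sketch, not from the thread: the RF and AZ counts are example values, and the even-spread assumption reflects NetworkTopologyStrategy placing replicas across racks/AZs.

```shell
# Sketch: does QUORUM survive losing one AZ? (example values, not from the thread)
rf=3                                 # replication factor
azs=3                                # availability zones
quorum=$(( rf / 2 + 1 ))             # replicas needed for a QUORUM read/write
per_az=$(( (rf + azs - 1) / azs ))   # worst-case replicas in any single AZ
remaining=$(( rf - per_az ))         # replicas left after one AZ outage
echo "quorum=$quorum remaining=$remaining"
if [ "$remaining" -ge "$quorum" ]; then
  echo "QUORUM survives a single-AZ outage"
else
  echo "QUORUM is lost if one AZ goes down"
fi
```

With RF=3 spread over 3 AZs, losing one AZ still leaves a quorum, which is why terminated instances are tolerable on ephemeral storage.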
I've had luck using the st1 EBS type, too, for situations where reads
are rare (the commit log still needs to be on its own high IOPS
volume; I like using ephemeral storage for that).
On Mon, Oct 17, 2016 at 3:03 PM, Branton Davis
wrote:
> I doubt that's true anymore.
There are, of course, people using EBS successfully, I didn't say there
weren't and it wasn't my point. I was merely saying the reasoning to avoid
ephemeral disk because your instance is going to move between machines and
lose data is nonsense, in that they work just fine and have been heavily…
If a node is restarted it is not moved, no. That's not how it works.
On Mon, Oct 17, 2016 at 12:01 PM Vladimir Yudovin
wrote:
> But after such a restart the node should join the cluster again and restore
> its data, right?
>
> Best regards, Vladimir Yudovin,
> Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
I doubt that's true anymore. EBS volumes, while previously discouraged,
are the most flexible way to go, and are very reliable. You can attach,
detach, and snapshot them too. If you don't need provisioned IOPS, the GP2
SSDs are more cost-effective and allow you to balance IOPS with cost.
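For context on "balance IOPS with cost": gp2 baseline IOPS scale with volume size (3 IOPS per GiB, floored at 100 and, at the time, capped at 10,000). A sketch with a hypothetical volume size, not a figure from the thread:

```shell
# gp2 baseline IOPS rule of thumb: 3 IOPS/GiB, min 100, max 10000 (2016-era cap).
# size_gib is an example value.
size_gib=500
iops=$(( size_gib * 3 ))
if [ "$iops" -lt 100 ]; then iops=100; fi
if [ "$iops" -gt 10000 ]; then iops=10000; fi
echo "gp2 ${size_gib} GiB -> ~${iops} baseline IOPS"
```

So sizing a gp2 volume up buys baseline IOPS as a side effect, which is often cheaper than provisioned-IOPS (io1) volumes.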
But after such a restart the node should join the cluster again and restore its
data, right?
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
Launch your cluster in minutes.
On Mon, 17 Oct 2016 14:55:49 -0400, Jonathan Haddad <j...@jonhaddad.com> wrote:
Vladimir,
*Most* people running Cassandra are doing so using ephemeral disks.
Instances are not arbitrarily moved to different hosts. Yes, instances can
be shut down, but that's why you distribute across AZs.
On Mon, Oct 17, 2016 at 11:48 AM Vladimir Yudovin
wrote:
It's extremely unreliable to use ephemeral (local) disks. Even if you don't
stop the instance yourself, it can be restarted on a different server in case
of a hardware failure or an AWS-initiated update. In that case all node data
will be lost.
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
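Worth noting for the "all node data will be lost" case: the usual recovery (not spelled out in the thread) is to bootstrap a replacement node that takes over the dead node's token ranges and streams its data back from the surviving replicas. On Cassandra 2.x that's the `replace_address` startup flag; a sketch with a placeholder IP:

```shell
# Hypothetical recovery for a terminated node (the IP is a placeholder):
# start a new, empty node with the dead node's address so it takes over its
# token ranges and streams the data back from replicas.
# This line is typically added to cassandra-env.sh before first start.
JVM_OPTS=""   # normally inherited from cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.1.23"
echo "$JVM_OPTS"
```

The flag is removed again after the replacement finishes bootstrapping.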
These are i2.2xlarge instances, so the disks are currently configured as
dedicated ephemeral disks.
On Mon, Oct 17, 2016 at 11:34 AM, Laing, Michael
wrote:
> You could just expand the size of your ebs volume and extend the file
> system. No data is lost - assuming you are running Linux.
You could just expand the size of your ebs volume and extend the file
system. No data is lost - assuming you are running Linux.
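The expand-and-extend suggestion above can be sketched as the usual AWS sequence. Everything here is illustrative: the volume ID, device name, and mount point are placeholders, and a current awscli plus cloud-utils are assumed (in-place EBS resizing wasn't universally available in 2016).

```shell
# 1. Grow the EBS volume (recent awscli assumed; IDs and sizes are placeholders).
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 400

# 2. Grow the partition, if the filesystem sits on one (cloud-utils growpart).
sudo growpart /dev/xvdf 1

# 3. Grow the filesystem online; no data is lost.
sudo resize2fs /dev/xvdf1              # ext4
# sudo xfs_growfs /var/lib/cassandra   # xfs: pass the mount point instead
```

Cassandra keeps running throughout; only the filesystem grows underneath it.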
On Monday, October 17, 2016, Seth Edwards wrote:
> We're running 2.0.16. We're migrating to a new data model but we've had an
> unexpected increase in write traffic that has caused us some capacity
> issues when we encounter compactions.
We're running 2.0.16. We're migrating to a new data model, but we've had an
unexpected increase in write traffic that has caused us some capacity
issues when we encounter compactions. Our old data model is on STCS. We'd
like to add another EBS volume (we're on AWS) to our JBOD config and
hopefully…
I assume you're talking about a Cassandra JBOD (just a bunch of disks) setup,
because you mention adding it to the list of data directories. If
this is the case, you may run into issues, depending on your C* version.
Check this out: http://www.datastax.com/dev/blog/improving-jbod.
Or another…
Yes, Cassandra should keep the percentage of disk usage equal across all disks.
The compaction process and SSTable flushes will use the new disk to distribute
both new and existing data.
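Concretely, adding the new volume to JBOD is just another entry under `data_file_directories` in cassandra.yaml. A sketch with hypothetical paths, written to a scratch file here rather than the live config:

```shell
# Hypothetical cassandra.yaml fragment: each disk gets its own data directory,
# and Cassandra distributes sstables across all of them.
cat > /tmp/data_dirs_fragment.yaml <<'EOF'
data_file_directories:
    - /var/lib/cassandra/data        # existing volume
    - /mnt/ebs-new/cassandra/data    # newly attached EBS volume
EOF
grep -c '^    - ' /tmp/data_dirs_fragment.yaml   # counts the data directories
```

The node needs a restart to pick up the new directory, and (per the DataStax post linked above) older versions balance JBOD disks less evenly than newer ones.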
Best regards, Vladimir Yudovin,
Winguzone - Hosted Cloud Cassandra on Azure and SoftLayer.
Launch your cluster in minutes.