There are, of course, people using EBS successfully; I didn't say there
weren't, and that wasn't my point.  I was merely saying that the reasoning
to avoid ephemeral disks because your instance is going to move between
machines and lose data is nonsense: ephemeral disks work just fine and have
been used heavily in production Cassandra clusters for years.

On Mon, Oct 17, 2016 at 12:03 PM Branton Davis <branton.da...@spanning.com>
wrote:

> I doubt that's true anymore.  EBS volumes, while previously discouraged,
> are the most flexible way to go, and are very reliable.  You can attach,
> detach, and snapshot them too.  If you don't need provisioned IOPS, the
> GP2 SSDs are more cost-effective and let you balance IOPS against cost.
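>
> For illustration, a minimal boto3 sketch of those operations; the volume
> and instance IDs are placeholders, and you would unmount the filesystem
> on the instance before detaching:
>
>     import boto3
>
>     ec2 = boto3.client("ec2", region_name="us-east-1")
>
>     # Snapshot a data volume (IDs throughout are placeholders).
>     ec2.create_snapshot(VolumeId="vol-0abc123",
>                         Description="cassandra data backup")
>
>     # Detach from one instance and attach to another.
>     ec2.detach_volume(VolumeId="vol-0abc123")
>     ec2.attach_volume(VolumeId="vol-0abc123", InstanceId="i-0def456",
>                       Device="/dev/xvdf")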
>
> On Mon, Oct 17, 2016 at 1:55 PM, Jonathan Haddad <j...@jonhaddad.com>
> wrote:
>
> Vladimir,
>
> *Most* people running Cassandra are doing so on ephemeral disks.
> Instances are not arbitrarily moved to different hosts.  Yes, instances
> can be shut down, but that's why you distribute across AZs.
>
> On Mon, Oct 17, 2016 at 11:48 AM Vladimir Yudovin <vla...@winguzone.com>
> wrote:
>
> It's extremely unreliable to use ephemeral (local) disks. Even if you
> don't stop the instance yourself, it can be restarted on a different
> server after a hardware failure or an AWS-initiated update, and all node
> data will be lost.
>
> Best regards, Vladimir Yudovin,
>
>
> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
> Azure and SoftLayer. Launch your cluster in minutes.*
>
>
> ---- On Mon, 17 Oct 2016 14:45:00 -0400 *Seth Edwards <s...@pubnub.com>*
> wrote ----
>
> These are i2.2xlarge instances, so the disks are currently configured as
> dedicated ephemeral disks.
>
> On Mon, Oct 17, 2016 at 11:34 AM, Laing, Michael
> <michael.la...@nytimes.com> wrote:
>
> You could just expand the size of your EBS volume and extend the file
> system. No data is lost, assuming you are running Linux.
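>
> A rough sketch of that flow, assuming boto3 and an ext4 filesystem on the
> volume; the volume ID, device name, and target size are placeholders, and
> in-place resizing via modify_volume requires the newer EC2 volume APIs:
>
>     import subprocess
>
>     import boto3
>
>     ec2 = boto3.client("ec2", region_name="us-east-1")
>
>     # Grow the volume in place (new size in GiB).
>     ec2.modify_volume(VolumeId="vol-0abc123", Size=1000)
>
>     # Then, on the instance, extend the filesystem into the new space.
>     # (Use xfs_growfs instead for XFS.)
>     subprocess.check_call(["sudo", "resize2fs", "/dev/xvdf"])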
>
>
> On Monday, October 17, 2016, Seth Edwards <s...@pubnub.com> wrote:
>
> We're running 2.0.16. We're migrating to a new data model, but we've had
> an unexpected increase in write traffic that has caused us some capacity
> issues when we encounter compactions. Our old data model is on STCS. We'd
> like to add another EBS volume (we're on AWS) to our JBOD config and
> hopefully avoid any situation where we run out of disk space during a
> large compaction. It appears that the behavior we are hoping for was
> deemed undesirable and removed in 3.2. It still might be an option for us
> until we can finish the migration.
>
> I'm not familiar with LVM, so it may be a bit risky to try at this point.
>
> On Mon, Oct 17, 2016 at 9:42 AM, Yabin Meng <yabinm...@gmail.com> wrote:
>
> I assume you're talking about a Cassandra JBOD (just a bunch of disks)
> setup, since you mention adding the disk to the list of data directories.
> If that's the case, you may run into issues, depending on your C*
> version. Check this out: http://www.datastax.com/dev/blog/improving-jbod.
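>
> For reference, JBOD in Cassandra is just multiple entries under
> data_file_directories in cassandra.yaml; the second path below is a
> hypothetical mount point for the new volume:
>
>     data_file_directories:
>         - /var/lib/cassandra/data
>         - /mnt/ebs1/cassandra/data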
>
> Another approach is to use LVM to combine multiple devices under a
> single mount point. If you do so, all Cassandra sees is increased disk
> space, and there should be no problem.
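>
> A rough sketch of that LVM setup, as a Python wrapper around the standard
> tools; the device names, volume group name, and mount point are all
> hypothetical, and this must run as root on the instance:
>
>     import subprocess
>
>     DEVICES = ["/dev/xvdf", "/dev/xvdg"]  # hypothetical EBS devices
>
>     def run(cmd):
>         print("+", " ".join(cmd))
>         subprocess.check_call(cmd)
>
>     # Register the devices with LVM, pool them into one volume group,
>     # and carve a single logical volume spanning all free space.
>     for dev in DEVICES:
>         run(["pvcreate", dev])
>     run(["vgcreate", "cassandra_vg"] + DEVICES)
>     run(["lvcreate", "-l", "100%FREE", "-n", "data", "cassandra_vg"])
>
>     # One filesystem, one mount point: Cassandra just sees more space.
>     run(["mkfs.ext4", "/dev/cassandra_vg/data"])
>     run(["mount", "/dev/cassandra_vg/data", "/var/lib/cassandra/data"])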
>
> Hope this helps,
>
> Yabin
>
> On Mon, Oct 17, 2016 at 11:54 AM, Vladimir Yudovin <vla...@winguzone.com>
> wrote:
>
>
> Yes, Cassandra should keep the percentage of disk usage roughly equal
> across all disks. Compactions and SSTable flushes will use the new disk
> to distribute both new and existing data.
>
> Best regards, Vladimir Yudovin,
>
> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
> Azure and SoftLayer. Launch your cluster in minutes.*
>
>
> ---- On Mon, 17 Oct 2016 11:43:27 -0400 *Seth Edwards <s...@pubnub.com>*
> wrote ----
>
> We have a few nodes that are running out of disk capacity at the moment,
> and instead of adding more nodes to the cluster, we would like to add
> another disk to the server and add it to the list of data directories.
> My question is: will Cassandra use the new disk for compactions on
> SSTables that already exist in the primary directory?
>
> Thanks!
