If a node is restarted, it is not moved, no.  That's not how it works.

On Mon, Oct 17, 2016 at 12:01 PM Vladimir Yudovin <vla...@winguzone.com>
wrote:

> But after such a restart, the node should rejoin the cluster and restore
> its data, right?
>
> Best regards, Vladimir Yudovin,
>
>
> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
> Azure and SoftLayer. Launch your cluster in minutes.*
>
>
> ---- On Mon, 17 Oct 2016 14:55:49 -0400 *Jonathan Haddad
> <j...@jonhaddad.com>* wrote ----
>
> Vladimir,
>
> *Most* people running Cassandra are doing so using ephemeral disks.
> Instances are not arbitrarily moved to different hosts.  Yes, instances can
> be shut down, but that's why you distribute across AZs.
>
> On Mon, Oct 17, 2016 at 11:48 AM Vladimir Yudovin <vla...@winguzone.com>
> wrote:
>
>
> It's extremely unreliable to use ephemeral (local) disks. Even if you
> don't stop the instance yourself, it can be restarted on a different
> server in case of a hardware failure or an AWS-initiated update, and then
> all node data will be lost.
>
> Best regards, Vladimir Yudovin,
>
> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
> Azure and SoftLayer. Launch your cluster in minutes.*
>
>
> ---- On Mon, 17 Oct 2016 14:45:00 -0400 *Seth Edwards
> <s...@pubnub.com>* wrote ----
>
> These are i2.2xlarge instances, so the disks are currently configured as
> dedicated ephemeral disks.
>
> On Mon, Oct 17, 2016 at 11:34 AM, Laing, Michael
> <michael.la...@nytimes.com> wrote:
>
> You could just expand the size of your EBS volume and extend the file
> system. No data is lost - assuming you are running Linux.
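>
> As a minimal sketch of those two steps, assuming Linux with an ext4
> filesystem on a hypothetical device /dev/xvdf (device name, partition
> layout, and filesystem type are assumptions - adapt to your setup), once
> the EBS volume itself has been enlarged:
>
>     # grow partition 1 to fill the enlarged device (growpart ships in cloud-utils)
>     sudo growpart /dev/xvdf 1
>     # extend the ext4 filesystem online to fill the partition
>     sudo resize2fs /dev/xvdf1
>
> If the filesystem sits directly on the device with no partition table,
> running resize2fs against the device itself is enough.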
>
>
> On Monday, October 17, 2016, Seth Edwards <s...@pubnub.com> wrote:
>
> We're running 2.0.16. We're migrating to a new data model, but we've had
> an unexpected increase in write traffic that has caused us some capacity
> issues when we encounter compactions. Our old data model is on STCS. We'd
> like to add another EBS volume (we're on AWS) to our JBOD config and
> hopefully avoid any situation where we run out of disk space during a
> large compaction. It appears that the behavior we are hoping for is
> actually considered undesirable and was removed in 3.2. It still might be
> an option for us until we can finish the migration.
>
> I'm not familiar with LVM, so it may be a bit risky to try at this point.
>
> On Mon, Oct 17, 2016 at 9:42 AM, Yabin Meng <yabinm...@gmail.com> wrote:
>
> I assume you're talking about a Cassandra JBOD (just a bunch of disks)
> setup, since you mention adding the disk to the list of data directories.
> If that's the case, you may run into issues, depending on your C* version.
> Check this out: http://www.datastax.com/dev/blog/improving-jbod.
>
> Another approach is to use LVM to combine multiple devices under a single
> mount point, as sketched below. If you do so, all Cassandra sees is simply
> increased disk storage space, and there should be no problem.
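>
> As a rough sketch of that LVM approach - the device names /dev/xvdf and
> /dev/xvdg and the mount point are hypothetical, so adapt them to your
> instance:
>
>     # register the raw devices as LVM physical volumes
>     sudo pvcreate /dev/xvdf /dev/xvdg
>     # pool them into a single volume group
>     sudo vgcreate cassandra_vg /dev/xvdf /dev/xvdg
>     # create one logical volume spanning all free space in the group
>     sudo lvcreate -l 100%FREE -n data cassandra_vg
>     # put a filesystem on it and mount it as the single data directory
>     sudo mkfs.ext4 /dev/cassandra_vg/data
>     sudo mount /dev/cassandra_vg/data /var/lib/cassandra/data
>
> Keep in mind that with a plain linear volume like this, losing any one
> device takes down the whole logical volume, so you give up the failure
> isolation that separate JBOD data directories provide.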
>
> Hope this helps,
>
> Yabin
>
> On Mon, Oct 17, 2016 at 11:54 AM, Vladimir Yudovin <vla...@winguzone.com>
> wrote:
>
>
> Yes, Cassandra tries to keep the percentage of disk usage equal across all
> disks. The compaction process and SSTable flushes will use the new disk to
> distribute both new and existing data.
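>
> For reference, a minimal sketch of the corresponding cassandra.yaml
> change, using /mnt/data2 as a hypothetical mount point for the new disk
> (paths are illustrative):
>
>     data_file_directories:
>         - /var/lib/cassandra/data
>         - /mnt/data2
>
> Note that Cassandra only picks up the new directory on restart.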
>
> Best regards, Vladimir Yudovin,
>
> *Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra on
> Azure and SoftLayer. Launch your cluster in minutes.*
>
>
> ---- On Mon, 17 Oct 2016 11:43:27 -0400 *Seth Edwards <s...@pubnub.com>*
> wrote ----
>
> We have a few nodes that are running out of disk capacity at the moment,
> and instead of adding more nodes to the cluster, we would like to add
> another disk to the server and add it to the list of data directories. My
> question is: will Cassandra use the new disk for compactions on SSTables
> that already exist in the primary directory?
>
> Thanks!
