Hi,
>From "nodetool status" output, it looks like the cluster is running ok. The
exception itself simply says that data streaming fails during nodetool
rebuild. This could be due to possible network hiccup. It is hard to say.
You need to do further investigation. For example, you can run
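As a hedged, generic sketch (these are common streaming checks, not
necessarily what was being suggested here), you could watch the stream
sessions and scan the logs on both the rebuilding node and its source nodes:

# Hedged sketch: common checks after a failed "nodetool rebuild" stream.
# Run on the rebuilding node and on the source nodes it streams from.

# Show active/failed streaming sessions and pending messages.
nodetool netstats

# Look for streaming errors around the time of the failure
# (log location assumes a package install; adjust the path as needed).
grep -i -E "stream|StreamSession" /var/log/cassandra/system.log | tail -50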
It is a Cassandra bug. The workaround is to change the system_distributed
keyspace's replication strategy to something like the following:
ALTER KEYSPACE system_distributed WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'};
You may see a similar problem for other
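The same kind of change may be needed for the other keyspaces that ship with
SimpleStrategy (typically system_auth and system_traces, depending on
version). A hedged sketch, with the DC names and replication factors being
placeholders for your own topology:

# Hedged sketch: adjust the DC names/RFs to match your topology and version.
cqlsh -e "ALTER KEYSPACE system_auth WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'};"
cqlsh -e "ALTER KEYSPACE system_traces WITH replication =
  {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'};"

# After changing replication, run a repair of the affected keyspaces on each
# node so the new replicas actually receive the data.
nodetool repair system_auth
nodetool repair system_traces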
Dorian, I don't think Cassandra is able to achieve what you want natively.
In short, what you want is conditional data replication.
Yabin
On Mon, Oct 3, 2016 at 1:37 PM, Dorian Hoxha wrote:
> @INDRANIL
> Please go find your own thread and don't hijack
Have you restarted Cassandra after making changes in cassandra-env.sh?
Yabin
On Mon, Oct 3, 2016 at 7:44 AM, Jean Carlo
wrote:
> OK I got the response to one of my questions. In the script
> /etc/init.d/cassandra we set the path for the heap dump by default in the
>
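For reference, a hedged sketch of how the heap-dump location is typically
wired up in a package install (exact file contents vary by version; the
snippet and paths below are examples, not necessarily your defaults):

# Hedged sketch (variable handling differs between packagings).
# cassandra-env.sh usually derives the JVM heap-dump path from
# CASSANDRA_HEAPDUMP_DIR, roughly:
#
#   if [ "x$CASSANDRA_HEAPDUMP_DIR" != "x" ]; then
#       JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=$CASSANDRA_HEAPDUMP_DIR/cassandra-$(date +%s)-pid$$.hprof"
#   fi
#
# and /etc/init.d/cassandra (or /etc/default/cassandra) may set a default
# value for CASSANDRA_HEAPDUMP_DIR. Whatever you change only takes effect
# after a restart:
sudo service cassandra restart

# Confirm which HeapDumpPath the running JVM actually picked up:
ps aux | grep -o 'HeapDumpPath=[^ ]*' | head -1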
Most likely node A has some gossip-related problems. You can try purging
the gossip state on node A, as per this procedure:
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_gossip_purge.html
.
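Roughly, that procedure boils down to the following (a hedged outline; verify
against the linked doc for your exact version before running anything):

# Hedged outline of clearing gossip state on node A.

# 1. Stop the node.
sudo service cassandra stop

# 2. Tell it not to load the saved ring state on the next start, e.g. by
#    adding this line to cassandra-env.sh (remove it again afterwards):
#      JVM_OPTS="$JVM_OPTS -Dcassandra.load_ring_state=false"

# 3. Start the node and watch it re-learn the ring from the seeds.
sudo service cassandra start
nodetool status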
Yabin
On Mon, Oct 3, 2016 at 2:38 AM, Girish Kamarthi <
Are you sure the cassandra.yaml file of the new node is correctly configured?
What are the seeds and listen_address settings on your new node and the
existing nodes?
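A hedged quick check on the new node (the path assumes a package install; the
expectations in the comments are the usual conventions):

# Hedged sketch: sanity-check the new node's cassandra.yaml.
grep -E '^(cluster_name|listen_address|rpc_address)' /etc/cassandra/cassandra.yaml
grep -A3 'seed_provider' /etc/cassandra/cassandra.yaml

# Expectations (the usual conventions):
#  - cluster_name matches the existing cluster exactly
#  - listen_address is the new node's own IP (not localhost)
#  - the seeds list contains the same existing-node IPs that the rest of
#    the cluster uses, and does NOT list only the new node itself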
Yabin
On Fri, Sep 30, 2016 at 7:56 PM, Rajath Subramanyam
wrote:
> Hello Cassandra-users,
>
> I was running some tests
With CQL data modeling, everything is called a "row". But really in CQL, a
row is just a logical concept. So if you think of "wide partition" instead
of "wide row" (a partition is determined by the hash of the
partition key), it will help the understanding a bit: one wide partition
may
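As a hedged illustration (the keyspace, table, and column names below are
made up):

# Hedged illustration: one partition key, many CQL rows per partition.
cqlsh <<'CQL'
CREATE TABLE IF NOT EXISTS demo.sensor_readings (
    sensor_id    text,        -- partition key: its hash picks the partition
    reading_time timestamp,   -- clustering column: orders rows in the partition
    value        double,
    PRIMARY KEY (sensor_id, reading_time)
);
CQL
# Every reading for a given sensor_id lands in the same (potentially very
# wide) partition; each (sensor_id, reading_time) pair is one logical CQL row.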
Most likely the issue is caused by the fact that when you move the data,
you move the system keyspace data away as well. Because the data was copied
into a different location than what C* expects,
when C* starts it cannot find the system metadata info and therefore
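If the goal is to relocate the data directory, a hedged sketch of the usual
approach (the paths are examples only; adjust to your install):

# Hedged sketch: relocating the data directory without losing the system
# keyspaces.
sudo service cassandra stop

# Move the ENTIRE data directory, system keyspaces included, keeping the
# directory layout intact.
sudo mv /var/lib/cassandra/data /new/disk/cassandra/data

# Point cassandra.yaml at the new location:
#   data_file_directories:
#       - /new/disk/cassandra/data
sudo service cassandra start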
I have seen this on other releases as well, e.g. 2.2.x. The workaround is
exactly like yours; some other system keyspaces also need similar changes.
I would say this is a benign bug.
Yabin
On Thu, Oct 20, 2016 at 4:41 PM, Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:
> thanks,
>
> This
r. We've got the nodes back in place with the
> original data, but the fear is that some data may have been moved off of
> other nodes. I think that this is very unlikely, but I'm just looking for
> confirmation.
>
>
> On Thursday, October 20, 2016, Yabin Meng <yabinm...@gmail.co
Sorry, I'm not aware of one.
On Thu, Oct 20, 2016 at 6:00 PM, Jai Bheemsen Rao Dhanwada <
jaibheem...@gmail.com> wrote:
> Thank you Yabin, is there an existing JIRA that I can refer to?
>
> On Thu, Oct 20, 2016 at 2:05 PM, Yabin Meng <yabinm...@gmail.com> wrote:
>
>
The exception you run into is expected behavior. This is because as Ben
pointed out, when you delete everything (including system schemas), C*
cluster thinks you're bootstrapping a new node. However, node2's IP is
still in gossip and this is why you see the exception.
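If the intent is to rebuild node2 from scratch at the same IP, one common
approach (a hedged sketch; double-check the flag name and procedure for your
version) is to start it with the replace-address option so the cluster treats
it as replacing its old gossip entry:

# Hedged sketch: re-bootstrapping a wiped node at its old IP address.
# On node2, before starting Cassandra, add to cassandra-env.sh
# (remove the line again once the node has fully joined):
#   JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<node2_ip>"
sudo service cassandra start

# Watch it stream data back in and rejoin the ring.
nodetool netstats
nodetool status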
I'm not clear on the reasoning
I assume you're talking about a Cassandra JBOD (just a bunch of disks) setup
because you do mention it as adding it to the list of data directories. If
this is the case, you may run into issues, depending on your C* version.
Check this out: http://www.datastax.com/dev/blog/improving-jbod.
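For reference, a hedged sketch of what a JBOD layout looks like in
cassandra.yaml (the mount points are made-up examples):

# Hedged sketch: JBOD means one data_file_directories entry per disk in
# cassandra.yaml, e.g.:
#
#   data_file_directories:
#       - /mnt/disk1/cassandra/data
#       - /mnt/disk2/cassandra/data
#       - /mnt/disk3/cassandra/data
#
# Quick look at the current setting:
grep -A5 '^data_file_directories' /etc/cassandra/cassandra.yaml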
Or another