I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
gce-us-east1. I increased the replication factor of gce-us-central1 from 1
to 2. Then I ran 'nodetool repair -dc gce-us-central1'. The "Owns" for
the node switched to 100% as it should, but the Load showed that it didn't actually
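For anyone hitting the same symptom, a hedged sketch of how to compare token ownership against on-disk data after a replication change ('my_keyspace' is a placeholder; note that on 2.2+/3.x a plain 'nodetool repair' runs an incremental repair, so a full repair is usually what you want after raising RF):

```
# The Owns column is only meaningful per-keyspace:
nodetool status my_keyspace        # 'my_keyspace' is a placeholder

# Full repair of the DC whose RF was raised:
nodetool repair -full -dc gce-us-central1 my_keyspace
```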
.211.55.8:9042...
> INFO 13:41:18 Binding thrift service to /10.211.55.8:9160
> INFO 13:41:18 Listening for thrift clients...
>
>
The error I am getting is this:

$ ./sstableloader -d 10.211.55.8 -f ../conf/cassandra.yaml -v ~/Downloads/ams0002-cassandra-20160523-1035/var/lib/cassandra/data/Titan/edgestore-8bcd2300d0d011e5a3ab233f92747e94/
objc[18941]: Class JavaLaunchHelper is implemented in both
/Library/Java/JavaVirtualMachines/jdk1.8.0_77
If you remove one node at a time, you’ll eventually end up with a single node in
the DC you’re decommissioning which will own all of the data, and you’ll likely
overwhelm that node.
It’s typically recommended that you ALTER the keyspace, remove the replication
settings for that DC, and then you
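The usual sequence, sketched with placeholder names (the keyspace name, the surviving DC name, and its RF are all assumptions; adjust to your schema):

```
# Drop the DC being removed from the keyspace's replication first
# ('my_keyspace', 'dc_to_keep', and the RF of 3 are placeholders):
cqlsh -e "ALTER KEYSPACE my_keyspace WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc_to_keep': 3};"

# Then decommission the nodes in the removed DC one at a time:
nodetool decommission
```

Since the keyspace no longer replicates to that DC, each decommission has nothing to stream back into the DC being torn down.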
You may also check the system.log; loaded properties are logged at node
startup.
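As a sketch of that check: since the loaded properties are logged at startup, a grep over system.log is enough. The log line and path below are fabricated stand-ins for illustration; real logs commonly live at /var/log/cassandra/system.log:

```shell
# Create a stand-in system.log with a fabricated startup line:
cat > /tmp/system.log <<'EOF'
INFO  13:40:01 Node configuration:[auto_bootstrap=false; cluster_name=Test]
EOF
# Grep the startup config dump for the property in question:
grep -n 'auto_bootstrap' /tmp/system.log
```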
2016-05-23 19:55 GMT-03:00 Jonathan Haddad:
>
> find / -name 'cassandra.yaml' -exec grep -nH auto_bootstrap {} \;
>
> On Mon, May 23, 2016 at 3:44 PM Rajath Subramanyam
Hello,
Suppose we have 2 DCs and we know that the data is correctly replicated in
both. In such a situation, is it safe to "remove" one of the DCs by simply
doing a "nodetool removenode" followed by "nodetool removenode force" for each
node in that DC (instead of doing a "nodetool decommission")?
Do you have 1 node in each DC or 2? If you're saying you have 1 node in
each DC, then an RF of 2 doesn't make sense. Can you clarify what your
setup is?
On 23 May 2016 at 19:31, Luke Jolly wrote:
> I am running 3.0.5 with 2 nodes in two DCs, gce-us-central1 and
>
Hi Cassandra users,
Is there a way to find if auto_bootstrap is set to false on a Cassandra
node if we didn't know the location of the cassandra.yaml or the cassandra
installation directory (for e.g., through means like JMX, etc) ?
Thank you !
Regards,
Rajath
find / -name 'cassandra.yaml' -exec grep -nH auto_bootstrap {} \;
On Mon, May 23, 2016 at 3:44 PM Rajath Subramanyam
wrote:
> Hi Cassandra users,
>
> Is there a way to find if auto_bootstrap is set to false on a Cassandra
> node if we didn't know the location of the
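To make the find/grep one-liner concrete, here it is run against a sample file (the yaml content and the /tmp location are stand-ins for the demo; on a real node you would search from / as in the command above):

```shell
# Create a stand-in cassandra.yaml for demonstration:
cat > /tmp/cassandra.yaml <<'EOF'
cluster_name: Test Cluster
auto_bootstrap: false
EOF
# Same find/grep as above, scoped to /tmp for the demo:
find /tmp -name 'cassandra.yaml' -exec grep -nH auto_bootstrap {} \;
```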
Hi!
I have a 2.0.13 cluster which I have just extended, and I'm now looking
into upgrading it to 2.1.
* The cleanup after the extension is partially done.
* I'm also looking into changing a few tables into Leveled Compaction
Strategy.
In the interest of speeding up things by avoiding
I'm sorry, I've just noticed I forgot to type a few words
1. Before the upgrade, always make sure you have sstables on the latest
version of the current Cassandra version you are running (by running
nodetool upgradesstables on all nodes - if they are latest, they should
return almost immediately;
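Point 1 as a command sketch (run on each node; the -a flag is optional and forces a rewrite even of sstables already on the current format):

```
# Rewrite sstables to the current on-disk format before upgrading.
# If everything is already on the latest format, the plain command
# returns almost immediately, as noted above.
nodetool upgradesstables

# Optionally force a rewrite of all sstables regardless of version:
nodetool upgradesstables -a
```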
Hi,
my understanding after reading the upgrade doc and looking through the
email lists was the following:
1. Before the upgrade, always make sure you have sstables on the latest
version of the current Cassandra version you are running (by running
nodetool upgradesstables on all nodes - if they
I remembered that Titan treats edges (and vertices?) as immutable and deletes
the entity and re-creates it on every change.
So I set the gc_grace_seconds to 0 for every table in the Titan keyspace and
ran a major compaction. However, this made the situation worse. Instead of
roughly 2’700 tcp
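For reference, the change described above amounts to something like the following. The table name 'edgestore' is borrowed from the sstableloader path earlier in this digest purely as an example; a real Titan keyspace has several tables, each of which would need the ALTER:

```
# Per-table gc_grace_seconds change, then a major compaction
# ("Titan" is quoted because the keyspace name is mixed-case):
cqlsh -e 'ALTER TABLE "Titan".edgestore WITH gc_grace_seconds = 0;'
nodetool compact Titan edgestore
```

Worth noting: gc_grace_seconds = 0 removes the window that protects against deleted data resurfacing via hints or repair, which may be related to the situation getting worse rather than better.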