RE: cassandra reads are unbalanced

2015-12-02 Thread Walsh, Stephen
Very good questions.

We have reads and writes at LOCAL_ONE.
There are 2 applications (1 for each DC) that read and write at the same rate to 
their local DC.
(All reads / writes started out perfectly even and degraded over time.)

We use DCAwareRoundRobin policy

An update on the nodetool cleanup – it has helped but hasn’t balanced all nodes. 
Node 1 on DC2 is still quite high.

Node 1 (DC1)  =  1.35k(seeder)
Node 2 (DC1)  =  1.54k
Node 3 (DC1)  =  1.45k

Node 1 (DC2)  =  2.06k   (seeder)
Node 2 (DC2)  =  1.38k
Node 3 (DC2)  =  1.43k


From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: 02 December 2015 14:22
To: user@cassandra.apache.org
Subject: Re: cassandra reads are unbalanced

Which consistency level do you use for reads? ONE? Are you reading from only 
DC1 or from both DCs?
What is the LoadBalancingStrategy you have configured for your driver? 
TokenAware wrapped on DCAwareRoundRobin?





On Wed, Dec 2, 2015 at 3:36 PM, Walsh, Stephen wrote:
Hey all,

Thanks for taking the time to help.

So we have 6 cassandra nodes in 2 Data Centers.
Both Data Centers have a replication of 3 – so all nodes have all the data.

Over the last 2 days we’ve noticed that data reads / writes have shifted from 
balanced to unbalanced
(Nodetool status still shows 100% ownership on every node, with similar sizes)


For Example

We monitor the number of reads / writes of every table via the cassandra JMX 
metrics. (cassandra.db.read_count)
Over the last hour of this run

Reads
Node 1 (DC1)  =  1.79k(seeder)
Node 2 (DC1)  =  1.92k
Node 3 (DC1)  =  1.97k

Node 1 (DC2)  =  2.90k   (seeder)
Node 2 (DC2)  =  1.76k
Node 3 (DC2)  =  1.19k

As you see on DC1, everything is pretty well balanced, but on DC2 the reads 
favour Node1 over Node 3.
I ran a nodetool repair yesterday – it ran for 6 hours and, when completed, didn’t 
change the read balance.

Write levels show a similar imbalance on DC2, but not as bad as reads.

Does anyone have any suggestions on how to rebalance? I’m thinking of maybe running a 
nodetool cleanup in case some of the keys have shifted?

Regards
Stephen Walsh


This email (including any attachments) is proprietary to Aspect Software, Inc. 
and may contain information that is confidential. If you have received this 
message in error, please do not read, copy or forward this message. Please 
notify the sender immediately, delete it from your system and destroy any 
copies. You may not further disclose or distribute this email or its 
attachments.



Re: cassandra reads are unbalanced

2015-12-02 Thread DuyHai Doan
If you're using the Java driver with LOCAL_ONE and the default load
balancing strategy (TokenAware wrapped on DCAwareRoundRobin), the
driver will always select the primary replica. To change this behavior and
introduce some randomness so that non-primary replicas get a chance to
serve a read:

new TokenAwarePolicy(new DCAwareRoundRobinPolicy("local_DC"), true)

The second parameter (true) asks the TokenAware policy to "shuffle" replicas
on each request to avoid always returning the primary replica.
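
For completeness, a minimal sketch of wiring this into the driver (this assumes
the DataStax Java driver 2.x API; the contact point and DC name below are
placeholders, not values from this thread):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class ShuffledReadsExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // placeholder contact point
                .withLoadBalancingPolicy(
                        // 'true' shuffles the replicas per request so non-primary
                        // replicas also get picked to serve reads
                        new TokenAwarePolicy(new DCAwareRoundRobinPolicy("DC1"), true))
                .build();
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        cluster.close();
    }
}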

On Wed, Dec 2, 2015 at 6:44 PM, Walsh, Stephen wrote:

> Very good questions.
>
>
>
> We have reads and writes at LOCAL_ONE.
>
> There are 2 application (1 for each DC) who read and write at the same
> rate to their local DC
>
> (All reads / writes started all perfectly even and degraded over time)
>
>
>
> We use DCAwareRoundRobin policy
>
>
>
> On update on the nodetool cleanup – it has help but hasn’t balanced all
> nodes. Node 1 on DC2 is still quite high
>
>
>
> Node 1 (DC1)  =  1.35k(seeder)
>
> Node 2 (DC1)  =  1.54k
>
> Node 3 (DC1)  =  1.45k
>
>
>
> Node 1 (DC2)  =  2.06k   (seeder)
>
> Node 2 (DC2)  =  1.38k
>
> Node 3 (DC2)  =  1.43k
>
>
>
>
>
> *From:* DuyHai Doan [mailto:doanduy...@gmail.com]
> *Sent:* 02 December 2015 14:22
> *To:* user@cassandra.apache.org
> *Subject:* Re: cassandra reads are unbalanced
>
>
>
> Which Consistency level do you use for reads ? ONE ? Are you reading from
> only DC1 or from both DC ?
>
> What is the LoadBalancingStrategy you have configured for your driver ?
> TokenAware wrapped on DCAwareRoundRobin ?
>
>
>
>
>
>
>
>
>
>
>
> On Wed, Dec 2, 2015 at 3:36 PM, Walsh, Stephen wrote:
>
> Hey all,
>
>
>
> Thanks for taking the time to help.
>
>
>
> So we have 6 cassandra nodes in 2 Data Centers.
>
> Both Data Centers have a replication of 3 – so all nodes have all the data.
>
>
>
> Over the last 2 days we’ve noticed that data reads / writes has shifted
> from balanced to unbalanced
>
> (Nodetool status still shows 100% ownership on every node, with similar
> sizes)
>
>
>
>
>
> For Example
>
>
>
> We monitor the number of reads / writes of every table via the cassandra
> JMX metrics. (cassandra.db.read_count)
>
> Over the last hour of this run
>
>
>
> Reads
>
> Node 1 (DC1)  =  1.79k(seeder)
>
> Node 2 (DC1)  =  1.92k
>
> Node 3 (DC1)  =  1.97k
>
>
>
> Node 1 (DC2)  =  2.90k   (seeder)
>
> Node 2 (DC2)  =  1.76k
>
> Node 3 (DC2)  =  1.19k
>
>
>
> As you see on DC1, everything is pretty well balanced, but on DC2 the
> reads favour Node1 over Node 3.
>
> I ran a nodetool repair yesterday – ran for 6 hours and when completed
> didn’t change the read balance.
>
>
>
> Write levels are similar on  DC2, but not as bad a reads.
>
>
>
> Anyone any suggestion on how to rebalance? I’m thinking maybe running a
> nodetool cleanup in case some of the keys have shifted?
>
>
>
> Regards
>
> Stephen Walsh
>
>
>
>
>
> This email (including any attachments) is proprietary to Aspect Software,
> Inc. and may contain information that is confidential. If you have received
> this message in error, please do not read, copy or forward this message.
> Please notify the sender immediately, delete it from your system and
> destroy any copies. You may not further disclose or distribute this email
> or its attachments.
>
>


Re: cassandra reads are unbalanced

2015-12-02 Thread DuyHai Doan
Which consistency level do you use for reads? ONE? Are you reading from
only DC1 or from both DCs?

What is the LoadBalancingStrategy you have configured for your driver?
TokenAware wrapped on DCAwareRoundRobin?





On Wed, Dec 2, 2015 at 3:36 PM, Walsh, Stephen wrote:

> Hey all,
>
>
>
> Thanks for taking the time to help.
>
>
>
> So we have 6 cassandra nodes in 2 Data Centers.
>
> Both Data Centers have a replication of 3 – so all nodes have all the data.
>
>
>
> Over the last 2 days we’ve noticed that data reads / writes has shifted
> from balanced to unbalanced
>
> (Nodetool status still shows 100% ownership on every node, with similar
> sizes)
>
>
>
>
>
> For Example
>
>
>
> We monitor the number of reads / writes of every table via the cassandra
> JMX metrics. (cassandra.db.read_count)
>
> Over the last hour of this run
>
>
>
> Reads
>
> Node 1 (DC1)  =  1.79k(seeder)
>
> Node 2 (DC1)  =  1.92k
>
> Node 3 (DC1)  =  1.97k
>
>
>
> Node 1 (DC2)  =  2.90k   (seeder)
>
> Node 2 (DC2)  =  1.76k
>
> Node 3 (DC2)  =  1.19k
>
>
>
> As you see on DC1, everything is pretty well balanced, but on DC2 the
> reads favour Node1 over Node 3.
>
> I ran a nodetool repair yesterday – ran for 6 hours and when completed
> didn’t change the read balance.
>
>
>
> Write levels are similar on  DC2, but not as bad a reads.
>
>
>
> Anyone any suggestion on how to rebalance? I’m thinking maybe running a
> nodetool cleanup in case some of the keys have shifted?
>
>
>
> Regards
>
> Stephen Walsh
>
>
>
>
> This email (including any attachments) is proprietary to Aspect Software,
> Inc. and may contain information that is confidential. If you have received
> this message in error, please do not read, copy or forward this message.
> Please notify the sender immediately, delete it from your system and
> destroy any copies. You may not further disclose or distribute this email
> or its attachments.
>


Want to run repair on a node without it taking traffic

2015-12-02 Thread K F
Hi Folks,
How can I run repair on a node without it taking any coordinator/client 
traffic? I want to complete the repair on the node without it serving any 
traffic, except the streams from other nodes. Is that possible?
Thanks.



Re: Restoring a snapshot into a new cluster - thoughts on replica placement

2015-12-02 Thread Robert Coli
On Wed, Dec 2, 2015 at 5:06 AM, Peer, Oded  wrote:

> It seems it is not enough to restore the token ranges on an equal-size
> cluster since you also need to restore the rack information.
>

Yep, if you're using a rack-aware snitch, that is correct. Because in that
case, rack determines replica placement.

=Rob


Re: Want to run repair on a node without it taking traffic

2015-12-02 Thread Robert Coli
On Wed, Dec 2, 2015 at 8:54 AM, K F  wrote:

> How can I run repair on a node without it taking any coordinator/client
> traffic. So, I can complete the repair on the node without it taking any
> traffic, except the streams from other nodes. Is that possible?
>

In general you should probably just bootstrap, but... for example when
bringing back a node that has been down for longer than gc_grace_seconds,
you can use:

https://issues.apache.org/jira/browse/CASSANDRA-6961 (Resolved; Fixed;
2.0.7, 2.1 beta2): "nodes should go into hibernate when join_ring is false"
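
For illustration, the rough sequence with that approach might look like the
following (assuming the node already has its data and you only want it out of
the request path while it repairs; exact file locations depend on your install):

# 1. start the node without joining the ring, e.g. via cassandra-env.sh
JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"

# 2. run the repair while the node is not taking coordinator/client traffic
nodetool repair

# 3. once the repair finishes, join the ring so it starts serving traffic again
nodetool join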

=Rob


Unable to add nodes / awaiting patch.

2015-12-02 Thread Jeff Ferland
Looks like we’re hit by https://issues.apache.org/jira/browse/CASSANDRA-10012. 
Not knowing a better place to ask, when will the next version of 2.1.x Cassandra 
be cut and the following DSE fix cut from there? Could DSE cut an in-between 
version for this fix? Can we patch it into our current version of DSE?

-Jeff

Re: Transitioning to incremental repair

2015-12-02 Thread Marcus Eriksson
Bryan, this should be improved with
https://issues.apache.org/jira/browse/CASSANDRA-10768 - could you try it
out?

On Tue, Dec 1, 2015 at 10:58 PM, Bryan Cheng  wrote:

> Sorry if I misunderstood, but are you asking about the LCS case?
>
> Based on our experience, I would absolutely recommend you continue with
> the migration procedure. Even if the compaction strategy is the same, the
> process of anticompaction is incredibly painful. We observed our test
> cluster running 2.1.11 experiencing a dramatic increase in latency and not
> responding to nodetool queries over JMX while anticompacting the largest
> SSTables. This procedure also took several times longer than a standard
> full repair.
>
> If you absolutely cannot perform the migration procedure, I believe 2.2.x
> contains the changes to automatically set the RepairedAt flags after a full
> repair, so you may be able to do a full repair on 2.2.x and then transition
> directly to incremental without migrating (can someone confirm?)
>


Re: Unable to add nodes / awaiting patch.

2015-12-02 Thread Michael Shuler
On 12/02/2015 01:54 PM, Jeff Ferland wrote:
> Looks like we’re hit
> by https://issues.apache.org/jira/browse/CASSANDRA-10012. Not knowing a
> better place to ask, when will the next version of 2.1.x Cassandra be
> cut and the following DSE fix cut from there? Could DSE cut an
> in-between version for this fix? Can we patch it into our current
> version of DSE?

Apache Cassandra 2.1.12 just went up for release vote this morning.

http://mail-archives.apache.org/mod_mbox/cassandra-dev/201512.mbox/%3CCALamADLcKJ0_AjToEfwkjZGMLqHNcGdDPMEewBkJ4K0qBEwTjw%40mail.gmail.com%3E

Not sure about the DSE timeline, but you might contact your support
folks with that question, if someone doesn't have that insight here.

I'm also vaguely recalling that you can rip and replace the cassandra
jar in DSE with a newer one(?). Don't quote me on that.. :)

-- 
Michael


Re: Issues on upgrading from 2.2.3 to 3.0

2015-12-02 Thread Carlos A
Bryan, thanks for replying. I had that figured out already a few days ago.
The issue was that the snitch was also changed in the configuration,
hence the problem.

Now, if you change from RackInferringSnitch to PropertyFileSnitch it will
not run, as the data has to be migrated. Is that correct? Or do you have to
change the replication class on the keyspace?

Putting it back to RackInferringSnitch worked just fine.

On Wed, Dec 2, 2015 at 6:30 PM, Bryan Cheng  wrote:

> Has your configuration changed?
>
> This is a new check- https://issues.apache.org/jira/browse/CASSANDRA-10242.
> It seems likely either your snitch changed, your properties changed, or
> something caused Cassandra to think one of the two happened...
>
> What's your node layout?
>
> On Fri, Nov 27, 2015 at 6:45 PM, Carlos A  wrote:
>
>> Hello all,
>>
>> I had 2 of my systems upgraded to 3.0 from the same previous version.
>>
>> The first cluster seem to be fine.
>>
>> But the second, each node starts and then fails.
>>
>> On the log I have the following on all of them:
>>
>> INFO  [main] 2015-11-27 19:40:21,168 ColumnFamilyStore.java:381 -
>> Initializing system_schema.keyspaces
>> INFO  [main] 2015-11-27 19:40:21,177 ColumnFamilyStore.java:381 -
>> Initializing system_schema.tables
>> INFO  [main] 2015-11-27 19:40:21,185 ColumnFamilyStore.java:381 -
>> Initializing system_schema.columns
>> INFO  [main] 2015-11-27 19:40:21,192 ColumnFamilyStore.java:381 -
>> Initializing system_schema.triggers
>> INFO  [main] 2015-11-27 19:40:21,198 ColumnFamilyStore.java:381 -
>> Initializing system_schema.dropped_columns
>> INFO  [main] 2015-11-27 19:40:21,203 ColumnFamilyStore.java:381 -
>> Initializing system_schema.views
>> INFO  [main] 2015-11-27 19:40:21,208 ColumnFamilyStore.java:381 -
>> Initializing system_schema.types
>> INFO  [main] 2015-11-27 19:40:21,215 ColumnFamilyStore.java:381 -
>> Initializing system_schema.functions
>> INFO  [main] 2015-11-27 19:40:21,220 ColumnFamilyStore.java:381 -
>> Initializing system_schema.aggregates
>> INFO  [main] 2015-11-27 19:40:21,225 ColumnFamilyStore.java:381 -
>> Initializing system_schema.indexes
>> ERROR [main] 2015-11-27 19:40:21,831 CassandraDaemon.java:250 - Cannot
>> start node if snitch's rack differs from previous rack. Please fix the
>> snitch or decommission and rebootstrap this node.
>>
>> It asks to "Please fix the snitch or decommission and rebootstrap this
>> node"
>>
>> If none of the nodes can go up, how can I decommission all of them?
>>
>> Doesn't make sense.
>>
>> Any suggestions?
>>
>> Thanks,
>>
>> C.
>>
>
>


Re: Transitioning to incremental repair

2015-12-02 Thread Bryan Cheng
Ah Marcus, that looks very promising - unfortunately we have already
switched back to full repairs and our test cluster has been re-purposed for
other tasks atm. I will be sure to apply the patch/try a fixed version of
Cassandra if we attempt to migrate to incremental repair again.


Re: Cassandra compaction stuck? Should I disable?

2015-12-02 Thread PenguinWhispererThe .
So it seems I found the problem.

The node opening a stream is waiting for the other node to respond, but that
node never responds due to a broken pipe, which makes Cassandra wait forever.

It's basically this issue:
https://issues.apache.org/jira/browse/CASSANDRA-8472
And this is the workaround/fix:
https://issues.apache.org/jira/browse/CASSANDRA-8611

So:
- update cassandra to >=2.0.11
- add option streaming_socket_timeout_in_ms = 1
- do rolling restart of cassandra
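
For illustration, the streaming timeout from the second step above goes into
cassandra.yaml roughly like this (the value shown is only an example, not a
recommendation - pick something comfortably longer than your slowest stream):

# cassandra.yaml (illustrative value only)
# time out stream sockets instead of letting a broken pipe hang forever
streaming_socket_timeout_in_ms: 3600000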

What's weird is that the IOException: Broken pipe is never shown in my logs
(not on any node), even though my logging is set to INFO in the log4j config.
I have this config in log4j-server.properties:
# output messages into a rolling log file as well as stdout
log4j.rootLogger=INFO,stdout,R

# stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p %d{HH:mm:ss,SSS} %m%n

# rolling log file
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.maxFileSize=20MB
log4j.appender.R.maxBackupIndex=50
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%5p [%t] %d{ISO8601} %F (line %L)
%m%n
# Edit the next line to point to your logs directory
log4j.appender.R.File=/var/log/cassandra/system.log

# Application logging options
#log4j.logger.org.apache.cassandra=DEBUG
#log4j.logger.org.apache.cassandra.db=DEBUG
#log4j.logger.org.apache.cassandra.service.StorageProxy=DEBUG

# Adding this to avoid thrift logging disconnect errors.
log4j.logger.org.apache.thrift.server.TNonblockingServer=ERROR

Too bad nobody else could point me to those. Hope this saves someone else from
wasting a lot of time.

2015-11-11 15:42 GMT+01:00 Sebastian Estevez :

> Use 'nodetool compactionhistory'
>
> all the best,
>
> Sebastián
> On Nov 11, 2015 3:23 AM, "PenguinWhispererThe ." <
> th3penguinwhispe...@gmail.com> wrote:
>
>> Does compactionstats show only stats for completed compactions (100%)?
>> It might be that the compaction is running constantly, over and over again.
>> In that case I need to know what I might be able to do to stop this
>> constant compaction so I can start a nodetool repair.
>>
>> Note that there is a lot of traffic on this columnfamily so I'm not sure
>> if temporary disabling compaction is an option. The repair will probably
>> take long as well.
>>
>> Sebastian and Rob: do you might have any more ideas about the things I
>> put in this thread? Any help is appreciated!
>>
>> 2015-11-10 20:03 GMT+01:00 PenguinWhispererThe . <
>> th3penguinwhispe...@gmail.com>:
>>
>>> Hi Sebastian,
>>>
>>> Thanks for your response.
>>>
>>> No swap is used. No offense, I just don't see a reason why having swap
>>> would be the issue here. I put swappiness on 1. I also have jna installed.
>>> That should prevent Java being swapped out as well AFAIK.
>>>
>>>
>>> 2015-11-10 19:50 GMT+01:00 Sebastian Estevez <
>>> sebastian.este...@datastax.com>:
>>>
 Turn off Swap.


 http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installRecommendSettings.html?scroll=reference_ds_sxl_gf3_2k__disable-swap


 All the best,


 Sebastián Estévez

 Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com

 On Tue, Nov 10, 2015 at 1:48 PM, PenguinWhispererThe . <
 th3penguinwhispe...@gmail.com> wrote:

> I also have the following memory usage:
> [root@US-BILLINGDSX4 cassandra]# free -m
>              total       used       free     shared    buffers     cached
> Mem:         12024       9455       2569          0        110       2163
> -/+ buffers/cache:       7180       4844
> Swap:         2047          0       2047
>
> Still a lot free and a lot of free buffers/cache.
>
> 2015-11-10 19:45 GMT+01:00 PenguinWhispererThe . <
> th3penguinwhispe...@gmail.com>:
>
>> Still stuck with this. However I enabled GC logging. This shows the
>> following:

cassandra reads are unbalanced

2015-12-02 Thread Walsh, Stephen
Hey all,

Thanks for taking the time to help.

So we have 6 cassandra nodes in 2 Data Centers.
Both Data Centers have a replication of 3 - so all nodes have all the data.

Over the last 2 days we've noticed that data reads / writes have shifted from 
balanced to unbalanced
(Nodetool status still shows 100% ownership on every node, with similar sizes)


For Example

We monitor the number of reads / writes of every table via the cassandra JMX 
metrics. (cassandra.db.read_count)
Over the last hour of this run

Reads
Node 1 (DC1)  =  1.79k(seeder)
Node 2 (DC1)  =  1.92k
Node 3 (DC1)  =  1.97k

Node 1 (DC2)  =  2.90k   (seeder)
Node 2 (DC2)  =  1.76k
Node 3 (DC2)  =  1.19k

As you see on DC1, everything is pretty well balanced, but on DC2 the reads 
favour Node1 over Node 3.
I ran a nodetool repair yesterday - it ran for 6 hours and, when completed, didn't 
change the read balance.

Write levels show a similar imbalance on DC2, but not as bad as reads.

Does anyone have any suggestions on how to rebalance? I'm thinking of maybe running a 
nodetool cleanup in case some of the keys have shifted?

Regards
Stephen Walsh


This email (including any attachments) is proprietary to Aspect Software, Inc. 
and may contain information that is confidential. If you have received this 
message in error, please do not read, copy or forward this message. Please 
notify the sender immediately, delete it from your system and destroy any 
copies. You may not further disclose or distribute this email or its 
attachments.


Restoring a snapshot into a new cluster - thoughts on replica placement

2015-12-02 Thread Peer, Oded
I read the documentation for restoring a snapshot into a new cluster.
It got me thinking about replica placement in that context. 
"NetworkTopologyStrategy places replicas in the same data center by walking the 
ring clockwise until reaching the first node in another rack."
It seems it is not enough to restore the token ranges on an equal-size cluster 
since you also need to restore the rack information.

Assume I have two 6-node clusters with three racks for each cluster, represented 
by "lower", "UPPER" and "numeric".
In the first cluster the ring is represented by: a -> B -> 3 -> d -> E -> 6 ->
In the second cluster the ring uses the same token ranges and is represented 
by: a -> b -> C -> D -> 5 -> 6 ->

In this case data restored from the first cluster matches the token 
distribution on the second cluster but does not match the expected replica 
placement.
Token t will reside on nodes a,B,3 on the first cluster but should reside on 
nodes a,C,5 on the second cluster.
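
To sanity-check that reasoning, here is a toy sketch of the rack-aware walk (my
own simplification - it ignores vnodes and the pass that backfills skipped nodes
when there are fewer racks than replicas) using the node/rack names above:

import java.util.*;

public class RackWalkSketch {
    // ring position -> {node name, rack name}, following the example above
    static List<String[]> ring1 = Arrays.asList(
            new String[]{"a", "lower"}, new String[]{"B", "UPPER"}, new String[]{"3", "numeric"},
            new String[]{"d", "lower"}, new String[]{"E", "UPPER"}, new String[]{"6", "numeric"});
    static List<String[]> ring2 = Arrays.asList(
            new String[]{"a", "lower"}, new String[]{"b", "lower"}, new String[]{"C", "UPPER"},
            new String[]{"D", "UPPER"}, new String[]{"5", "numeric"}, new String[]{"6", "numeric"});

    // walk the ring clockwise from the primary node, taking the first node seen in each new rack
    static List<String> replicas(List<String[]> ring, int start, int rf) {
        List<String> result = new ArrayList<>();
        Set<String> racksSeen = new HashSet<>();
        for (int i = 0; i < ring.size() && result.size() < rf; i++) {
            String[] node = ring.get((start + i) % ring.size());
            if (racksSeen.add(node[1])) {
                result.add(node[0]);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // token t falls in node a's range (position 0) in both clusters
        System.out.println("cluster 1: " + replicas(ring1, 0, 3));   // prints [a, B, 3]
        System.out.println("cluster 2: " + replicas(ring2, 0, 3));   // prints [a, C, 5]
    }
}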

Does this make sense? Did I miss something?



Re: Issues on upgrading from 2.2.3 to 3.0

2015-12-02 Thread Bryan Cheng
Has your configuration changed?

This is a new check- https://issues.apache.org/jira/browse/CASSANDRA-10242.
It seems likely either your snitch changed, your properties changed, or
something caused Cassandra to think one of the two happened...

What's your node layout?

On Fri, Nov 27, 2015 at 6:45 PM, Carlos A  wrote:

> Hello all,
>
> I had 2 of my systems upgraded to 3.0 from the same previous version.
>
> The first cluster seem to be fine.
>
> But the second, each node starts and then fails.
>
> On the log I have the following on all of them:
>
> INFO  [main] 2015-11-27 19:40:21,168 ColumnFamilyStore.java:381 -
> Initializing system_schema.keyspaces
> INFO  [main] 2015-11-27 19:40:21,177 ColumnFamilyStore.java:381 -
> Initializing system_schema.tables
> INFO  [main] 2015-11-27 19:40:21,185 ColumnFamilyStore.java:381 -
> Initializing system_schema.columns
> INFO  [main] 2015-11-27 19:40:21,192 ColumnFamilyStore.java:381 -
> Initializing system_schema.triggers
> INFO  [main] 2015-11-27 19:40:21,198 ColumnFamilyStore.java:381 -
> Initializing system_schema.dropped_columns
> INFO  [main] 2015-11-27 19:40:21,203 ColumnFamilyStore.java:381 -
> Initializing system_schema.views
> INFO  [main] 2015-11-27 19:40:21,208 ColumnFamilyStore.java:381 -
> Initializing system_schema.types
> INFO  [main] 2015-11-27 19:40:21,215 ColumnFamilyStore.java:381 -
> Initializing system_schema.functions
> INFO  [main] 2015-11-27 19:40:21,220 ColumnFamilyStore.java:381 -
> Initializing system_schema.aggregates
> INFO  [main] 2015-11-27 19:40:21,225 ColumnFamilyStore.java:381 -
> Initializing system_schema.indexes
> ERROR [main] 2015-11-27 19:40:21,831 CassandraDaemon.java:250 - Cannot
> start node if snitch's rack differs from previous rack. Please fix the
> snitch or decommission and rebootstrap this node.
>
> It asks to "Please fix the snitch or decommission and rebootstrap this
> node"
>
> If none of the nodes can go up, how can I decommission all of them?
>
> Doesn't make sense.
>
> Any suggestions?
>
> Thanks,
>
> C.
>