Hello all,

I upgraded two of my clusters to 3.0 from the same previous version.

The first cluster seems to be fine.

But on the second, each node starts and then fails.

In the log I see the following on all of them:

INFO  [main] 2015-11-27 19:40:21,168 ColumnFamilyStore.java:381 -
Initializing system_schema.keyspaces
INFO  [main] 2015-11-27 19:40:21,177 ColumnFamilyStore.java:381 -
Initializing system_schema.tables
INFO  [main] 2015-11-27 19:40:21,185 ColumnFamilyStore.java:381 -
Initializing system_schema.columns
INFO  [main] 2015-11-27 19:40:21,192 ColumnFamilyStore.java:381 -
Initializing system_schema.triggers
INFO  [main] 2015-11-27 19:40:21,198 ColumnFamilyStore.java:381 -
Initializing system_schema.dropped_columns
INFO  [main] 2015-11-27 19:40:21,203 ColumnFamilyStore.java:381 -
Initializing system_schema.views
INFO  [main] 2015-11-27 19:40:21,208 ColumnFamilyStore.java:381 -
Initializing system_schema.types
INFO  [main] 2015-11-27 19:40:21,215 ColumnFamilyStore.java:381 -
Initializing system_schema.functions
INFO  [main] 2015-11-27 19:40:21,220 ColumnFamilyStore.java:381 -
Initializing system_schema.aggregates
INFO  [main] 2015-11-27 19:40:21,225 ColumnFamilyStore.java:381 -
Initializing system_schema.indexes
ERROR [main] 2015-11-27 19:40:21,831 CassandraDaemon.java:250 - Cannot
start node if snitch's rack differs from previous rack. Please fix the
snitch or decommission and rebootstrap this node.

It asks to "fix the snitch or decommission and rebootstrap this node".

But if none of the nodes can come up, how can I decommission any of them?

That doesn't make sense.
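In case it helps narrow this down: if the snitch is GossipingPropertyFileSnitch, the rack each node reports comes from cassandra-rackdc.properties, so one thing to check on each node is that the file still matches the pre-upgrade rack. A self-contained sketch (the path and dc/rack values below are just placeholders, not our real ones; the actual file usually lives under conf/ or /etc/cassandra):

```shell
# Placeholder copy of the snitch config so the commands stand alone;
# on a real node you would inspect the existing cassandra-rackdc.properties.
cat > /tmp/cassandra-rackdc.properties <<'EOF'
dc=DC1
rack=RAC1
EOF

# The rack printed here must match what the node registered before the
# upgrade, otherwise startup fails with the "snitch's rack differs" error.
grep -E '^(dc|rack)=' /tmp/cassandra-rackdc.properties
```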

Any suggestions?

Thanks,

C.