Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Martin Koch
Thanks, this is exactly it. We'd like to do a rolling upgrade - this is a
production cluster - so I guess we'll upgrade 1.0.6 -> 1.0.11 -> 1.1.4,
then.
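
For reference, here's a rough sketch of the per-node sequence for each hop
(service names and paths are assumptions for a package-based install, not
taken from the upgrade docs; adjust as needed):

  # On each node in turn, for each hop (1.0.6 -> 1.0.11 -> 1.1.4):
  nodetool -h localhost drain            # flush memtables; stop accepting writes
  sudo service cassandra stop            # shut down the old version
  # ...install the new version, merge cassandra.yaml changes by hand...
  sudo service cassandra start
  nodetool -h localhost upgradesstables  # rewrite sstables in the new format
  # confirm 'nodetool ring' looks healthy before moving to the next node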

/Martin

On Thu, Sep 6, 2012 at 2:35 AM, Omid Aladini  wrote:

> Do you see exceptions like "java.lang.UnsupportedOperationException:
> Not a time-based UUID" in log files of nodes running 1.0.6 and 1.0.9?
> Then it's probably due to [1], explained here [2]. In this case you
> either have to upgrade all nodes to 1.1.4 or, if you prefer to keep a
> mixed-version cluster, temporarily upgrade the 1.0.6 and 1.0.9 nodes to
> 1.0.11; otherwise they won't be able to rejoin the cluster.
>
> Cheers,
> Omid
>
> [1] https://issues.apache.org/jira/browse/CASSANDRA-1391
> [2] https://issues.apache.org/jira/browse/CASSANDRA-4195
>
> On Wed, Sep 5, 2012 at 4:08 PM, Martin Koch  wrote:
> >
> > Hi list
> >
> > We have a 5-node Cassandra cluster with a single 1.0.9 installation and
> four 1.0.6 installations.
> >
> > We have tried installing 1.1.4 on one of the 1.0.6 nodes (following the
> instructions on http://www.datastax.com/docs/1.1/install/upgrading).
> >
> > After bringing up 1.1.4 there are no errors in the log, but the cluster
> now suffers from schema disagreement:
> >
> > [default@unknown] describe cluster;
> > Cluster Information:
> >Snitch: org.apache.cassandra.locator.SimpleSnitch
> >Partitioner: org.apache.cassandra.dht.RandomPartitioner
> >Schema versions:
> > 59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] <- The new 1.1.4 node
> >
> > 943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45,
> 10.10.145.90, 10.38.127.80] <- nodes in the old cluster
> >
> > The recipe for recovering from schema disagreement (
> http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't cover
> the new directory layout. The system/Schema directory is empty save for a
> snapshots subdirectory. system/schema_columnfamilies and
> system/schema_keyspaces contain some files. As described in DataStax's
> instructions, we tried running nodetool upgradesstables. When this had
> finished, describe schema in the cli showed a schema definition that seemed
> correct but was indeed different from the schema on the other nodes in the cluster.
> >
> > Any clues on how we should proceed?
> >
> > Thanks,
> > /Martin Koch
>


Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Omid Aladini
Do you see exceptions like "java.lang.UnsupportedOperationException:
Not a time-based UUID" in log files of nodes running 1.0.6 and 1.0.9?
Then it's probably due to [1], explained here [2]. In this case you
either have to upgrade all nodes to 1.1.4 or, if you prefer to keep a
mixed-version cluster, temporarily upgrade the 1.0.6 and 1.0.9 nodes to
1.0.11; otherwise they won't be able to rejoin the cluster.
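
(A quick way to check, assuming logs are in the default location:

  grep -c "Not a time-based UUID" /var/log/cassandra/system.log

A non-zero count on a 1.0.6 or 1.0.9 node suggests it's hitting this.)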

Cheers,
Omid

[1] https://issues.apache.org/jira/browse/CASSANDRA-1391
[2] https://issues.apache.org/jira/browse/CASSANDRA-4195

On Wed, Sep 5, 2012 at 4:08 PM, Martin Koch  wrote:
>
> Hi list
>
> We have a 5-node Cassandra cluster with a single 1.0.9 installation and four 
> 1.0.6 installations.
>
> We have tried installing 1.1.4 on one of the 1.0.6 nodes (following the 
> instructions on http://www.datastax.com/docs/1.1/install/upgrading).
>
> After bringing up 1.1.4 there are no errors in the log, but the cluster now 
> suffers from schema disagreement:
>
> [default@unknown] describe cluster;
> Cluster Information:
>Snitch: org.apache.cassandra.locator.SimpleSnitch
>Partitioner: org.apache.cassandra.dht.RandomPartitioner
>Schema versions:
> 59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] <- The new 1.1.4 node
>
> 943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45, 
> 10.10.145.90, 10.38.127.80] <- nodes in the old cluster
>
> The recipe for recovering from schema disagreement 
> (http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't cover the 
> new directory layout. The system/Schema directory is empty save for a 
> snapshots subdirectory. system/schema_columnfamilies and 
> system/schema_keyspaces contain some files. As described in DataStax's
> instructions, we tried running nodetool upgradesstables. When this had
> finished, describe schema in the cli showed a schema definition that seemed
> correct but was indeed different from the schema on the other nodes in the cluster.
>
> Any clues on how we should proceed?
>
> Thanks,
> /Martin Koch


Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Edward Sargisson

I would try nodetool resetlocalschema.
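
That tells the node to drop its local schema and pull a fresh copy from the
rest of the cluster. For the disagreeing node above, something like:

  nodetool -h 10.10.29.67 resetlocalschema

(resetlocalschema was added in 1.1, as far as I know, so it should be
available on the upgraded node.)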


On 12-09-05 07:08 AM, Martin Koch wrote:

Hi list

We have a 5-node Cassandra cluster with a single 1.0.9 installation 
and four 1.0.6 installations.


We have tried installing 1.1.4 on one of the 1.0.6 nodes (following 
the instructions on http://www.datastax.com/docs/1.1/install/upgrading).


After bringing up 1.1.4 there are no errors in the log, but the 
cluster now suffers from schema disagreement:


[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] <- The new 1.1.4 node

943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45, 
10.10.145.90, 10.38.127.80] <- nodes in the old cluster


The recipe for recovering from schema disagreement 
(http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't 
cover the new directory layout. The system/Schema directory is empty 
save for a snapshots subdirectory. system/schema_columnfamilies and 
system/schema_keyspaces contain some files. As described in DataStax's
instructions, we tried running nodetool upgradesstables. When this had
finished, describe schema in the cli showed a schema definition that
seemed correct but was indeed different from the schema on the other
nodes in the cluster.


Any clues on how we should proceed?

Thanks,
/Martin Koch






Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Martin Koch
Hi list

We have a 5-node Cassandra cluster with a single 1.0.9 installation and
four 1.0.6 installations.

We have tried installing 1.1.4 on one of the 1.0.6 nodes (following the
instructions on http://www.datastax.com/docs/1.1/install/upgrading).

After bringing up 1.1.4 there are no errors in the log, but the cluster now
suffers from schema disagreement:

[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] <- The new 1.1.4 node

943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45,
10.10.145.90, 10.38.127.80] <- nodes in the old cluster
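
(Side note: the gossiped schema version can also be checked per node with
something like

  nodetool -h 10.10.29.67 gossipinfo | grep SCHEMA

which should report the same UUID everywhere once the nodes agree.)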

The recipe for recovering from schema disagreement (
http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't cover the
new directory layout. The system/Schema directory is empty save for a
snapshots subdirectory. system/schema_columnfamilies and
system/schema_keyspaces contain some files. As described in DataStax's
instructions, we tried running nodetool upgradesstables. When this had
finished, describe schema in the cli showed a schema definition that seemed
correct but was indeed different from the schema on the other nodes in the cluster.

Any clues on how we should proceed?

Thanks,
/Martin Koch