Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi,

On Tue, May 1, 2018 at 10:27 PM Gábor Auth  wrote:

> One or two years ago I tried the CDC feature but then switched it off...
> maybe it is a side effect of the switched-off CDC? How can I fix it? :)
>

Okay, I've worked it out. I updated the schema of the affected keyspaces on the
new nodes with 'cdc=false' and everything is okay now.
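For each affected table it was roughly this via cqlsh against one of the new
nodes (the keyspace and table names below are only placeholders):

  ALTER TABLE my_keyspace.my_table WITH cdc = false;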

I think it is a strange bug around the CDC...

Bye,
Gábor Auth


Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi,

On Tue, May 1, 2018 at 7:40 PM Gábor Auth  wrote:

> What can I do? Any suggestions? :(
>

Okay, I've diffed the good and the bad system_schema tables. The only
difference is the `cdc` field of three keyspaces (in `tables` and `views`):
- the value of the `cdc` field on the good node is `False`
- the value of the `cdc` field on the bad node is `null`

The value of the `cdc` field for the other keyspaces is `null`.
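For reference, I compared the values with something like this in cqlsh on a
good and a bad node (the keyspace name is a placeholder):

  SELECT keyspace_name, table_name, cdc FROM system_schema.tables
    WHERE keyspace_name = 'my_keyspace';

and similarly against system_schema.views (with view_name instead of
table_name).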

One or two years ago I tried the CDC feature but then switched it off...
maybe it is a side effect of the switched-off CDC? How can I fix it? :)

Bye,
Gábor Auth


Re: Schema disagreement

2018-05-01 Thread Gábor Auth
Hi,

On Mon, Apr 30, 2018 at 11:11 PM Gábor Auth  wrote:

> On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail 
> wrote:
>
>> What steps have you performed to add the new DC? Have you tried to follow
>> certain procedures like this?
>>
>> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>>
>
> Yes, exactly. :/
>

Okay, I removed all the new nodes (with `removenode`) and cleared each new node
(removed its data and logs).
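Roughly this (the host ID and paths are placeholders; paths assume the default
package layout):

  # on an old node, for each new node
  nodetool removenode <host-id>

  # on each new node
  sudo service cassandra stop
  sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* \
              /var/lib/cassandra/saved_caches/*
  sudo rm -f /var/log/cassandra/*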

I did all the steps described in the link (again).

Same result:

Cluster Information:
   Name: cluster
   Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
   Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
   Schema versions:
   5de14758-887d-38c1-9105-fc60649b0edf: [new, new, ...]

   f4ed784a-174a-38dd-a7e5-55ff6f3002b2: [old, old, ...]
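(The cluster information above comes from something like

  nodetool -h <node-address> describecluster

run against one old and one new node; the address is a placeholder.)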

The old nodes try to gossip their own schema:
DEBUG [InternalResponseStage:1] 2018-05-01 17:36:36,266
MigrationManager.java:572 - Gossiping my schema version
f4ed784a-174a-38dd-a7e5-55ff6f3002b2
DEBUG [InternalResponseStage:1] 2018-05-01 17:36:36,863
MigrationManager.java:572 - Gossiping my schema version
f4ed784a-174a-38dd-a7e5-55ff6f3002b2

The new nodes try to gossip their own schema:
DEBUG [InternalResponseStage:4] 2018-05-01 17:36:26,329
MigrationManager.java:572 - Gossiping my schema version
5de14758-887d-38c1-9105-fc60649b0edf
DEBUG [InternalResponseStage:4] 2018-05-01 17:36:27,595
MigrationManager.java:572 - Gossiping my schema version
5de14758-887d-38c1-9105-fc60649b0edf

What can I do? Any suggestions? :(

Bye,
Gábor Auth


Re: Schema disagreement

2018-04-30 Thread Gábor Auth
Hi,

On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail 
wrote:

> What steps have you performed to add the new DC? Have you tried to follow
> certain procedures like this?
>
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>

Yes, exactly. :/

Bye,
Gábor Auth


Re: Schema disagreement

2018-04-30 Thread Ali Hubail
Hi,

What steps have you performed to add the new DC? Have you tried to follow 
certain procedures like this?
https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html

A node can appear offline to other nodes for various reasons. It would help 
greatly to know what steps you have taken, so we can work out why you're 
facing this.

Ali Hubail






Re: Schema disagreement

2018-04-30 Thread Gábor Auth
Hi,

On Mon, Apr 30, 2018 at 11:39 AM Gábor Auth  wrote:

> I've just tried to add a new DC and new node to my cluster (3 DCs and 10
> nodes) and the new node has a different schema version:
>

Is this normal? The node is marked down, but it is completing a repair successfully?

WARN  [MigrationStage:1] 2018-04-30 20:36:56,579 MigrationTask.java:67 -
Can't send schema pull request: node /x.x.216.121 is down.
INFO  [AntiEntropyStage:1] 2018-04-30 20:36:56,611 Validator.java:281 -
[repair #323bf873-4cb6-11e8-bdd5-5feb84046dc9] Sending completed merkle
tree to /x.x.216.121 for keyspace.table

The `nodetool status` output is looking good:
UN  x.x.216.121  959.29 MiB  32   ?
  322e4e9b-4d9e-43e3-94a3-bbe012058516  RACK01

Bye,
Gábor Auth


Re: Schema Disagreement vs Nodetool resetlocalschema

2016-09-11 Thread Jens Rantil
Hi Michael,

Did you ever get an answer on this? I'm curious to hear for future
reference.

Thanks,
Jens

On Monday, June 20, 2016, Michael Fong 
wrote:

> Hi,
>
>
>
> We have recently encountered several schema disagreement issues while
> upgrading Cassandra. In one of the cases, the 2-node cluster idled for over
> 30 minutes and the schemas remained unsynced. Due to other logic flows,
> Cassandra cannot be restarted, and hence we need to come up with an
> alternative on the fly. We are thinking of doing a nodetool resetlocalschema
> to force schema synchronization. How safe is this method? Do we need to
> disable the thrift/gossip protocols before performing this operation, and
> enable them again after the resync completes?
>
>
>
> Thanks in advance!
>
>
>
> Sincerely,
>
>
>
> Michael Fong
>


-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se



Re: Schema disagreement errors

2014-05-13 Thread Duncan Sands

Hi Gaurav, a schema versioning bug was fixed in 2.0.7.

Best wishes, Duncan.

On 12/05/14 21:31, Gaurav Sehgal wrote:

We have recently started seeing a lot of Schema Disagreement errors. We are
using Cassandra 2.0.6 with Oracle Java 1.7. I went through the Cassandra FAQ and
followed the below steps:


  * nodetool disablethrift
  * nodetool disablegossip
  * nodetool drain
  * 'kill pid'


As per the documentation, the commit logs should have been flushed, but that did
not happen in our case. The commit logs were still there. So, I removed them
manually to make sure there were no commit logs when Cassandra started up (which
was fine in our case, as this data can always be replayed). I also deleted the
schema* directory from the /data/system folder.

However, when we started Cassandra back up, the issue started happening again.


Any help would be appreciated

Cheers!
Gaurav






Re: Schema disagreement errors

2014-05-13 Thread Vincent Mallet
Hey Gaurav,

You should consider moving to 2.0.7 which fixes a bunch of these schema
disagreement problems. You could also play around with nodetool
resetlocalschema on the nodes that are behind, but be careful with that
one. I'd go with 2.0.7 first for sure.

Thanks,

   Vince.


On Mon, May 12, 2014 at 12:31 PM, Gaurav Sehgal gsehg...@gmail.com wrote:

 We have recently started seeing a lot of Schema Disagreement errors. We
 are using Cassandra 2.0.6 with Oracle Java 1.7. I went through the
 Cassandra FAQ and followed the below steps:



- nodetool disablethrift
- nodetool disablegossip
- nodetool drain
- 'kill pid'


 As per the documentation, the commit logs should have been flushed, but that
 did not happen in our case. The commit logs were still there. So, I removed
 them manually to make sure there were no commit logs when Cassandra started
 up (which was fine in our case, as this data can always be replayed). I
 also deleted the schema* directory from the /data/system folder.

 However, when we started Cassandra back up, the issue started happening again.


 Any help would be appreciated

 Cheers!
 Gaurav





Re: Schema disagreement errors

2014-05-13 Thread Robert Coli
On Tue, May 13, 2014 at 5:11 PM, Donald Smith 
donald.sm...@audiencescience.com wrote:

  I too have noticed that after doing “nodetool flush” (or “nodetool
 drain”), the commit logs are still there. I think they’re NEW (empty)
 commit logs, but I may be wrong. Anyone know?


Assuming they are being correctly marked clean after drain (which
historically has been a nontrivial assumption), they are new, empty
commit log segments which have been recycled.

=Rob


RE: Schema disagreement errors

2014-05-13 Thread Donald Smith
I too have noticed that after doing “nodetool flush” (or “nodetool drain”), the 
commit logs are still there. I think they’re NEW (empty) commit logs, but I may 
be wrong. Anyone know?

Don

From: Gaurav Sehgal [mailto:gsehg...@gmail.com]
Sent: Monday, May 12, 2014 12:31 PM
To: user@cassandra.apache.org
Subject: Schema disagreement errors

We have recently started seeing a lot of Schema Disagreement errors. We are 
using Cassandra 2.0.6 with Oracle Java 1.7. I went through the Cassandra FAQ 
and followed the below steps:



  *   nodetool disablethrift
  *   nodetool disablegossip
  *   nodetool drain
  *   'kill pid'.

As per the documentation, the commit logs should have been flushed, but that did 
not happen in our case. The commit logs were still there. So, I removed them 
manually to make sure there were no commit logs when Cassandra started up (which 
was fine in our case, as this data can always be replayed). I also deleted the 
schema* directory from the /data/system folder.

However, when we started Cassandra back up, the issue started happening again.


Any help would be appreciated

Cheers!
Gaurav




Re: Schema disagreement errors

2014-05-12 Thread Laing, Michael
Upgrade to 2.0.7 fixed this for me.

You can also try 'nodetool resetlocalschema' on disagreeing nodes. This
worked temporarily for me in 2.0.6.
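Roughly, on each node that disagrees (and assuming the rest of the cluster
holds the schema you want to keep):

  nodetool resetlocalschema
  nodetool describecluster   # check that only one schema version remains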

ml


On Mon, May 12, 2014 at 3:31 PM, Gaurav Sehgal gsehg...@gmail.com wrote:

 We have recently started seeing a lot of Schema Disagreement errors. We
 are using Cassandra 2.0.6 with Oracle Java 1.7. I went through the
 Cassandra FAQ and followed the below steps:



- nodetool disablethrift
- nodetool disablegossip
- nodetool drain
- 'kill pid'


 As per the documentation, the commit logs should have been flushed, but that
 did not happen in our case. The commit logs were still there. So, I removed
 them manually to make sure there were no commit logs when Cassandra started
 up (which was fine in our case, as this data can always be replayed). I
 also deleted the schema* directory from the /data/system folder.

 However, when we started Cassandra back up, the issue started happening again.


 Any help would be appreciated

 Cheers!
 Gaurav





Re: Schema disagreement under normal conditions, ALTER TABLE hangs

2013-11-28 Thread Josh Dzielak
Thanks Rob. Let me add one thing in case someone else finds this thread - 

Restarting the nodes did not in and of itself get the schema disagreement 
resolved. We had to run the ALTER TABLE command individually on each of the 
disagreeing nodes once they came back up. 
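That is, the same statement again, issued through cqlsh pointed at each stale
node; something like this, where the names and the value are just placeholders:

  ALTER TABLE my_keyspace.my_cf WITH gc_grace_seconds = 864000;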

On Tuesday, November 26, 2013 at 11:24 AM, Robert Coli wrote:

 On Mon, Nov 25, 2013 at 6:42 PM, Josh Dzielak j...@keen.io 
 (mailto:j...@keen.io) wrote:
  Recently we had a strange thing happen. Altering schema (gc_grace_seconds) 
  for a column family resulted in a schema disagreement. 3/4 of nodes got it, 
  1/4 didn't. There was no partition at the time, nor were there multiple 
  schema updates issued. Going to the nodes with stale schema and trying to 
  do the ALTER TABLE there resulted in hanging. We were eventually able to 
  get schema agreement by restarting nodes, but both the initial disagreement 
  under normal conditions and the hanging ALTER TABLE seem pretty weird. Any 
  ideas here? Sound like a bug? 
 
 Yes, that sounds like a bug. This behavior is less common in 1.2.x than it 
 was previously, but still happens sometimes. It's interesting that restarting 
 the affected node helped; in previous versions of the hung schema issue, it 
 would survive a restart. 
  
  We're on 1.2.8.
  
 
 
 Unfortunately, unless you have a repro path, it is probably not worth 
 reporting a JIRA. 
 
 =Rob
  
 
 
 
 
 




Re: Schema disagreement under normal conditions, ALTER TABLE hangs

2013-11-26 Thread Robert Coli
On Mon, Nov 25, 2013 at 6:42 PM, Josh Dzielak j...@keen.io wrote:

 Recently we had a strange thing happen. Altering schema (gc_grace_seconds)
 for a column family resulted in a schema disagreement. 3/4 of nodes got it,
 1/4 didn't. There was no partition at the time, nor were there multiple
 schema updates issued. Going to the nodes with stale schema and trying to
 do the ALTER TABLE there resulted in hanging. We were eventually able to
 get schema agreement by restarting nodes, but both the initial disagreement
 under normal conditions and the hanging ALTER TABLE seem pretty weird. Any
 ideas here? Sound like a bug?


Yes, that sounds like a bug. This behavior is less common in 1.2.x than it
was previously, but still happens sometimes. It's interesting that
restarting the affected node helped; in previous versions of the hung schema
issue, it would survive a restart.


 We're on 1.2.8.


Unfortunately, unless you have a repro path, it is probably not worth
reporting a JIRA.

=Rob


Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Edward Sargisson

I would try nodetool resetlocalschema.


On 12-09-05 07:08 AM, Martin Koch wrote:

Hi list

We have a 5-node Cassandra cluster with a single 1.0.9 installation 
and four 1.0.6 installations.


We have tried installing 1.1.4 on one of the 1.0.6 nodes (following 
the instructions on http://www.datastax.com/docs/1.1/install/upgrading).


After bringing up 1.1.4 there are no errors in the log, but the 
cluster now suffers from schema disagreement


[default@unknown] describe cluster;
Cluster Information:
   Snitch: org.apache.cassandra.locator.SimpleSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] - The new 1.1.4 node

943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45, 
10.10.145.90, 10.38.127.80] - nodes in the old cluster


The recipe for recovering from schema disagreement 
(http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't 
cover the new directory layout. The system/Schema directory is empty 
save for a snapshots subdirectory. system/schema_columnfamilies and 
system/schema_keyspaces contain some files. As described in datastax's 
description, we tried running nodetool upgradesstables. When this had 
finished, describe schema in the cli showed a schema definition which 
seemed correct, but was indeed different from the schema on the other 
nodes in the cluster.


Any clues on how we should proceed?

Thanks,
/Martin Koch


--

Edward Sargisson

senior java developer
Global Relay

edward.sargis...@globalrelay.net






Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Omid Aladini
Do you see exceptions like "java.lang.UnsupportedOperationException:
Not a time-based UUID" in the log files of the nodes running 1.0.6 and 1.0.9?
Then it's probably due to [1], explained here [2] -- in this case you
either have to upgrade all nodes to 1.1.4, or, if you prefer keeping a
mixed-version cluster, accept that the 1.0.6 and 1.0.9 nodes won't be able to
join the cluster again unless you temporarily upgrade them to 1.0.11.

Cheers,
Omid

[1] https://issues.apache.org/jira/browse/CASSANDRA-1391
[2] https://issues.apache.org/jira/browse/CASSANDRA-4195

On Wed, Sep 5, 2012 at 4:08 PM, Martin Koch m...@issuu.com wrote:

 Hi list

 We have a 5-node Cassandra cluster with a single 1.0.9 installation and four 
 1.0.6 installations.

 We have tried installing 1.1.4 on one of the 1.0.6 nodes (following the 
 instructions on http://www.datastax.com/docs/1.1/install/upgrading).

 After bringing up 1.1.4 there are no errors in the log, but the cluster now 
 suffers from schema disagreement

 [default@unknown] describe cluster;
 Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
 59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] - The new 1.1.4 node

 943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45, 
 10.10.145.90, 10.38.127.80] - nodes in the old cluster

 The recipe for recovering from schema disagreement 
 (http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't cover the 
 new directory layout. The system/Schema directory is empty save for a 
 snapshots subdirectory. system/schema_columnfamilies and 
 system/schema_keyspaces contain some files. As described in datastax's 
 description, we tried running nodetool upgradesstables. When this had finished, 
 describe schema in the cli showed a schema definition which seemed correct, 
 but was indeed different from the schema on the other nodes in the cluster.

 Any clues on how we should proceed?

 Thanks,
 /Martin Koch


Re: Schema Disagreement after migration from 1.0.6 to 1.1.4

2012-09-05 Thread Martin Koch
Thanks, this is exactly it. We'd like to do a rolling upgrade - this is a
production cluster - so I guess we'll upgrade 1.0.6 -> 1.0.11 -> 1.1.4,
then.

/Martin

On Thu, Sep 6, 2012 at 2:35 AM, Omid Aladini omidalad...@gmail.com wrote:

 Do you see exceptions like java.lang.UnsupportedOperationException:
 Not a time-based UUID in log files of nodes running 1.0.6 and 1.0.9?
 Then it's probably due to [1] explained here [2] -- In this case you
 either have to upgrade all nodes to 1.1.4 or if you prefer keeping a
 mixed-version cluster, the 1.0.6 and 1.0.9 nodes won't be able to join
 the cluster again, unless you temporarily upgrade them to 1.0.11.

 Cheers,
 Omid

 [1] https://issues.apache.org/jira/browse/CASSANDRA-1391
 [2] https://issues.apache.org/jira/browse/CASSANDRA-4195

 On Wed, Sep 5, 2012 at 4:08 PM, Martin Koch m...@issuu.com wrote:
 
  Hi list
 
  We have a 5-node Cassandra cluster with a single 1.0.9 installation and
 four 1.0.6 installations.
 
  We have tried installing 1.1.4 on one of the 1.0.6 nodes (following the
 instructions on http://www.datastax.com/docs/1.1/install/upgrading).
 
  After bringing up 1.1.4 there are no errors in the log, but the cluster
 now suffers from schema disagreement
 
  [default@unknown] describe cluster;
  Cluster Information:
 Snitch: org.apache.cassandra.locator.SimpleSnitch
 Partitioner: org.apache.cassandra.dht.RandomPartitioner
 Schema versions:
  59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] - The new 1.1.4 node
 
  943fc0a0-f678-11e1--339cf8a6c1bf: [10.10.87.228, 10.10.153.45,
 10.10.145.90, 10.38.127.80] - nodes in the old cluster
 
  The recipe for recovering from schema disagreement (
 http://wiki.apache.org/cassandra/FAQ#schema_disagreement) doesn't cover
 the new directory layout. The system/Schema directory is empty save for a
 snapshots subdirectory. system/schema_columnfamilies and
 system/schema_keyspaces contain some files. As described in datastax's
  description, we tried running nodetool upgradesstables. When this had finished,
 describe schema in the cli showed a schema definition which seemed correct,
 but was indeed different from the schema on the other nodes in the cluster.
 
  Any clues on how we should proceed?
 
  Thanks,
  /Martin Koch



Re: Schema disagreement in 1.0.2

2011-12-15 Thread blafrisch
So I was able to get the schema agreeing on the two bad nodes, but I don't
particularly like the way that I did it.  One at a time, I shut them down,
removed Schema* and Migration*, then copied over Schema* from another
working node.  They then started up with the correct schema.  Did I do
something totally incorrect in doing that?
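Roughly, on each bad node (the paths assume the default data directory and the
good node's hostname is a placeholder):

  # stop cassandra first
  rm /var/lib/cassandra/data/system/Schema*-* /var/lib/cassandra/data/system/Migrations*-*
  scp 'good-node:/var/lib/cassandra/data/system/Schema*-*' /var/lib/cassandra/data/system/
  # then start cassandra again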

Also, some of my nodes are reporting that others are unreachable via the CLI
when executing describe cluster;.  Not all of the nodes do this, about
7/10 are perfectly fine.  I tried restarting each of the nodes that say
others are unreachable, when they came back up then their unreachable list
had changed. Nodetool gossipinfo describes everything perfectly fine as does
nodetool ring.

The topology of the cluster is 2 datacenters with 5 servers each, and an RF of
3.  Only one datacenter seems to have this issue.



Re: Schema disagreement issue in 1.0.0

2011-10-20 Thread aaron morton
Looks like a bug; a patch is here:
https://issues.apache.org/jira/browse/CASSANDRA-3391

Until it is fixed, avoid using CompositeType in the key_validator_class and blow 
away the Schema and Migrations SSTables. 
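With a default install that means stopping the node and doing something like
this (adjust the path to your data_file_directories):

  rm /var/lib/cassandra/data/system/Schema*-* /var/lib/cassandra/data/system/Migrations*-*

before starting it again so it picks the schema up from the rest of the cluster.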

Cheers


-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 20/10/2011, at 7:59 PM, Tamil selvan R.S wrote:

 I'm facing the following issue with a Cassandra 1.0 setup. The same works for 
 0.8.7.
 
 # cassandra-cli  -h x.x.x.x -f RTSCFs.sch 
 Connected to: Real Time Stats on x.x.x.x/9160
 Authenticated to keyspace: Stats
 39c3e120-fa24-11e0--61d449114eff
 Waiting for schema agreement...
 The schema has not settled in 10 seconds; further migrations are ill-advised 
 until it does.
 Versions are 39c3e120-fa24-11e0--61d449114eff:[x.x.x.x], 
 317eb8f0-fa24-11e0--61d449114eff:[x.x.x.y]
 I tried this http://wiki.apache.org/cassandra/FAQ#schema_disagreement
 
 But now when I restart the cluster I'm getting:
 
 `org.apache.cassandra.config.ConfigurationException: Invalid definition for 
 comparator` org.apache.cassandra.db.marshal.CompositeType
 This is my keyspace defn
 
 create keyspace Stats with placement_strategy = 
 'org.apache.cassandra.locator.SimpleStrategy' and 
 strategy_options={replication_factor:1};
 This is my CF defn
 
 create column family Sample_Stats with 
 default_validation_class=CounterColumnType
 and key_validation_class='CompositeType(UTF8Type,UTF8Type)'
 and comparator='CompositeType(UTF8Type, UTF8Type)'
 and replicate_on_write=true;
 What am I missing?
 



Re: Schema Disagreement

2011-08-05 Thread Yi Yang
Thanks Aaron.
On Aug 2, 2011, at 3:04 AM, aaron morton wrote:

 Hang on, using brain now. 
 
 That is triggering a small bug in the code see 
 https://issues.apache.org/jira/browse/CASSANDRA-2984
 
  For now, just remove the column meta data. 
 
 Cheers
 
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 
 On 2 Aug 2011, at 21:19, aaron morton wrote:
 
  What do you see when you run describe cluster; in the cassandra-cli? What's 
  the exact error you get, and is there anything in the server-side logs?
 
 Have you added other CF's before adding this one ? Did the schema agree 
 before starting this statement?
 
 I ran the statement below on the current trunk and it worked. 
 
 Cheers
 
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 
 On 2 Aug 2011, at 12:08, Dikang Gu wrote:
 
  I thought the schema disagreement problem was already solved in 0.8.1...
  
  One possible solution is to decommission the disagreeing node and rejoin it.
 
 
 On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote:
 Dear all,
 
  I'm always meeting up with schema disagreement problems while trying to create 
  a column family like this, using cassandra-cli:
 
 create column family sd
with column_type = 'Super'
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator = 'UTF8Type'
and column_metadata = [
{
column_name: 'time',
validation_class : 'LongType'
},{
column_name: 'open',
validation_class : 'FloatType'
},{
column_name: 'high',
validation_class : 'FloatType'
},{
column_name: 'low',
validation_class : 'FloatType'
},{
column_name: 'close',
validation_class : 'FloatType'
},{
column_name: 'volumn',
validation_class : 'LongType'
},{
column_name: 'splitopen',
validation_class : 'FloatType'
},{
column_name: 'splithigh',
validation_class : 'FloatType'
},{
column_name: 'splitlow',
validation_class : 'FloatType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
},{
column_name: 'splitvolume',
validation_class : 'LongType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
}
]
 ;
 
  I've tried to erase everything and restart Cassandra but this still 
  happens.   But when I clear the column_metadata section there is no more 
  disagreement error.   Do you have any idea why this happens?
 
 Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
 This is for testing only.   We'll move to dedicated servers later.
 
 Best regards,
 Yi
 
 
 
 -- 
 Dikang Gu
 
 0086 - 18611140205
 
 
 



Re: Schema Disagreement

2011-08-03 Thread aaron morton
It means the node you ran the command against could not contact node 
192.168.1.25; it's probably down. 

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 3 Aug 2011, at 14:03, Dikang Gu wrote:

  I followed the instructions in the FAQ, but got the following when running 
  describe cluster;
 
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions: 
   dd73c740-bd84-11e0--98dab94442fb: [192.168.1.28, 192.168.1.9, 
 192.168.1.27]
   UNREACHABLE: [192.168.1.25]
 
 What's the UNREACHABLE?
 
 Thanks.
 
 -- 
 Dikang Gu
 0086 - 18611140205
 On Wednesday, August 3, 2011 at 11:28 AM, Jonathan Ellis wrote:
 
 Have you seen http://wiki.apache.org/cassandra/FAQ#schema_disagreement ?
 
 On Tue, Aug 2, 2011 at 10:25 PM, Dikang Gu dikan...@gmail.com wrote:
  I also encountered the schema disagreement in my 0.8.1 cluster today…
 
 The disagreement occurs when I create a column family using the hector api,
 and I found the following errors in my cassandra/system.log
 ERROR [pool-2-thread-99] 2011-08-03 11:21:18,051 Cassandra.java (line 3378)
 Internal error processing remove
 java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
 down
 at
 org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
 at
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
 at
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
 at
 org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
 at
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
 at
 org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
 at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
 at
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
 at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
 at
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
 at
 org.apache.cassandra.thrift.CassandraServer.internal_remove(CassandraServer.java:539)
 at
 org.apache.cassandra.thrift.CassandraServer.remove(CassandraServer.java:547)
 at
 org.apache.cassandra.thrift.Cassandra$Processor$remove.process(Cassandra.java:3370)
 at
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 And when I try to decommission, I got this:
 ERROR [pool-2-thread-90] 2011-08-03 11:24:35,611 Cassandra.java (line 3462)
 Internal error processing batch_mutate
 java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
 down
 at
 org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
 at
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
 at
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
 at
 org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
 at
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
 at
 org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
 at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
 at
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
 at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
 at
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
 at
 org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:511)
 at
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:519)
 at
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3454)
 at
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 What does this mean?
 Thanks.
 --
 Dikang Gu
 0086 - 18611140205
 
 On Tuesday, August 2, 2011 at 6:04 PM, aaron morton wrote:
 
 Hang on, using brain now.
 That is triggering a small bug in the code
 see https://issues.apache.org/jira/browse/CASSANDRA-2984
  For now, just remove the column meta data.
 Cheers
 -
 Aaron Morton
 Freelance Cassandra Developer
 

Re: Schema Disagreement

2011-08-02 Thread aaron morton
What do you see when you run describe cluster; in the cassandra-cli? What's the 
exact error you get, and is there anything in the server-side logs?

Have you added other CF's before adding this one ? Did the schema agree before 
starting this statement?

I ran the statement below on the current trunk and it worked. 

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 2 Aug 2011, at 12:08, Dikang Gu wrote:

  I thought the schema disagreement problem was already solved in 0.8.1...
  
  One possible solution is to decommission the disagreeing node and rejoin it.
 
 
 On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote:
 Dear all,
 
  I'm always meeting up with schema disagreement problems while trying to create a 
  column family like this, using cassandra-cli:
 
 create column family sd
with column_type = 'Super'
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator = 'UTF8Type'
and column_metadata = [
{
column_name: 'time',
validation_class : 'LongType'
},{
column_name: 'open',
validation_class : 'FloatType'
},{
column_name: 'high',
validation_class : 'FloatType'
},{
column_name: 'low',
validation_class : 'FloatType'
},{
column_name: 'close',
validation_class : 'FloatType'
},{
column_name: 'volumn',
validation_class : 'LongType'
},{
column_name: 'splitopen',
validation_class : 'FloatType'
},{
column_name: 'splithigh',
validation_class : 'FloatType'
},{
column_name: 'splitlow',
validation_class : 'FloatType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
},{
column_name: 'splitvolume',
validation_class : 'LongType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
}
]
 ;
 
  I've tried to erase everything and restart Cassandra but this still happens.  
   But when I clear the column_metadata section there is no more disagreement 
  error.   Do you have any idea why this happens?
 
 Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
 This is for testing only.   We'll move to dedicated servers later.
 
 Best regards,
 Yi
 
 
 
 -- 
 Dikang Gu
 
 0086 - 18611140205
 



Re: Schema Disagreement

2011-08-02 Thread aaron morton
Hang on, using brain now. 

That is triggering a small bug in the code see 
https://issues.apache.org/jira/browse/CASSANDRA-2984

For now, just remove the column meta data. 

Cheers

-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 2 Aug 2011, at 21:19, aaron morton wrote:

  What do you see when you run describe cluster; in the cassandra-cli? What's 
  the exact error you get, and is there anything in the server-side logs?
 
 Have you added other CF's before adding this one ? Did the schema agree 
 before starting this statement?
 
 I ran the statement below on the current trunk and it worked. 
 
 Cheers
 
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 
 On 2 Aug 2011, at 12:08, Dikang Gu wrote:
 
  I thought the schema disagreement problem was already solved in 0.8.1...
  
  One possible solution is to decommission the disagreeing node and rejoin it.
 
 
 On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote:
 Dear all,
 
  I'm always meeting up with schema disagreement problems while trying to create a 
  column family like this, using cassandra-cli:
 
 create column family sd
with column_type = 'Super'
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator = 'UTF8Type'
and column_metadata = [
{
column_name: 'time',
validation_class : 'LongType'
},{
column_name: 'open',
validation_class : 'FloatType'
},{
column_name: 'high',
validation_class : 'FloatType'
},{
column_name: 'low',
validation_class : 'FloatType'
},{
column_name: 'close',
validation_class : 'FloatType'
},{
column_name: 'volumn',
validation_class : 'LongType'
},{
column_name: 'splitopen',
validation_class : 'FloatType'
},{
column_name: 'splithigh',
validation_class : 'FloatType'
},{
column_name: 'splitlow',
validation_class : 'FloatType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
},{
column_name: 'splitvolume',
validation_class : 'LongType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
}
]
 ;
 
  I've tried to erase everything and restart Cassandra but this still happens. 
    But when I clear the column_metadata section there is no more disagreement 
  error.   Do you have any idea why this happens?
 
 Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
 This is for testing only.   We'll move to dedicated servers later.
 
 Best regards,
 Yi
 
 
 
 -- 
 Dikang Gu
 
 0086 - 18611140205
 
 



Re: Schema Disagreement

2011-08-02 Thread Dikang Gu
I also encountered the schema disagreement in my 0.8.1 cluster today…

The disagreement occurs when I create a column family using the hector api, and 
I found the following errors in my cassandra/system.log

ERROR [pool-2-thread-99] 2011-08-03 11:21:18,051 Cassandra.java (line 3378) 
Internal error processing remove
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut 
down
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
at org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
at 
org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
at org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
at 
org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
at 
org.apache.cassandra.thrift.CassandraServer.internal_remove(CassandraServer.java:539)
at org.apache.cassandra.thrift.CassandraServer.remove(CassandraServer.java:547)
at 
org.apache.cassandra.thrift.Cassandra$Processor$remove.process(Cassandra.java:3370)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

And when I tried to decommission, I got this:

ERROR [pool-2-thread-90] 2011-08-03 11:24:35,611 Cassandra.java (line 3462) 
Internal error processing batch_mutate
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut 
down
at 
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
at org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
at 
org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
at org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
at org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
at 
org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
at 
org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:511)
at 
org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:519)
at 
org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3454)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)

What does this mean? 

Thanks.

-- 
Dikang Gu
0086 - 18611140205
On Tuesday, August 2, 2011 at 6:04 PM, aaron morton wrote: 
 Hang on, using brain now. 
 
 That is triggering a small bug in the code see 
 https://issues.apache.org/jira/browse/CASSANDRA-2984
 
  For now, just remove the column meta data. 
 
 Cheers
 
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 
 
 
 
 
 On 2 Aug 2011, at 21:19, aaron morton wrote:
   What do you see when you run describe cluster; in the cassandra-cli? What's 
   the exact error you get, and is there anything in the server-side logs?
  
  Have you added other CF's before adding this one ? Did the schema agree 
  before starting this statement?
  
  I ran the statement below on the current trunk and it worked. 
  
  Cheers
  
  -
  Aaron Morton
  Freelance Cassandra Developer
  @aaronmorton
  http://www.thelastpickle.com
  
  
  
  
  
  On 2 Aug 2011, at 12:08, Dikang Gu wrote:
    I thought the schema disagreement problem was already solved in 0.8.1...
    
    One possible solution is to decommission the disagreeing node and rejoin it.
   
   
   On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote:
Dear all,

 I'm always meeting up with schema disagreement problems while trying to 
 create a 

Re: Schema Disagreement

2011-08-02 Thread Jonathan Ellis
Have you seen http://wiki.apache.org/cassandra/FAQ#schema_disagreement ?

On Tue, Aug 2, 2011 at 10:25 PM, Dikang Gu dikan...@gmail.com wrote:
 I also encountered the schema disagreement in my 0.8.1 cluster today…

 The disagreement occurs when I create a column family using the hector api,
 and I found the following errors in my cassandra/system.log
 ERROR [pool-2-thread-99] 2011-08-03 11:21:18,051 Cassandra.java (line 3378)
 Internal error processing remove
 java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
 down
 at
 org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
 at
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
 at
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
 at
 org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
 at
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
 at
 org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
 at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
 at
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
 at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
 at
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
 at
 org.apache.cassandra.thrift.CassandraServer.internal_remove(CassandraServer.java:539)
 at
 org.apache.cassandra.thrift.CassandraServer.remove(CassandraServer.java:547)
 at
 org.apache.cassandra.thrift.Cassandra$Processor$remove.process(Cassandra.java:3370)
 at
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 And when I try to decommission, I got this:
 ERROR [pool-2-thread-90] 2011-08-03 11:24:35,611 Cassandra.java (line 3462)
 Internal error processing batch_mutate
 java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
 down
 at
 org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
 at
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
 at
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
 at
 org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
 at
 org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
 at
 org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
 at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
 at
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
 at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
 at
 org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
 at
 org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:511)
 at
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:519)
 at
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3454)
 at
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:636)
 What does this mean?
 Thanks.
 --
 Dikang Gu
 0086 - 18611140205

 On Tuesday, August 2, 2011 at 6:04 PM, aaron morton wrote:

 Hang on, using brain now.
 That is triggering a small bug in the code
 see https://issues.apache.org/jira/browse/CASSANDRA-2984
  For now, just remove the column meta data.
 Cheers
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 On 2 Aug 2011, at 21:19, aaron morton wrote:

  What do you see when you run describe cluster; in the cassandra-cli? What's
  the exact error you get, and is there anything in the server-side logs?
 Have you added other CF's before adding this one ? Did the schema agree
 before starting this statement?
 I ran the statement below on the current trunk and it worked.
 Cheers
 -
 Aaron Morton
 Freelance Cassandra Developer
 @aaronmorton
 http://www.thelastpickle.com
 On 2 Aug 2011, at 12:08, Dikang Gu wrote:

  I thought the schema disagreement problem was already solved in 0.8.1...
  One possible solution is to decommission the disagreeing node and rejoin it.

 On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang 

Re: Schema Disagreement

2011-08-02 Thread Dikang Gu
I followed the instructions in the FAQ, but got the following when running 
describe cluster;

Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions: 
dd73c740-bd84-11e0--98dab94442fb: [192.168.1.28, 192.168.1.9, 192.168.1.27]
UNREACHABLE: [192.168.1.25]


What's the UNREACHABLE?

Thanks.

-- 
Dikang Gu
0086 - 18611140205
On Wednesday, August 3, 2011 at 11:28 AM, Jonathan Ellis wrote: 
 Have you seen http://wiki.apache.org/cassandra/FAQ#schema_disagreement ?
 
 On Tue, Aug 2, 2011 at 10:25 PM, Dikang Gu dikan...@gmail.com wrote:
   I also encountered the schema disagreement in my 0.8.1 cluster today…
  
  The disagreement occurs when I create a column family using the hector api,
  and I found the following errors in my cassandra/system.log
  ERROR [pool-2-thread-99] 2011-08-03 11:21:18,051 Cassandra.java (line 3378)
  Internal error processing remove
  java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
  down
  at
  org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
  at
  java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
  at
  java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
  at
  org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
  at
  org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
  at
  org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
  at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
  at
  org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
  at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
  at
  org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
  at
  org.apache.cassandra.thrift.CassandraServer.internal_remove(CassandraServer.java:539)
  at
  org.apache.cassandra.thrift.CassandraServer.remove(CassandraServer.java:547)
  at
  org.apache.cassandra.thrift.Cassandra$Processor$remove.process(Cassandra.java:3370)
  at
  org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
  at
  org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
  at
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:636)
  And when I try to decommission, I got this:
  ERROR [pool-2-thread-90] 2011-08-03 11:24:35,611 Cassandra.java (line 3462)
  Internal error processing batch_mutate
  java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
  down
  at
  org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:73)
  at
  java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:816)
  at
  java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1337)
  at
  org.apache.cassandra.service.StorageProxy.insertLocal(StorageProxy.java:360)
  at
  org.apache.cassandra.service.StorageProxy.sendToHintedEndpoints(StorageProxy.java:241)
  at
  org.apache.cassandra.service.StorageProxy.access$000(StorageProxy.java:62)
  at org.apache.cassandra.service.StorageProxy$1.apply(StorageProxy.java:99)
  at
  org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:210)
  at org.apache.cassandra.service.StorageProxy.mutate(StorageProxy.java:154)
  at
  org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:560)
  at
  org.apache.cassandra.thrift.CassandraServer.internal_batch_mutate(CassandraServer.java:511)
  at
  org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:519)
  at
  org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.process(Cassandra.java:3454)
  at
  org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
  at
  org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:187)
  at
  java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
  at
  java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
  at java.lang.Thread.run(Thread.java:636)
  What does this mean?
  Thanks.
  --
  Dikang Gu
  0086 - 18611140205
  
  On Tuesday, August 2, 2011 at 6:04 PM, aaron morton wrote:
  
  Hang on, using brain now.
  That is triggering a small bug in the code
  see https://issues.apache.org/jira/browse/CASSANDRA-2984
   For now, just remove the column meta data.
  Cheers
  -
  Aaron Morton
  Freelance Cassandra Developer
  @aaronmorton
  http://www.thelastpickle.com
  On 2 Aug 2011, at 21:19, aaron morton wrote:
  
   What do you see when you run describe cluster; in the cassandra-cli? What's
   the exact error you get and is 

Re: Schema Disagreement

2011-08-01 Thread Dikang Gu
I thought the schema disagreement problem was already solved in 0.8.1...

One possible solution is to decommission the disagreeing node and rejoin it.
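Roughly, on the node whose schema disagrees (data paths assume the defaults):

  nodetool decommission
  # stop cassandra, then wipe its local state
  rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
  # start cassandra again so it re-bootstraps and pulls the current schema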


On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote:

 Dear all,

  I'm always meeting up with schema disagreement problems while trying to create
  a column family like this, using cassandra-cli:

 create column family sd
with column_type = 'Super'
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator = 'UTF8Type'
and column_metadata = [
{
column_name: 'time',
validation_class : 'LongType'
},{
column_name: 'open',
validation_class : 'FloatType'
},{
column_name: 'high',
validation_class : 'FloatType'
},{
column_name: 'low',
validation_class : 'FloatType'
},{
column_name: 'close',
validation_class : 'FloatType'
},{
column_name: 'volumn',
validation_class : 'LongType'
},{
column_name: 'splitopen',
validation_class : 'FloatType'
},{
column_name: 'splithigh',
validation_class : 'FloatType'
},{
column_name: 'splitlow',
validation_class : 'FloatType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
},{
column_name: 'splitvolume',
validation_class : 'LongType'
},{
column_name: 'splitclose',
validation_class : 'FloatType'
}
]
 ;

  I've tried to erase everything and restart Cassandra but this still
  happens.   But when I clear the column_metadata section there is no more
  disagreement error.   Do you have any idea why this happens?

 Environment: 2 VMs, using the same harddrive, Cassandra 0.8.1, Ubuntu 10.04
 This is for testing only.   We'll move to dedicated servers later.

 Best regards,
 Yi




-- 
Dikang Gu

0086 - 18611140205