Hi,
On Tue, May 1, 2018 at 10:27 PM Gábor Auth wrote:
> One or two years ago I tried the CDC feature but switched it off... maybe
> it is a side effect of the switched-off CDC? How can I fix it? :)
>
Okay, I've worked it out. Updated the schema of the affected keyspaces on the
Hi,
On Tue, May 1, 2018 at 7:40 PM Gábor Auth wrote:
> What can I do? Any suggestion? :(
>
Okay, I've diffed the good and the bad system_schema tables. The only
difference is the `cdc` field in three keyspaces (in `tables` and `views`):
- the value of `cdc` field on the
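The fix reported later in the thread (updating the schema of the affected keyspaces) can be sketched as a small cqlsh loop. This is only a sketch: the table names and coordinator host are placeholders, not from the thread; substitute whatever the `system_schema` diff pointed at.

```shell
# Hedged sketch: re-align the cdc flag on the tables where it diverged.
# Table names and host below are hypothetical.
disable_cdc() {
  local host=$1; shift
  local tbl
  for tbl in "$@"; do
    cqlsh "$host" -e "ALTER TABLE ${tbl} WITH cdc = false;"
  done
}

# e.g. disable_cdc 10.0.0.1 my_keyspace.my_table
```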
Hi,
On Mon, Apr 30, 2018 at 11:11 PM Gábor Auth wrote:
> On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail
> wrote:
>
>> What steps have you performed to add the new DC? Have you tried to follow
>> certain procedures like this?
>>
>>
Hi,
On Mon, Apr 30, 2018 at 11:03 PM Ali Hubail
wrote:
> What steps have you performed to add the new DC? Have you tried to follow
> certain procedures like this?
>
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html
>
Yes,
To: user@cassandra.apache.org
Subject: Re: Schema disagreement
Hi,
On Mon, Apr 30, 2018 at 11:39 AM Gábor Auth wrote:
> I've just tried to add a new DC and new node to my cluster (3 DCs and 10
> nodes) and the new node has a different schema version:
>
Is this normal? The node is marked down but completes a repair successfully?
WARN
Hi,
I've just tried to add a new DC and new node to my cluster (3 DCs and 10
nodes) and the new node has a different schema version:
Cluster Information:
Name: cluster
Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
Partitioner:
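A quick way to spot this condition, assuming the output format shown in these snippets (one `<uuid>: [ip, ...]` line per version under `Schema versions:`): pipe the cluster description through a small filter and check whether more than one version id appears.

```shell
# Count distinct schema versions in `nodetool describecluster` output.
# Anything above 1 means the cluster disagrees on schema.
schema_version_count() {
  grep -c ': \['
}

# usage sketch: nodetool describecluster | schema_version_count
```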
Hi Michael,
Did you ever get an answer on this? I'm curious to hear for future
reference.
Thanks,
Jens
On Monday, June 20, 2016, Michael Fong <michael.f...@ruckuswireless.com>
wrote:
> Hi,
>
>
>
> We have recently encountered several schema disagreement issue while
Hi,
We have recently encountered several schema disagreement issues while upgrading
Cassandra. In one of the cases, the 2-node cluster idled for over 30 minutes
and their schemas remained unsynced. Due to other logic flows, Cassandra cannot
be restarted, and hence we need to come up with an alternative
On Mon, Jul 6, 2015 at 1:30 PM, John Wong gokoproj...@gmail.com wrote:
But is there a problem with letting schema disagreement running for a long
time?
It depends on what the nature of the desynch is, but generally speaking
there may be.
If you added a column or a columnfamily, and one node
Thanks. Yeah, we typically restart the nodes in the minor version to force a
resync.
But is there a problem with letting schema disagreement running for a long
time?
Thanks.
John
On Mon, Jul 6, 2015 at 2:29 PM, Robert Coli rc...@eventbrite.com wrote:
On Thu, Jul 2, 2015 at 9:31 PM, John Wong
after a certain time, which led me to believe something happened internally,
although that was a wild guess.
But is it safe to be okay with schema disagreement? I worry about data
consistency if I let it sit too long.
In general one shouldn't run with schema disagreement persistently.
I've seen
Hi.
Here is a schema disagreement we encountered.
Schema versions:
b6467059-5897-3cc1-9ee2-73f31841b0b0: [10.0.1.100, 10.0.1.109]
c8971b2d-0949-3584-aa87-0050a4149bbd: [10.0.1.55, 10.0.1.16, 10.0.1.77]
c733920b-2a31-30f0-bca1-45a8c9130a2c: [10.0.1.221]
We deployed an application
Hello,
I have a cluster running and I'm trying to change the schema on it. Although it
succeeds on one cluster (a test one), on another it keeps creating two separate
schema versions (both are 2 DC configurations; the cluster where it goes wrong
ends up with a schema version on each DC).
I use
From: Jonathan [mailto:jonathan.deme...@macq.eu]
Sent: Tuesday, 12 August 2014 11:03
To: user@cassandra.apache.org
Subject: Cassandra schema disagreement
Hi Gaurav, a schema versioning bug was fixed in 2.0.7.
Best wishes, Duncan.
On 12/05/14 21:31, Gaurav Sehgal wrote:
We have recently started seeing a lot of Schema Disagreement errors. We are
using Cassandra 2.0.6 with Oracle Java 1.7. I went through the Cassandra FAQ and
followed the below
Hey Gaurav,
You should consider moving to 2.0.7 which fixes a bunch of these schema
disagreement problems. You could also play around with nodetool
resetlocalschema on the nodes that are behind, but be careful with that
one. I'd go with 2.0.7 first for sure.
Thanks,
Vince.
On Mon, May 12
On Tue, May 13, 2014 at 5:11 PM, Donald Smith
donald.sm...@audiencescience.com wrote:
I too have noticed that after doing “nodetool flush” (or “nodetool
drain”), the commit logs are still there. I think they’re NEW (empty)
commit logs, but I may be wrong. Anyone know?
Assuming they are
We have recently started seeing a lot of Schema Disagreement errors. We are
using Cassandra 2.0.6 with Oracle Java 1.7. I went through the Cassandra
FAQ and followed the below steps:
- nodetool disablethrift
- nodetool disablegossip
- nodetool drain
- 'kill pid'
As per
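Those four steps, as a function: a sketch only, assuming `nodetool` talks to the local node and that `$pid` is the Cassandra process id.

```shell
# Safe-shutdown sequence from the FAQ: stop client traffic and gossip,
# flush memtables and the commitlog via drain, then stop the process.
safe_stop() {
  local pid=$1
  nodetool disablethrift   # stop accepting client (Thrift) requests
  nodetool disablegossip   # leave the ring quietly
  nodetool drain           # flush everything to sstables
  kill "$pid"
}
```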
Upgrade to 2.0.7 fixed this for me.
You can also try 'nodetool resetlocalschema' on disagreeing nodes. This
worked temporarily for me in 2.0.6.
ml
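If you go the `resetlocalschema` route instead, a hedged sketch (the host list is hypothetical; run it only against the nodes stuck on the minority schema version):

```shell
# Drop the local schema on each lagging node and let it re-pull the
# schema from the rest of the cluster.
reset_lagging_nodes() {
  local host
  for host in "$@"; do
    nodetool -h "$host" resetlocalschema
  done
}
```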
On Mon, May 12, 2014 at 3:31 PM, Gaurav Sehgal gsehg...@gmail.com wrote:
We have recently started seeing a lot of Schema Disagreement errors. We
Thanks Rob. Let me add one thing in case someone else finds this thread -
Restarting the nodes did not, in and of itself, resolve the schema
disagreement. We had to run the ALTER TABLE command individually on each of the
disagreeing nodes once they came back up.
On Tuesday, November 26
On Mon, Nov 25, 2013 at 6:42 PM, Josh Dzielak j...@keen.io wrote:
Recently we had a strange thing happen. Altering schema (gc_grace_seconds)
for a column family resulted in a schema disagreement. 3/4 of nodes got it,
1/4 didn't. There was no partition at the time, nor was there multiple
Recently we had a strange thing happen. Altering schema (gc_grace_seconds) for
a column family resulted in a schema disagreement. 3/4 of nodes got it, 1/4
didn't. There was no partition at the time, nor was there multiple schema
updates issued. Going to the nodes with stale schema and trying
4) move schema and migration CF tables out of the way on all nodes
5) start cluster
6) re-load schema, being careful to explicitly check for schema
agreement on all nodes between schema modifying statements
In many/most cases of schema disagreement, people try the FAQ approach
and it doesn't work and they end up being forced to do the above
anyway. In general if you can tolerate the downtime, you should save
yourself the effort and just do the above process.
=Rob
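Step 4 of that procedure, sketched for one node (run it locally on every node while the whole cluster is down; the data directory path is an assumption, so use your `data_file_directories` setting):

```shell
# Move the schema and migration system sstables aside rather than
# deleting them, so the old state can be restored if needed.
move_schema_sstables() {
  local data_dir=$1 backup_dir=$2
  local f
  mkdir -p "$backup_dir"
  for f in "$data_dir"/system/Schema*-* "$data_dir"/system/Migrations*-*; do
    [ -e "$f" ] && mv "$f" "$backup_dir"/
  done
  return 0
}

# e.g. move_schema_sstables /var/lib/cassandra/data /var/lib/cassandra/schema-backup
```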
Hello,
I have a cluster of 4 nodes and two of them are on a different schema. I
tried to run the commands described in the FAQ section but no luck
(http://wiki.apache.org/cassandra/FAQ#schema_disagreement).
After running the commands, I get back to the same issue. Cannot afford to
lose the data
(http://www.datastax.com/docs/1.1/install/upgrading).
After bringing up 1.1.4 there are no errors in the log, but the cluster now
suffers from schema disagreement:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
59adb24e-f3cd-3e02-97f0-5b395827453f: [10.10.29.67] - The new
Hi !
We got into a schema disagreement situation on 1.0.10, having 250GB of
compressed data per node.
Following
http://wiki.apache.org/cassandra/FAQ#schema_disagreement
after node restart it looks like it is replaying all schema changes one by one,
right?
As we did a lot of them during cluster
I know you specified 1.0.10, but C* 1.1 solves this problem:
http://www.datastax.com/dev/blog/the-schema-management-renaissance
On Thu, Jul 26, 2012 at 7:29 AM, Mateusz Korniak
mateusz-li...@ant.gliwice.pl wrote:
Hi !
We got into schema disagreement situation on 1.0.10 having 250GB
to say the errors are expected.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 19/05/2012, at 6:34 AM, Piavlo wrote:
Hi,
I had a schema disagreement problem in cassandra 1.0.9 cluster, where
one node had different schema version.
So I followed the faq at
http://wiki.apache.org/cassandra/FAQ#schema_disagreement
disabled gossip, disabled thrift, drained and finally stopped the
cassandra process, on startup
I'm facing the following issue with a Cassandra 1.0 setup. The same works for
0.8.7
# cassandra-cli -h x.x.x.x -f RTSCFs.sch
Connected to: Real Time Stats on x.x.x.x/9160
Authenticated to keyspace: Stats
39c3e120-fa24-11e0--61d449114eff
Waiting for schema agreement...
The schema has not
Looks like a bug, patch is here
https://issues.apache.org/jira/browse/CASSANDRA-3391
Until it is fixed avoid using CompositeType in the key_validator_class and blow
away the Schema and Migrations SSTables.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
I don't have time to look into the reasons for that error, but that does not
sound good. It kind of sounds like there are multiple migration chains out
there in the cluster. This could come from applying changes to different nodes
at the same time.
Is this a prod system? If not, I would shut it
um. There has got to be something stopping the migration from completing.
Turn the logging up to DEBUG before starting and look for messages from
MigrationManager.java
Provide all the log messages from Migration.java on the 1.27 node
Cheers
-
Aaron Morton
Freelance Cassandra
Hi Aaron,
I set the log level to DEBUG, and found a lot of forceFlush debug info in the
log:
DEBUG [StreamStage:1] 2011-08-10 11:31:56,345 ColumnFamilyStore.java (line 725)
forceFlush requested but everything is clean
DEBUG [StreamStage:1] 2011-08-10 11:31:56,345 ColumnFamilyStore.java (line
And a lot of 'Migration not applied' logs.
DEBUG [MigrationStage:1] 2011-08-10 11:36:29,376
DefinitionsUpdateVerbHandler.java (line 70) Applying AddColumnFamily from
/192.168.1.9
DEBUG [MigrationStage:1] 2011-08-10 11:36:29,376
DefinitionsUpdateVerbHandler.java (line 80) Migration not applied Previous
Did you check the logs on 1.27 for errors?
Could you be seeing this ? https://issues.apache.org/jira/browse/CASSANDRA-2867
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 7 Aug 2011, at 16:24, Dikang Gu wrote:
I restart both
Hi Aaron,
I repeated the whole procedure:
1. kill the cassandra instance on 1.27.
2. rm the data/system/Migrations-g-*
3. rm the data/system/Schema-g-*
4. bin/cassandra to start cassandra.
Now, the migration seems to stop and I do not find any error in the system.log yet.
The ring looks good:
I have tried this, but the schema still does not agree in the cluster:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
UNREACHABLE: [192.168.1.28]
After the restart, what was in the logs on the 1.27 machine from the
Migration.java logger? Some of the messages will start with 'Applying
migration'.
You should have shut down both of the nodes, then deleted the schema* and
migration* system sstables, then restarted one of them and
I restarted both nodes, deleted the schema* and migration* sstables, and
restarted them.
The current cluster looks like this:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
743fe590-bf48-11e0--4d205df954a7: [192.168.1.28]
75eece10-bf48-11e0--4d205df954a7: [192.168.1.9,
Based on http://wiki.apache.org/cassandra/FAQ#schema_disagreement,
75eece10-bf48-11e0--4d205df954a7 owns the majority, so shut down and
remove the schema* and migration* sstables from both 192.168.1.28 and
192.168.1.27
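Picking the majority version by hand works at this scale; for a larger ring, the same decision can be sketched as a filter (assuming each version's node list fits on one line of the `describe cluster` / `nodetool describecluster` output):

```shell
# Print the schema version held by the most nodes - the one to keep when
# following the FAQ procedure above.
majority_schema_version() {
  awk -F: '/: \[/ {
    n = split($2, ips, ",")              # node count = IPs in the bracket list
    if (n > best) { best = n; ver = $1 }
  } END { gsub(/[[:space:]]/, "", ver); print ver }'
}
```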
2011/8/5 Dikang Gu dikan...@gmail.com:
[default@unknown] describe cluster;
Thanks Aaron.
On Aug 2, 2011, at 3:04 AM, aaron morton wrote:
Hang on, using brain now.
That is triggering a small bug in the code, see
https://issues.apache.org/jira/browse/CASSANDRA-2984
For now, just remove the column meta data.
Cheers
-
Aaron Morton
Freelance
What do you see when you run describe cluster; in the cassandra-cli? What's the
exact error you get, and is there anything in the server side logs?
Have you added other CFs before adding this one? Did the schema agree before
starting this statement?
I ran the statement below on the current
I also ran into schema disagreement in my 0.8.1 cluster today…
The disagreement occurs when I create a column family using the hector api, and
I found the following errors in my cassandra/system.log:
ERROR [pool-2-thread-99] 2011-08-03 11:21:18,051 Cassandra.java (line 3378)
Internal error
Have you seen http://wiki.apache.org/cassandra/FAQ#schema_disagreement ?
On Tue, Aug 2, 2011 at 10:25 PM, Dikang Gu dikan...@gmail.com wrote:
Dear all,
I keep running into schema disagreement problems while trying to create a
column family like this, using cassandra-cli:
create column family sd
with column_type = 'Super'
and key_validation_class = 'UUIDType'
and comparator = 'LongType'
and subcomparator =
I thought the schema disagreement problem was already solved in 0.8.1...
One possible solution is to decommission the disagreeing node and rejoin it.
On Tue, Aug 2, 2011 at 8:01 AM, Yi Yang yy...@me.com wrote: