A rolling restart after making DDL changes saves us. We hit this because of a race 
condition in our app servers, but it could happen for various other reasons, 
such as an overloaded node, network issues, etc.
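Before and after each step of a rolling restart, it helps to confirm that all nodes agree on one schema version. A minimal sketch (the parsing helper is hypothetical, not a Cassandra API; the sample text mirrors the `nodetool describecluster` output shown later in this thread):

```python
import re

def schema_versions(describecluster_output: str) -> dict:
    """Map each schema version UUID in `nodetool describecluster` output
    to the list of node addresses reporting it. Hypothetical helper."""
    versions = {}
    # Match lines like "  <uuid>: [10.42.209.245, 10.42.247.173]"
    pattern = re.compile(r"([0-9a-f-]{36}):\s*\[([^\]]*)\]")
    for uuid, nodes in pattern.findall(describecluster_output):
        versions[uuid] = [n.strip() for n in nodes.split(",") if n.strip()]
    return versions

# Sample output resembling the disagreeing cluster in this thread
sample = """
Schema versions:
    e2275d0f-a5fc-39d9-8f11-268b5e9dc295: [10.42.209.245]

    5f5f66f5-d6aa-3b90-b674-e08811d4d412: [10.42.247.173]
"""

versions = schema_versions(sample)
# More than one distinct version means the nodes disagree on the schema
print(len(versions))  # → 2
```

In practice you would feed this the output of `nodetool describecluster` and only proceed to restart the next node once a single version remains.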

Sent from my iPhone

> On Oct 12, 2017, at 3:46 AM, Carlos Rolo <r...@pythian.com> wrote:
> 
> Which version are you running? I got stuck in a similar situation (with a lot 
> more nodes), and the only way to recover was to stop the whole cluster and 
> start the nodes one by one.
> 
> 
> 
> Regards,
> 
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>  
> Pythian - Love your data
> 
> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin: 
> linkedin.com/in/carlosjuzarterolo 
> Mobile: +351 918 918 100 
> www.pythian.com
> 
>> On Thu, Oct 12, 2017 at 5:53 AM, Pradeep Chhetri <prad...@stashaway.com> 
>> wrote:
>> Hello everyone,
>> 
>> We had some issues yesterday in our 3-node cluster where the application 
>> tried to create the same table twice in quick succession, and the cluster became unstable.
>> 
>> Temporarily, we reduced it to a single-node cluster, which gave us some relief.
>> 
>> Now, when we try to bootstrap a new node and add it to the cluster, we see 
>> a schema mismatch issue.
>> 
>> # nodetool status
>> Datacenter: datacenter1
>> =======================
>> Status=Up/Down
>> |/ State=Normal/Leaving/Joining/Moving
>> --  Address        Load       Tokens       Owns (effective)  Host ID                               Rack
>> UN  10.42.247.173  3.07 GiB   256          100.0%            dffc39e5-d4ba-4b10-872e-0e3cc10f5e08  rack1
>> UN  10.42.209.245  2.25 GiB   256          100.0%            9b99d5d8-818e-4741-9533-259d0fc0e16d  rack1
>> 
>> root@cassandra-2:~# nodetool describecluster
>> Cluster Information:
>>     Name: sa-cassandra
>>     Snitch: org.apache.cassandra.locator.DynamicEndpointSnitch
>>     Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>>     Schema versions:
>>         e2275d0f-a5fc-39d9-8f11-268b5e9dc295: [10.42.209.245]
>> 
>>         5f5f66f5-d6aa-3b90-b674-e08811d4d412: [10.42.247.173]
>> 
>> Freshly bootstrapped node - 10.42.247.173
>> Single node from original cluster - 10.42.209.245
>> 
>> I read 
>> https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/schemaDisagree.html
>>  and tried restarting the new node, but it didn't help.
>> 
>> Please suggest what we can do; we are facing this issue in production.
>> 
>> Thank you.
> 
