[ https://issues.apache.org/jira/browse/CASSANDRA-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387300#comment-14387300 ]

Gary Ogden commented on CASSANDRA-8917:
---------------------------------------

I ran nodetool status on each node and got the exact same result:
{quote}
Datacenter: PRODDC1
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns   Host ID                               Rack
UN  10.6.71.204  122.39 GB  256     31.2%  93772457-9f70-42ea-89f2-a63d40d76703  RAC2
UN  10.6.71.205  123.49 GB  256     36.3%  db0e2389-bbe5-43e4-b0e9-c99aff0449b8  RAC2
UN  10.6.71.198  122.45 GB  256     32.6%  c0123329-3262-45a6-a6df-c3fe1b1b2978  RAC2

[gary@secasprddb01-2 ~]$ nodetool status company
Datacenter: PRODDC1
===================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns (effective)  Host ID                               Rack
UN  10.6.71.204  122.39 GB  256     100.0%            93772457-9f70-42ea-89f2-a63d40d76703  RAC2
UN  10.6.71.205  123.49 GB  256     100.0%            db0e2389-bbe5-43e4-b0e9-c99aff0449b8  RAC2
UN  10.6.71.198  122.45 GB  256     100.0%            c0123329-3262-45a6-a6df-c3fe1b1b2978  RAC2
{quote}
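For what it's worth, the jump from roughly a third in Owns to 100% in Owns (effective) is what you'd expect if the company keyspace is replicated to all three nodes (RF=3): every node holds a full copy of the data. Assuming cqlsh access, the replication settings can be confirmed with something along these lines (on 2.0/2.1 the keyspace definitions live in system.schema_keyspaces):
{code}
-- strategy_options should report "replication_factor":"3"
-- if every node is a replica of the company keyspace.
SELECT keyspace_name, strategy_class, strategy_options
FROM system.schema_keyspaces
WHERE keyspace_name = 'company';
{code}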

And when I run select * from system.peers against each node, it only ever 
shows the other 2 nodes. There are no extra old nodes in the list.
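
For reference, the equivalent check with a trimmed column list looks like this (the real table has a few more columns, such as tokens and rpc_address):
{code}
-- system.peers lists every node this node knows about *except itself*,
-- so on a healthy 3-node cluster it should return exactly 2 rows.
SELECT peer, data_center, rack, host_id FROM system.peers;
{code}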

> Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions
> -------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8917
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8917
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: C* 2.0.9, Centos 6.5, Java 1.7.0_72, spring data 
> cassandra 1.1.1, cassandra java driver 2.0.9
>            Reporter: Gary Ogden
>             Fix For: 2.1.4
>
>         Attachments: b_output.log, jersey_error.log, node1-cassandra.yaml, 
> node1-system.log, node2-cassandra.yaml, node2-system.log, 
> node3-cassandra.yaml, node3-system.log
>
>
> We have Java apps running on GlassFish that read/write to our 3-node cluster 
> running on 2.0.9. We have the CL set to QUORUM for all reads and writes.
> When we started to upgrade the first node and ran the sstable upgrade on that 
> node, we started getting this error on reads and writes:
> com.datastax.driver.core.exceptions.UnavailableException: Not enough replica 
> available for query at consistency QUORUM (2 required but only 1 alive)
> How is that possible when we have 3 nodes in total and 2 of them were up, yet 
> it says we can't meet the required CL?
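
For context on that last question: with RF=3 (which the 100% Owns (effective) above implies), the quorum arithmetic works out as:
{noformat}
quorum = floor(RF / 2) + 1 = floor(3 / 2) + 1 = 2
{noformat}
So with 2 of 3 replicas up, QUORUM reads and writes should still succeed; the UnavailableException indicates the coordinator considered only 1 replica alive at that moment.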


