Wrong peers

2015-07-06 Thread nowarry
Hey guys,

I'm using the Ruby driver (http://datastax.github.io/ruby-driver/) for backup 
scripts. I tried to discover all peers and got wrong peers that differ from 
the nodetool status output.

Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load     Tokens  Owns  Host ID                               Rack
UN  10.40.231.53  1.18 TB  256     ?     b2d877d7-f031-4190-8569-976bb0ce034f  RACK01
UN  10.40.231.11  1.24 TB  256     ?     e15cda1c-65cc-40cb-b85c-c4bd665d02d7  RACK01

cqlsh> use system;
cqlsh:system> select peer from system.peers;

 peer
--------------
 10.40.231.31
 10.40.231.53

(2 rows)

What should I do with these old peers? Can they be removed without 
consequences, since they are no longer in the production cluster? And how do I 
keep the peers list up to date?
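
For context, here is roughly what the discovery part of the backup script does (a minimal sketch; the contact point address is just one of the nodes above):

require 'cassandra'

# Connect through one known node; the driver builds its host list
# from that node's system.local and system.peers tables.
cluster = Cassandra.cluster(hosts: ['10.40.231.11'])

# Stale rows in system.peers show up here as extra hosts.
cluster.hosts.each do |host|
  puts host.ip
end

cluster.close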

--
Anton Koshevoy



Re: Wrong peers

2015-07-06 Thread Jeff Williams
Anton,

I have also seen this issue with decommissioned nodes remaining in the
system.peers table.

On the bright side, they can be safely removed from the system.peers table.
You will have to check every node in the cluster, since the system.peers
table is local to each node.
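
For example, connect cqlsh to each node in turn and delete the stale row by
its primary key; something like this should do it (using the stale IP from
your output):

cqlsh> delete from system.peers where peer = '10.40.231.31';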

Jeff

On 6 July 2015 at 22:45, nowarry nowa...@gmail.com wrote:



Re: Wrong peers

2015-07-06 Thread Carlos Rolo
There is a bug in Jira related to this; it is not a driver issue but a
Cassandra issue. I think it is solved in 2.0.14. I will post the ticket
once I find it.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: cjrolo | LinkedIn: http://linkedin.com/in/carlosjuzarterolo
Mobile: +31 6 159 61 814 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Mon, Jul 6, 2015 at 10:50 PM, Jeff Williams je...@wherethebitsroam.com
wrote:



