It could be a bug.

That said, I am not very familiar with the system_distributed keyspace,
but from what I see, it is using SimpleStrategy:

root@tlp-cassandra-2:~# echo "DESCRIBE KEYSPACE system_distributed;" |
cqlsh $(hostname -I | awk '{print $1}')

CREATE KEYSPACE system_distributed WITH replication = {'class':
'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;

Let's first check some stuff. Could you share the output of:


   - echo "DESCRIBE KEYSPACE system_distributed;" | cqlsh
   [ip_address_of_the_server]
   - nodetool status
   - nodetool status system_distributed
   - Let us know which snitch you are using and the corresponding
   configuration (one way to check is shown right after this list).
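
For the snitch, something like this should do (the cassandra.yaml path
below assumes a package install, adjust it to your setup):

nodetool describecluster
grep endpoint_snitch /etc/cassandra/cassandra.yaml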


I am trying to make sure the command you used is expected to work, given
your setup.

My guess is that you might need to alter this keyspace to match your
cluster setup: SimpleStrategy is not data center aware, so a rebuild that
streams from a single DC may not find sufficient sources for every range.
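
For example, something like this (just a sketch, replace the data center
names and replication factors with the ones from your own nodetool status
output):

ALTER KEYSPACE system_distributed WITH replication = {'class':
'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3, 'DC3': 3};

You would probably want to repair this keyspace afterwards so existing
replicas match the new placement, then retry the rebuild.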

Just guessing, hope that helps.

C*heers,
-----------------------
Alain Rodriguez - @arodream - al...@thelastpickle.com
France

The Last Pickle - Apache Cassandra Consulting
http://www.thelastpickle.com

2016-09-22 15:47 GMT+02:00 Timo Ahokas <timo.aho...@gmail.com>:

> Hi,
>
> We have a Cassandra 3.0.8 cluster (recently upgraded from 2.1.15)
> currently running in two data centers (13 and 19 nodes, RF3 in both). We
> are adding a third data center before decommissioning one of the earlier
> ones. Installing Cassandra (3.0.8) goes fine and all the nodes join the
> cluster (not set to bootstrap, as documented in
> https://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsAddDCToCluster.html).
>
> When trying to rebuild nodes in the new DC from a previous DC (nodetool
> rebuild -- DC1), we get the following error:
>
> Unable to find sufficient sources for streaming range 
> (597769692463489739,597931451954862346]
> in keyspace system_distributed
>
> The same error occurs whichever of the 2 existing DCs we try to rebuild
> from.
>
> We run pr repairs (nodetool repair -pr) on all nodes twice a week via cron.
>
> Any advice on how to get the rebuild started?
>
> Best regards,
> Timo
>
