initial_token is read from cassandra.yaml only once, during bootstrap. It is 
then stored in the LocationInfo system CF and used from there. 

It sounds like when you did the move you deleted these files, but then started 
the nodes each with their own seed. So you created 3 separate clusters; when 
each one bootstrapped, it auto-allocated itself an initial token and stored it 
in LocationInfo. You now have 3 clusters, each with one node, and every node 
has the same token because each was the first node into a new, empty cluster. 
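For context on what the tokens should look like: with the RandomPartitioner the token space is [0, 2**127), and a balanced N-node ring usually uses evenly spaced tokens i * 2**127 / N. A minimal sketch of that calculation (the function name `balanced_tokens` is mine, not a Cassandra API):

```python
# Evenly spaced initial tokens for Cassandra's RandomPartitioner,
# whose token space is the integers [0, 2**127).
def balanced_tokens(node_count):
    """Return one token per node, spaced evenly around the ring."""
    return [i * (2 ** 127) // node_count for i in range(node_count)]

for node, token in zip("ABC", balanced_tokens(3)):
    print(node, token)
# A gets 0, B gets 56713727820156410577229101238628035242, and
# C gets 113427455640312821154458202477256070485.
```

The exact token Cassandra auto-allocates for the first node in an empty cluster may differ; the point is that B and C each need their own distinct, evenly spaced value.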

Try this:
1 - Shut down the two nodes (let's call them B and C) that you want to join 
to the first one (called A). 
2 - Delete their LocationInfo CF data files.
3 - Ensure the seed list for ALL nodes points to node A.
4 - Ensure the initial token is set correctly for B and C. 
5 - Start B and C one at a time, and make sure nodetool ring and describe 
cluster; in the CLI agree before starting the next. 
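For steps 3 and 4, the relevant cassandra.yaml settings on node B would look something like the following (the cluster name, seed IP, and token are example placeholders; substitute your own values, and note the exact layout varies a little between 0.7/0.8 configs):

```yaml
# cassandra.yaml on node B (example values only)
cluster_name: 'MyCluster'   # must match node A exactly
initial_token: 56713727820156410577229101238628035242
seeds:
    - 10.0.0.1              # node A's address, the same on every node
auto_bootstrap: false       # the data is already in place
```

Node C gets the same seed list but its own distinct initial_token.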

*IF* node A has the incorrect token, I would fix that (e.g. with nodetool 
move) after you get B and C back into the ring. 

Hope that helps. 

-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 3 Aug 2011, at 09:43, Aishwarya Venkataraman wrote:

> I corrected the seed list and checked the cluster name. They are all
> good now. But nodetool ring still shows only one node.
> 
> INFO 21:36:59,735 Starting Messaging Service on port 7000
> INFO 21:36:59,748 Using saved token 113427455640312814857969558651062452224
> 
> Nodes a_ipaddr and b_ipaddr have the same token
> 113427455640312814857969558651062452224.  a_ipaddr is the new owner.
> 
> 
> All the nodes seem to be using the same initial token, despite my
> specifying an initial_token in the config file. Is this an issue? How
> do I force Cassandra to use the token in the cassandra.yaml file?
> 
> Thanks,
> Aishwarya
> 
> 
> On Tue, Aug 2, 2011 at 2:34 PM, Jonathan Ellis <jbel...@gmail.com> wrote:
>> Yes.
>> 
>> Different cluster names could also cause this.
>> 
>> On Tue, Aug 2, 2011 at 4:21 PM, Jeremiah Jordan
>> <jeremiah.jor...@morningstar.com> wrote:
>>> All of the nodes should have the same seedlist.  Don't use localhost as
>>> one of the items in it if you have multiple nodes.
>>> 
>>> On Tue, 2011-08-02 at 10:10 -0700, Aishwarya Venkataraman wrote:
>>>> Nodetool does not show me all the nodes. Assuming I have three nodes
>>>> A, B and C: the seed list of A is localhost, the seed list of B is
>>>> localhost,A_ipaddr, and the seed list of C is localhost,B_ipaddr,A_ipaddr.
>>>> I have autobootstrap set to false for all 3 nodes since they all have
>>>> the correct data and do not have to migrate data from any particular
>>>> node.
>>>> 
>>>> My problem here is why doesn't nodetool ring show me all the nodes in
>>>> the ring? I agree that the cluster thinks that only one node is
>>>> present. How do I fix this?
>>>> 
>>>> Thanks,
>>>> Aishwarya
>>>> 
>>>> 
>>>> On Tue, Aug 2, 2011 at 9:56 AM, samal <sa...@wakya.in> wrote:
>>>>> 
>>>>> 
>>>>>>> "ERROR 08:53:47,678 Internal error processing batch_mutate
>>>>>>> java.lang.IllegalStateException: replication factor (3) exceeds number
>>>>>>> of endpoints (1)"
>>>>>> 
>>>>>> You already answered
>>>>>> "It always keeps showing only one node and mentions that it is handling
>>>>>> 100% of the load."
>>>>> 
>>>>> The cluster thinks only one node is present in the ring, so it won't
>>>>> accept RF=3; it is expecting RF=1.
>>>>> Original Q: I'm not exactly sure what the problem is. But:
>>>>> Does nodetool ring show all the hosts?
>>>>> What is your seed list?
>>>>> Does the bootstrapped node have its own IP in its seed list?
>>>>> AFAIK gossip works even without actively joining a ring.
>>>>> 
>>>>>>> 
>>>>>>> On Tue, Aug 2, 2011 at 7:21 AM, Aishwarya Venkataraman
>>>>>>> <cyberai...@gmail.com> wrote:
>>>>>>>> Replies inline.
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Aishwarya
>>>>>>>> 
>>>>>>>> On Tue, Aug 2, 2011 at 7:12 AM, Sorin Julean <sorin.jul...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>> Hi,
>>>>>>>>> 
>>>>>>>>>  Until someone answers with more details, a few questions:
>>>>>>>>>  1. did you move the system keyspace as well?
>>>>>>>> Yes, but I deleted the LocationInfo* files under the system folder.
>>>>>>>> Shall I go ahead and delete the entire system folder?
>>>>>>>> 
>>>>>>>>>  2. is the gossip IP of the new nodes the same as the old ones?
>>>>>>>> No. The IP is different.
>>>>>>>> 
>>>>>>>>>  3. which Cassandra version are you running?
>>>>>>>> I am using 0.8.1
>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> If 1. is yes and 2. is no, for a quick fix: take down the cluster,
>>>>>>>>> remove
>>>>>>>>> system keyspace, bring the cluster up and bootstrap the nodes.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Kind regards,
>>>>>>>>> Sorin
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Tue, Aug 2, 2011 at 2:53 PM, Aishwarya Venkataraman
>>>>>>>>> <cyberai...@gmail.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> Hello,
>>>>>>>>>> 
>>>>>>>>>> I recently migrated 400 GB of data from a different Cassandra
>>>>>>>>>> cluster (3 nodes with RF=3) to a new cluster. I have a 3-node
>>>>>>>>>> cluster with the replication factor set to three. When I run
>>>>>>>>>> nodetool ring, it does not show me all the nodes in the cluster.
>>>>>>>>>> It always shows only one node and says that node is handling
>>>>>>>>>> 100% of the load. But when I look at the logs, the nodes are able
>>>>>>>>>> to talk to each other via the gossip protocol. Why does this
>>>>>>>>>> happen? Can you tell me what I am doing wrong?
>>>>>>>>>> 
>>>>>>>>>> Thanks,
>>>>>>>>>> Aishwarya
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
>> 
>> 
>> 
>> --
>> Jonathan Ellis
>> Project Chair, Apache Cassandra
>> co-founder of DataStax, the source for professional Cassandra support
>> http://www.datastax.com
>> 
