Hey Colin,

> Looks like you may have put the token next to the num_tokens property in the
> yaml file for one node.  I would double check the yamls to make sure the
> tokens are set up correctly and that the IP addresses are associated with
> the right entries as well.
> Compare them to a fresh download if possible to see what you've changed.


Thanks! I did that and now things are working perfectly:

[root@beta-new:~] #nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns   Host ID                               Rack
UN  10.10.1.94  164.39 KB  256     49.4%  fd2f76ae-8dcf-4e93-a37f-bf1e9088696e  rack1
UN  10.10.1.98  99.08 KB   256     50.6%  f2a48fc7-a362-43f5-9061-4bb3739fdeaf  rack1
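
In case it helps anyone searching the archives later: the culprit was just
what you guessed, an initial_token value left next to num_tokens in one
node's yaml. With vnodes, the relevant lines on each node should look
roughly like this (a sketch, not a paste from my file):

num_tokens: 256
# initial_token:    (left unset when num_tokens is in use)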


Thanks again for your help!


Tim


On Sun, May 18, 2014 at 12:35 AM, Colin Clark <co...@clark.ws> wrote:

> Looks like you may have put the token next to the num_tokens property in the
> yaml file for one node.  I would double check the yamls to make sure the
> tokens are set up correctly and that the IP addresses are associated with
> the right entries as well.
>
> Compare them to a fresh download if possible to see what you've changed.
>
> --
> Colin
> 320-221-9531
>
>
> On May 17, 2014, at 10:29 PM, Tim Dunphy <bluethu...@gmail.com> wrote:
>
>> You probably generated the wrong token type.  Look for a murmur token
>> generator on the Datastax site.
>
>> What Colin is saying is that the tool you used to create the token is not
>> creating tokens usable for the Murmur3Partitioner. That tool is probably
>> generating tokens for the (original) RandomPartitioner, which has a
>> different range.
>
>
> Thanks guys for your input. And I apologize for reading Colin's initial
> response too quickly; it pointed out that I was probably using the wrong
> token generator for my partitioner type. That was of course the case. So
> what I've done is use this token generator from the DataStax website:
>
> python -c 'print [str(((2**64 / number_of_tokens) * i) - 2**63) for i in range(number_of_tokens)]'
>
>
> That algorithm generated a token I could use to start Cassandra on my second 
> node.
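>
> For anyone curious later, here's the same math as a small standalone
> script (a sketch: I'm assuming a two-node ring, one token per node):
>
> # Evenly spaced Murmur3 initial tokens; the Murmur3Partitioner
> # range is -2**63 .. 2**63 - 1.
> number_of_tokens = 2  # assumed node count
> tokens = [((2**64 // number_of_tokens) * i) - 2**63
>           for i in range(number_of_tokens)]
> print(tokens)  # -> [-9223372036854775808, 0]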
>
>
> However, at this stage I have both nodes running, and I believe they're
> gossiping, if I understand what I see here correctly:
>
>
>  INFO 02:44:13,823 No gossip backlog; proceeding
>
>
> However, I've set up web pages for each of the two web servers that are
> running Cassandra. And it looks like the seed node with all the data is
> rendering correctly. But the node that's downstream from the seed node is
> not receiving any of its data, despite the message that I've just shown you.
>
>
> And if I go to the seed node and do a describe keyspaces, I see the keyspace
> that drives the website listed. It's called 'joke_fire1':
>
>
> cqlsh> describe keyspaces;
>
> system  joke_fire1  system_traces
>
> And if I go to the node that's downstream from the seed node and run the same 
> command:
>
>
> cqlsh> describe keyspaces;
>
> system  system_traces
>
>
> I don't see the important keyspace that runs the site.
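>
> (As a sanity check on whether the two nodes have actually met: my
> understanding is that each node records the others in its system.peers
> table, so the query below on either node should list the other node's IP.
> I'm assuming the 2.0 column names here.)
>
> cqlsh> SELECT peer, data_center, rack FROM system.peers;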
>
>
> I have the seed node's IP listed in 'seeds' in the cassandra.yaml on the
> downstream node. So I'm not really sure why it's not receiving the seed's
> data, or whether there's some command I need to run to flush the system or
> something like that.
>
>
> And if I do a nodetool ring command on the first (seed) host I don't see the 
> IP of the downstream node listed:
>
> [root@beta-new:~] #nodetool ring | head -10
>
> Note: Ownership information does not include topology; for complete
> information, specify a keyspace
>
> Datacenter: datacenter1
> ==========
> Address     Rack   Status  State   Load       Owns      Token
>
> 10.10.1.94  rack1  Up      Normal  150.64 KB  100.00%   -9173731940639284976
> 10.10.1.94  rack1  Up      Normal  150.64 KB  100.00%   -9070607847117718988
> 10.10.1.94  rack1  Up      Normal  150.64 KB  100.00%   -9060190512633067546
> 10.10.1.94  rack1  Up      Normal  150.64 KB  100.00%   -8935690644016753923
>
>
> And if I look on the downstream node and run nodetool ring I see only the IP 
> of the downstream node and not the seed listed:
>
> [root@beta:/var/lib/cassandra] #nodetool ring | head -15
>
> Datacenter: datacenter1
> ==========
> Address     Rack   Status  State   Load      Owns     Token
>
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -9223372036854775808
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -9151314442816847873
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -9079256848778919937
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -9007199254740992001
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8935141660703064065
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8863084066665136129
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8791026472627208193
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8718968878589280257
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8646911284551352321
> 10.10.1.98  rack1  Up      Normal  91.06 KB  99.99%   -8574853690513424385
>
>
> Yet in my seeds entry in cassandra.yaml I have the correct IP of my seed node 
> listed:
>
>
> seed_provider:
>     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>           # seeds is actually a comma-delimited list of addresses.
>           - seeds: "10.10.1.94"
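>
> (For reference, my understanding is that the stock 2.0.x yaml nests the
> seeds list under a parameters: key, roughly like this, so maybe my
> indentation is off:
>
> seed_provider:
>     - class_name: org.apache.cassandra.locator.SimpleSeedProvider
>       parameters:
>           - seeds: "10.10.1.94"
>
> I'll diff this block against a fresh download as well.)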
>
>
> So I'm just wondering what I'm missing in trying to get these two nodes to 
> communicate via gossip at this point.
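>
> (Two other yaml fields I gather are worth checking, sketched below with my
> IPs assumed: cluster_name has to match on both nodes, and listen_address
> should be each node's own IP.)
>
> cluster_name: 'Test Cluster'   # must be identical on both nodes
> listen_address: 10.10.1.94     # 10.10.1.98 on the downstream node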
>
> Thanks!
>
> Tim
>
> On Sat, May 17, 2014 at 8:54 PM, Dave Brosius <dbros...@mebigfatguy.com> wrote:
>
>>  What Colin is saying is that the tool you used to create the token, is
>> not creating tokens usable for the Murmur3Partitioner. That tool is
>> probably generating tokens for the (original) RandomPartitioner, which has
>> a different range.
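>>
>> A quick way to see the mismatch (a sketch; ranges per the partitioner
>> docs):
>>
>> # RandomPartitioner tokens:   0 .. 2**127
>> # Murmur3Partitioner tokens: -2**63 .. 2**63 - 1
>> token = 85070591730234615865843651857942052864
>> print(token <= 2**63 - 1)  # False, so not a valid Murmur3 token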
>>
>> On 05/17/2014 07:20 PM, Tim Dunphy wrote:
>>
>> Hi and thanks for your response.
>>
>>  The puzzling thing is that yes, I am using the Murmur3 partitioner, yet I
>> am still getting the error I just told you guys about:
>>
>>   [root@beta:/etc/alternatives/cassandrahome] #grep -i partition
>> conf/cassandra.yaml | grep -v '#'
>> partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>>
>>  Thanks
>> Tim
>>
>>
>> On Sat, May 17, 2014 at 3:23 PM, Colin <colpcl...@gmail.com> wrote:
>>
>>>  You may have used the old random partitioner token generator.  Use the
>>> murmur partitioner token generator instead.
>>>
>>> --
>>> Colin
>>> 320-221-9531
>>>
>>>
>>> On May 17, 2014, at 1:15 PM, Tim Dunphy <bluethu...@gmail.com> wrote:
>>>
>>>   Hey all,
>>>
>>>   I've set my initial_token in Cassandra 2.0.7 using a Python script I
>>> found on the DataStax wiki.
>>>
>>>  I've set the value like this:
>>>
>>>  initial_token: 85070591730234615865843651857942052864
>>>
>>>  And cassandra crashes when I try to start it:
>>>
>>>  [root@beta:/etc/alternatives/cassandrahome] #./bin/cassandra -f
>>>  INFO 18:14:38,511 Logging initialized
>>>  INFO 18:14:38,560 Loading settings from
>>> file:/usr/local/apache-cassandra-2.0.7/conf/cassandra.yaml
>>>  INFO 18:14:39,151 Data files directories: [/var/lib/cassandra/data]
>>>  INFO 18:14:39,152 Commit log directory: /var/lib/cassandra/commitlog
>>>  INFO 18:14:39,153 DiskAccessMode 'auto' determined to be mmap,
>>> indexAccessMode is mmap
>>>  INFO 18:14:39,153 disk_failure_policy is stop
>>>  INFO 18:14:39,153 commit_failure_policy is stop
>>>  INFO 18:14:39,161 Global memtable threshold is enabled at 251MB
>>>  INFO 18:14:39,362 Not using multi-threaded compaction
>>> ERROR 18:14:39,365 Fatal configuration error
>>> org.apache.cassandra.exceptions.ConfigurationException: For input
>>> string: "85070591730234615865843651857942052864"
>>>         at
>>> org.apache.cassandra.dht.Murmur3Partitioner$1.validate(Murmur3Partitioner.java:178)
>>>         at
>>> org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:440)
>>>         at
>>> org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:111)
>>>         at
>>> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:153)
>>>         at
>>> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:471)
>>>         at
>>> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:560)
>>> For input string: "85070591730234615865843651857942052864"
>>> Fatal configuration error; unable to start. See log for stacktrace.
>>>
>>>  I really need to get replication going between 2 nodes. Can someone
>>> clue me in on why this may be crashing?
>>>
>>>  Thanks!
>>> Tim
>>>
>>>  --
>>> GPG me!!
>>>
>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>>
>>>
>>
>>
>>  --
>> GPG me!!
>>
>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>
>>
>>
>
>
> --
> GPG me!!
>
> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>
>


-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
