Hi Ben,

Thanks for the tip; I'll certainly check it out. I really appreciate the information!

Tim
On Thu, Jan 24, 2013 at 6:32 PM, Ben Bromhead <b...@instaclustr.com> wrote:

> Hi Tim
>
> If you want to check out Cassandra on AWS, you should also have a look at
> www.instaclustr.com.
>
> We are still very much in beta (so if you come across anything, please let
> us know), but if you have a few minutes and want to deploy a cluster in
> just a few clicks, I highly recommend trying Instaclustr out.
>
> Cheers
>
> Ben Bromhead
> *Instaclustr*
>
> On Fri, Jan 25, 2013 at 12:35 AM, Tim Dunphy <bluethu...@gmail.com> wrote:
>
>> Cool, thanks for the advice, Aaron. I actually did get this working
>> before I read your reply. The trick for me, apparently, was to use the IP
>> of the first node in the seeds setting of each successive node. But I like
>> the idea of spinning up m1.large instances for an hour or so of basic
>> experimentation and then terminating them. Also, thanks for pointing me to
>> the DataStax AMIs; I'll be sure to check them out.
>>
>> Tim
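For anyone reading along in the archive, the seed change Tim describes comes
down to something like the following on each node after the first. This is a
sketch only: the config path /etc/cassandra/conf/cassandra.yaml and the
cassandra service name assume a package or AMI install, and 10.xxx.xxx.248 is
node01's masked private IP from the logs below.

    # Point node02's seed list at node01 only, then restart so the
    # new seed list takes effect (the address is a masked placeholder).
    sed -i 's/- seeds: ".*"/- seeds: "10.xxx.xxx.248"/' /etc/cassandra/conf/cassandra.yaml
    service cassandra restart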
>>
>> On Thu, Jan 24, 2013 at 3:45 AM, aaron morton <aa...@thelastpickle.com> wrote:
>>
>>> They both have 0 for their token, and this is stored in their System
>>> keyspace. Scrub them and start again.
>>>
>>> > But I found that the tokens that were being generated would require
>>> > way too much memory
>>>
>>> Token assignments have nothing to do with memory usage.
>>>
>>> > m1.micro instances
>>>
>>> You are better off using your laptop than micro instances.
>>> For playing around, try m1.large and terminate them when not in use.
>>> To make life easier, use this to make the cluster for you:
>>> http://www.datastax.com/docs/1.2/install/install_ami
>>>
>>> Cheers
>>>
>>> -----------------
>>> Aaron Morton
>>> Freelance Cassandra Developer
>>> New Zealand
>>>
>>> @aaronmorton
>>> http://www.thelastpickle.com
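A minimal sketch of the scrub-and-restart Aaron suggests, assuming the default
data directories that also appear in the logs below (/var/lib/cassandra/...).
This erases everything the node has stored, including the system keyspace that
holds the saved token and host ID, so it is only sensible on an empty test
cluster like this one.

    # On each affected node: stop Cassandra, wipe its on-disk state,
    # then start again so it generates a fresh host ID and token.
    service cassandra stop
    rm -rf /var/lib/cassandra/data /var/lib/cassandra/commitlog /var/lib/cassandra/saved_caches
    service cassandra start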
>>>
>>> On 24/01/2013, at 5:17 AM, Tim Dunphy <bluethu...@gmail.com> wrote:
>>>
>>> Hello list,
>>>
>>> I really do appreciate the advice I've gotten here as I start building
>>> familiarity with Cassandra. Aside from the single-node instance I set up
>>> for a developer friend, I've just been playing with a single node in a VM
>>> on my laptop, using cassandra-cli and PHP.
>>>
>>> Well, I've decided to set up my first cluster on my Amazon EC2 account,
>>> and I'm running into an issue getting the nodes to gossip.
>>>
>>> I've set the IPs of the 'node01' and 'node02' EC2 instances in their
>>> respective listen_address and rpc_address settings, and made sure the
>>> cluster_name on both is in agreement.
>>>
>>> I believe the problem may be in one of two places: either the seeds or
>>> the initial_token setting.
>>>
>>> For the seeds, I put the IPs of both machines in the 'seeds' setting of
>>> each node, thinking this is how the nodes would discover each other:
>>>
>>> - seeds: "10.xxx.xxx.248,10.xxx.xxx.123"
>>>
>>> Initially I tried the token-generation script I found in the
>>> documentation, but the tokens it generated would require way too much
>>> memory for the m1.micro instances I'm experimenting with on the Amazon
>>> free tier. According to the comments in the config, it is in some cases
>>> OK to leave that field blank, so that's what I did on both instances.
>>>
>>> Not sure how much (or whether) this matters, but I am using the setting
>>> endpoint_snitch: Ec2Snitch.
>>>
>>> Finally, when I start up the first node, all goes well. But when I start
>>> up the second node, I see this exception on both hosts.
>>>
>>> node01:
>>>
>>> INFO 11:02:32,231 Listening for thrift clients...
>>> INFO 11:02:59,262 Node /10.xxx.xxx.123 is now part of the cluster
>>> INFO 11:02:59,268 InetAddress /10.xxx.xxx.123 is now UP
>>> ERROR 11:02:59,270 Exception in thread Thread[GossipStage:1,5,main]
>>> java.lang.RuntimeException: Host ID collision between active endpoint
>>> /10.xxx.xxx.248 and /10.xxx.xxx.123 (id=54ce7ccd-1b1d-418e-9861-1c281c078b8f)
>>>         at org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:227)
>>>         at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1296)
>>>         at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1157)
>>>         at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1895)
>>>         at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:805)
>>>         at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:883)
>>>         at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:43)
>>>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>>>         at java.lang.Thread.run(Unknown Source)
>>>
>>> And on node02 I see:
>>>
>>> INFO 11:02:58,817 Starting Messaging Service on port 7000
>>> INFO 11:02:58,835 Using saved token [0]
>>> INFO 11:02:58,837 Enqueuing flush of Memtable-local@672636645(84/84 serialized/live bytes, 4 ops)
>>> INFO 11:02:58,838 Writing Memtable-local@672636645(84/84 serialized/live bytes, 4 ops)
>>> INFO 11:02:58,912 Completed flushing /var/lib/cassandra/data/system/local/system-local-ia-43-Data.db (120 bytes) for commitlog position ReplayPosition(segmentId=1358956977628, position=49266)
>>> INFO 11:02:58,922 Enqueuing flush of Memtable-local@1007604537(32/32 serialized/live bytes, 2 ops)
>>> INFO 11:02:58,923 Writing Memtable-local@1007604537(32/32 serialized/live bytes, 2 ops)
>>> INFO 11:02:58,943 Compacting [SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ia-40-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ia-42-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ia-43-Data.db'), SSTableReader(path='/var/lib/cassandra/data/system/local/system-local-ia-41-Data.db')]
>>> INFO 11:02:58,953 Node /10.192.179.248 is now part of the cluster
>>> INFO 11:02:58,961 InetAddress /10.192.179.248 is now UP
>>> INFO 11:02:59,003 Completed flushing /var/lib/cassandra/data/system/local/system-local-ia-44-Data.db (90 bytes) for commitlog position ReplayPosition(segmentId=1358956977628, position=49422)
>>> ERROR 11:02:59,023 Exception in thread Thread[GossipStage:1,5,main]
>>> java.lang.RuntimeException: Host ID collision between active endpoint
>>> /10.xxx.xxx.123 and /10.xxx.xxx.248 (id=54ce7ccd-1b1d-418e-9861-1c281c078b8f)
>>>         at org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:227)
>>>         at org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1296)
>>>         at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1157)
>>>         at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1895)
>>>         at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:805)
>>>         at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:883)
>>>         at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:43)
>>>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>>>         at java.lang.Thread.run(Unknown Source)
>>> INFO 11:02:59,034 Node /10.xxx.xxx.123 state jump to normal
>>>
>>> And if I do a nodetool ring on node01 at this point, this is all I see:
>>>
>>> [root@cassandra-node01 ~]# nodetool -h 10.xxx.xxx.248 ring
>>> Note: Ownership information does not include topology; for complete
>>> information, specify a keyspace
>>>
>>> Datacenter: us-east
>>> ==========
>>> Address          Rack  Status  State   Load      Owns     Token
>>> 10.xxx.xxx.248   1a    Up      Normal  92.94 KB  100.00%  0
>>>
>>> And on node02 this is what I see:
>>>
>>> [root@cassandra-node02 ~]# nodetool -h 10.xxx.xxx.123 ring
>>> Note: Ownership information does not include topology; for complete
>>> information, specify a keyspace
>>>
>>> Datacenter: us-east
>>> ==========
>>> Address          Rack  Status  State   Load      Owns     Token
>>> 10.192.218.123   1a    Up      Normal  84.93 KB  100.00%  0
>>>
>>> Just two lonely Cassandra nodes longing to gossip with one another! :)
>>> I was wondering if someone out there could point me in the right
>>> direction.
>>>
>>> Thanks!
>>> Tim
>>>
>>> --
>>> GPG me!!
>>>
>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>
>> --
>> GPG me!!
>>
>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
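A closing note for anyone who lands on this thread with the same symptom: the
identical id=54ce7ccd-1b1d-418e-9861-1c281c078b8f reported on both endpoints
is the tell-tale detail. It means both nodes are advertising the same host ID,
which can happen when one node's data directory was cloned from the other (for
example, by imaging the first instance). If your build of nodetool includes
HOST_ID in its gossip dump, you can confirm the collision from either node;
the address below is a masked placeholder as above.

    # List the host ID each live endpoint is gossiping; two endpoints
    # sharing one ID points at cloned system tables, not a seeds problem.
    nodetool -h 10.xxx.xxx.248 gossipinfo | grep -E '^/|HOST_ID'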