Thanks Steve and Michael. Simply uncommenting "initial_token" did the trick!!!
Right now, I was evaluating replication for the case when everything is a clean install. I will now try my hand at integrating/starting replication with pre-existing data. Once again, thanks a ton for all the help, guys!

Thanks and Regards,
Ajay

On Sat, Oct 24, 2015 at 2:06 AM, Steve Robenalt <sroben...@highwire.org> wrote:

> Hi Ajay,
>
> Please take a look at the cassandra.yaml configuration reference regarding
> initial_token and num_tokens:
>
> http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html?scroll=reference_ds_qfg_n1r_1k__initial_token
>
> This is basically what Michael was referring to in his earlier message.
> Setting an initial token overrode your num_tokens setting on initial
> startup, but after initial startup the initial token setting is ignored,
> so num_tokens comes into play, attempting to start up with 256 vnodes.
> That's where your error comes from.
>
> It's likely that all of your nodes started up like this, since you have
> the same config on all of them (hopefully you at least changed
> initial_token for each node).
>
> After reviewing the doc on the two sections above, you'll need to decide
> which path to take to recover. You can likely bring the downed node up by
> setting num_tokens to 1 (which you'd need to do on all nodes), in which
> case you're not really running vnodes. Alternately, you can migrate the
> cluster to vnodes:
>
> http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configVnodesProduction_t.html
>
> BTW, I recommend carefully reviewing the cassandra.yaml configuration
> reference for ANY change you make from the default. As you've experienced
> here, not all settings are intended to work together.
>
> HTH,
> Steve
>
> On Fri, Oct 23, 2015 at 12:07 PM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>
>> Any ideas, please?
>> To repeat, we are using the exact same cassandra-version on all 4 nodes
>> (2.1.10).
>>
>> On Fri, Oct 23, 2015 at 9:43 AM, Ajay Garg <ajaygargn...@gmail.com> wrote:
>>
>>> Hi Michael.
>>>
>>> Please find below the contents of cassandra.yaml for CAS11 (the files on
>>> the rest of the three nodes are also exactly the same, except the
>>> "initial_token" and "listen_address" fields) ::
>>>
>>> CAS11 ::
>>>
>>>
>>>
>>> What changes need to be made, so that whenever a downed server comes
>>> back up, the missing data comes back over to it?
>>>
>>> Thanks and Regards,
>>> Ajay
>>>
>>> On Fri, Oct 23, 2015 at 9:05 AM, Michael Shuler <mich...@pbandjelly.org> wrote:
>>>
>>>> On 10/22/2015 10:14 PM, Ajay Garg wrote:
>>>>
>>>>> However, CAS11 refuses to come up now.
>>>>> Following is the error in /var/log/cassandra/system.log ::
>>>>>
>>>>> ################################################################
>>>>> ERROR [main] 2015-10-23 03:07:34,242 CassandraDaemon.java:391 - Fatal
>>>>> configuration error
>>>>> org.apache.cassandra.exceptions.ConfigurationException: Cannot change
>>>>> the number of tokens from 1 to 256
>>>>
>>>> Check your cassandra.yaml - this node has vnodes enabled in the
>>>> configuration when it did not, previously. Check all nodes. Something
>>>> changed. Mixed vnode/non-vnode clusters is bad juju.
>>>>
>>>> --
>>>> Kind regards,
>>>> Michael
>>>
>>> --
>>> Regards,
>>> Ajay
>>
>> --
>> Regards,
>> Ajay
>
> --
> Steve Robenalt
> Software Architect
> sroben...@highwire.org <bza...@highwire.org>
> (office/cell): 916-505-1785
>
> HighWire Press, Inc.
> 425 Broadway St, Redwood City, CA 94063
> www.highwire.org
>
> Technology for Scholarly Communication

--
Regards,
Ajay
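[For readers hitting the same error: the interaction Steve describes comes down to two cassandra.yaml settings. A minimal sketch of the two recovery paths he mentions is below; the token value is illustrative only, and every node in the cluster must use the same path.]

```yaml
# Option 1: stay on single tokens (no vnodes).
# Each node keeps num_tokens at 1 and its own unique initial_token,
# matching the token it already owns in the ring.
num_tokens: 1
initial_token: -9223372036854775808   # example value; must differ per node

# Option 2: migrate the whole cluster to vnodes (per the DataStax
# procedure linked above). Leave initial_token unset/commented and let
# each node own 256 tokens instead:
# num_tokens: 256
# initial_token:
```

[The error "Cannot change the number of tokens from 1 to 256" appears exactly when a node that previously registered one token (via initial_token) restarts with initial_token commented out, so num_tokens: 256 takes effect.]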