Hi Rameez,

If you run a rebuild on an already populated cluster, it will simply stream the data again and you will end up with a duplicate data set.
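To make the per-node change described in this thread concrete, here is roughly what the cassandra.yaml edit looks like on one node in the new DC. The token value shown is only an illustrative example for a single node; the real values for your ring should come from the bundled tools/bin/token-generator.

```yaml
# cassandra.yaml on each node of the new (non-vnode) DC:

# num_tokens: 256          <-- comment this out to disable vnodes

# One distinct token per node, taken from tools/bin/token-generator;
# the value below is just an example for the first node:
initial_token: -9223372036854775808
```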
What you need to do is stop all the nodes in the new DC and clear out the data directory. Then comment out num_tokens and populate initial_token with the correct token as calculated by the token generator packaged with Cassandra ("tools/bin/token-generator"). Once that is complete, follow the steps in the link I provided and that should get your new DC running on non-vnodes.

On Fri, Jul 4, 2014 at 3:12 PM, Rameez Thonnakkal <ssram...@gmail.com> wrote:

> I did a nodetool rebuild on one of the nodes.
>
> Datacenter: DC1
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address       Load      Tokens  Owns   Host ID                               Rack
> *UN  10.123.75.51  10.54 GB  256     16.0%  d2f980c1-cf82-4659-95ce-ffa3e50ed7c1  RAC1*
> UN   10.123.75.53  5.18 GB   256     16.5%  bab7739d-c424-42ef-a8f6-2ba82fcdd0b9  RAC1
> UN   10.123.75.52  5.51 GB   256     18.3%  70469a76-939b-4b8c-9512-33aedec6fd3e  RAC1
>
> Datacenter: DC2
> ================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address       Load      Tokens  Owns   Host ID                               Rack
> UN   10.123.75.51  5.3 GB    256     16.1%  106d5001-2d44-4d81-8af8-5cf841a1575e  RAC1
> UN   10.123.75.52  5.34 GB   256     16.2%  c4333d90-476a-4b44-bc23-5fca7ba6a2e7  RAC1
> UN   10.123.75.53  5.11 GB   256     16.8%  8154288e-a0fb-45f8-b3fb-6c3d645ba8f3  RAC1
>
> On that node the Load has increased from around 5 GB to 10 GB,
> but the token count remains the same (256). My expectation was that it
> would come down to 1.
>
> I will continue to rebuild the remaining nodes, but I am not sure whether
> this is helping.
>
>
> On Fri, Jul 4, 2014 at 7:28 PM, Rameez Thonnakkal <ssram...@gmail.com>
> wrote:
>
>> Thanks Mark.
>> The procedure you shared is useful. I think I have missed the nodetool
>> rebuild command. I am trying it out in a non-prod environment.
>>
>> num_tokens is set to 1 and initial_token is set to different values
>> (mine is a 6-node cluster with 3 nodes in each datacenter).
>> Tried a rolling restart of the cluster. That didn't help.
>> Tried a cold restart of the cluster. That also didn't work.
>>
>> I will try the nodetool rebuild and see whether anything changes.
>>
>> Thanks,
>> Rameez
>>
>>
>> On Fri, Jul 4, 2014 at 7:19 PM, Mark Reddy <mark.re...@boxever.com>
>> wrote:
>>
>>> Hi Rameez,
>>>
>>> I have never done a migration from vnodes to non-vnodes; however, I
>>> would imagine that the procedure would be the same as its counterpart.
>>> As always, testing in dev should be done first.
>>>
>>> To move from vnodes to non-vnodes, I would add a new datacenter to the
>>> cluster with vnodes disabled and rebuild from your vnode cluster.
>>>
>>> You can find more details about adding a data center to your cluster
>>> here:
>>> http://datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_add_dc_to_cluster_t.html?scroll=task_ds_hmp_54q_gk__task_ds_hmp_54q_gk_unique_1
>>>
>>>
>>> Mark
>>>
>>>
>>> On Fri, Jul 4, 2014 at 2:43 PM, Rameez Thonnakkal <ssram...@gmail.com>
>>> wrote:
>>>
>>>> Hello team,
>>>>
>>>> I am looking for a standard operating procedure to disable vnodes in
>>>> a production cluster.
>>>> This is to enable Solr, which doesn't work with a Cassandra cluster
>>>> that has vnodes enabled.
>>>>
>>>> Any suggestions?
>>>>
>>>> Thanks,
>>>> Rameez
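As a side note for anyone following this thread: the single-token initial_token values that tools/bin/token-generator produces can be approximated with a short sketch. This assumes the Murmur3Partitioner (the default in Cassandra 2.0), and the small per-DC offset used here is a common convention to keep tokens in the two datacenters from colliding; treat it as an approximation of the tool's behavior, not a replacement for it. The function name is illustrative, not from Cassandra itself.

```python
# Sketch of single-token (non-vnode) initial_token calculation for
# Murmur3Partitioner, whose token range is [-2**63, 2**63 - 1].
# Tokens are spaced evenly around the ring; each additional DC is
# shifted by a small offset so no two nodes share a token.

def generate_tokens(nodes_per_dc, dc_index=0, dc_offset=100):
    """Return one evenly spaced token per node for the given datacenter."""
    ring_size = 2 ** 64
    return [
        (i * ring_size // nodes_per_dc) - 2 ** 63 + dc_index * dc_offset
        for i in range(nodes_per_dc)
    ]

# Three nodes per DC, matching the 6-node cluster in this thread:
dc1 = generate_tokens(3, dc_index=0)
dc2 = generate_tokens(3, dc_index=1)

for token in dc1 + dc2:
    print(token)
```

For this 3-nodes-per-DC case the first DC gets -9223372036854775808, -3074457345618258603, and 3074457345618258602, and the second DC gets the same values shifted by 100. Each value goes into initial_token on exactly one node.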