When you ask each of the new nodes what their status is, what do they say?
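For reference, a minimal way to ask a server for its status is the built-in
four-letter commands on the client port (a sketch only; the hostname is a
placeholder and the default client port 2181 is assumed):

    # liveness check: a healthy server answers "imok"
    echo ruok | nc new-node-1 2181

    # detailed status: the "Mode:" line shows leader/follower and the
    # Zxid line shows how far the server has caught up
    echo stat | nc new-node-1 2181

A new node that reports Mode: follower (or leader) has joined the ensemble.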
On Thu, May 12, 2011 at 9:09 AM, Murali Krishna. P <[email protected]> wrote:

> Hi,
> Even after 1 hour, the version-2 dir seems to be empty on all 3 new nodes.
> Can I just copy the last log.x and last snapshot.x file from the
> corresponding old nodes and bounce the new nodes with the new config?
>
> Thanks,
> Murali Krishna
>
> ________________________________
> From: Murali Krishna. P <[email protected]>
> To: Vishal Kher <[email protected]>; "[email protected]" <[email protected]>
> Sent: Thursday, 12 May 2011 8:45 PM
> Subject: Re: Changing hosts
>
> Thanks for the suggestion, but I had already started the process which
> Alex had mentioned, and DEF is started with ABC. How can we know whether
> DEF is synced? The data/version-2 dir is empty; I guess I need to wait
> till it gets some data there?
>
> Thanks,
> Murali Krishna
>
> ________________________________
> From: Vishal Kher <[email protected]>
> To: [email protected]; Murali Krishna. P <[email protected]>
> Sent: Thursday, 12 May 2011 7:24 PM
> Subject: Re: Changing hosts
>
> Hi,
>
> Since you can stop clients, another way to achieve what Alex suggested is to:
>
> 1. stop clients
> 2. stop all current zk servers (a_i)
> 3. scp -r /etc/zookeeper/ from a_i to b_i
> 4. scp -r /var/zookeeper/ from a_i to b_i
> 5. on all b_i, edit /etc/zookeeper/zoo.cfg to reflect the correct IP addresses
> 6. start all b_i
>
> This is assuming that stopping the ZK servers is OK in your environment.
>
> -Vishal
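A sketch of that copy-based procedure, assuming /var/zookeeper is the dataDir
(as the copy above implies) and using a_i / b_i as placeholder hostnames:

    # on each old server a_i, after clients and all ZK servers are stopped,
    # push its config and data to the matching new server b_i
    scp -r /etc/zookeeper/ b_i:/etc/
    scp -r /var/zookeeper/ b_i:/var/
    # /var/zookeeper also carries the myid file, so b_i inherits a_i's server id

    # then, on every b_i, edit /etc/zookeeper/zoo.cfg so the server.N lines
    # point at the new machines before starting ZooKeeper, e.g.
    #   server.1=b_1:2888:3888
    #   server.2=b_2:2888:3888
    #   server.3=b_3:2888:3888

The 2888:3888 quorum/election ports above are just the conventional defaults;
keep whatever the existing zoo.cfg uses.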
> On Thu, May 12, 2011 at 2:07 AM, Murali Krishna. P <[email protected]> wrote:
>
> Thanks for the responses,
> I have the luxury of stopping the clients during the operations. So,
> I would go with the second approach of cloning.
>
> >Thanks,
> >Murali Krishna
> >
> >________________________________
> >From: Ted Dunning <[email protected]>
> >To: Alexander Shraer <[email protected]>
> >Cc: "[email protected]" <[email protected]>; Murali Krishna. P <[email protected]>; "[email protected]" <[email protected]>
> >Sent: Thursday, 12 May 2011 5:27 AM
> >Subject: Re: Changing hosts
> >
> >Alex,
> >
> >I think that this process does a slightly different thing. Your process
> >is good for cloning a cluster, but it doesn't address the problem of
> >transitioning a working cluster. My process never has two clusters with
> >the same data, so all transactions will always be applied to a single
> >notional version of the data.
> >
> >The reason that this is important is a part of the process that neither
> >of us mentioned: how to transition the clients. My assumption is that,
> >before the transition, all clients would re-open their ZK connection with
> >all 6 nodes in the list of servers. Once this is done, my process will
> >lead the clients through the transition in a way that all updates will be
> >visible to all clients. At the end, the clients should (eventually) trim
> >their list of servers to the shorter list of new servers.
> >
> >With a cluster clone operation, there will be moments when some clients
> >connect to one cluster and some connect to the other. That makes it hard
> >to understand how this will work well.
> >
> >The OP can probably clarify which task they really wanted to accomplish.
> >
> >On Wed, May 11, 2011 at 4:49 PM, Alexander Shraer <[email protected]> wrote:
> >
> >>Hi Ted,
> >>
> >>There's a simpler way that works. Suppose that the original servers are
> >>A, B and C, and the new ones are D, E and F. Configure D, E and F to be
> >>in the configuration A, B, C, D, E, F and start them. Don't do any
> >>changes to A, B and C. After D, E and F sync with the leader (or at
> >>least 2 of them do), turn them off, change their config files to D, E, F
> >>and bring them up again.
> >>
> >>Alex
> >>
> >>> -----Original Message-----
> >>> From: Ted Dunning [mailto:[email protected]]
> >>> Sent: Tuesday, May 10, 2011 8:39 AM
> >>> To: [email protected]; Murali Krishna. P
> >>> Cc: [email protected]
> >>> Subject: Re: Changing hosts
> >>>
> >>> Step 1: configure two of the new hosts to be part of a 5 node cluster
> >>> containing all of the original nodes.
> >>>
> >>> Step 2: reconfigure each of the original 3 nodes to be part of the
> >>> 5 node cluster.
> >>>
> >>> Step 3: bounce each of the originals and start the two new servers.
> >>>
> >>> Step 4: configure the 6th server (previously untouched) to be part of
> >>> a three node cluster containing only the 3 new nodes.
> >>>
> >>> Step 5: reconfigure servers 4 and 5.
> >>>
> >>> Step 6: bounce servers 4 and 5 and start server 6.
> >>>
> >>> Done.
> >>>
> >>> (wait for somebody else to critique this procedure before proceeding
> >>> with it)
> >>>
> >>> On Tue, May 10, 2011 at 7:54 AM, Murali Krishna. P <[email protected]> wrote:
> >>>
> >>> > Hi,
> >>> > I have a zookeeper cluster (3.2.2) with 3 hosts. I need to replace
> >>> > all 3 hosts with different machines. What is the best way to achieve
> >>> > this without any data loss? I can shut down my clients during this
> >>> > operation.
> >>> >
> >>> > Thanks,
> >>> > Murali Krishna
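To make Alex's suggestion above concrete, each of the new servers D, E and F
would run during the transition with a zoo.cfg listing all six machines,
along these lines (a sketch only: the server ids 4-6, the 2888:3888 ports,
clientPort 2181 and the paths are illustrative assumptions; A, B and C keep
their existing 3-server config untouched):

    # /etc/zookeeper/zoo.cfg on D, E and F during the transition
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/var/zookeeper
    clientPort=2181
    server.1=A:2888:3888
    server.2=B:2888:3888
    server.3=C:2888:3888
    server.4=D:2888:3888
    server.5=E:2888:3888
    server.6=F:2888:3888

Each new server also needs its own id in dataDir/myid (e.g. 4 on D). Once D,
E and F have synced, stop them, replace the server.N lines with only the
server.4/5/6 entries, and start them again, as described above.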

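Ted's point above about client handover comes down to the connection string
the clients use. A sketch, with placeholder hostnames and the default client
port 2181:

    # during the transition, clients reconnect with all six servers listed
    ZK_CONNECT="A:2181,B:2181,C:2181,D:2181,E:2181,F:2181"

    # once only the new ensemble remains, trim the list
    ZK_CONNECT="D:2181,E:2181,F:2181"

The comma-separated host:port list is what gets passed to the ZooKeeper
client; it picks one of the listed servers and fails over among the rest.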