Hi Anton! On Wed, 03 Aug 2005, Anton Nikiforov wrote:
> not see the way to promote the second node as primary in case the
> real primary crashed. Or should I test it with vrrpd (for example)
> and restart the secondary as ggated instead of being ggatec? Not
> clear yet. If you have a solution or an idea - could you please drop
> me a line?

Yes. /usr/ports/net/freevrrpd

> The algorithm looks like the following to me. We have two computers:
>
> stage 0 (cluster up and running)
> 1. da0 is exported via ggated over the network. This computer runs
>    postgresql (for example), which stores its data on da0.
> 2. da1 is imported over the network via ggatec as da1. Local da0 and
>    remote da1 are mirrored via ggated/geom_mirror.ko, and there is no
>    service activity on this device, just mirroring of data.

No. When you work with ggate{c,d} you get devices ggate{0,1,2,...},
not da1 (see the stage-0 sketch below).

> stage 1 (slave failed)
> 1. working as before
> 2. down

Requests to update the remote data fail. There should be no problem.

> stage 2 (slave is coming up)
> 1. working as before
> 2. booting and starting to mirror data from the master
>
> In case the secondary goes down and comes up again there is no
> trouble. It just boots itself into the stage-0 config, that is it.

When the slave starts you can rebuild the mirror on the fly (see the
rebuild sketch below). But it takes a lot of time (in my case) -- a
54GB slice rebuilds slowly :(

> stage 3 (master is down)
> 1. down
> 2. should decide the master is down and stop ggatec, then start
>    ggated and export da0. Then mount da0 and start PostgreSQL.

When the master goes down, freevrrpd can run a script on the slave
machine (see the config and failover sketches below).

> stage 4 (master coming up)
> 1. Should understand it is not the master anymore, start ggatec and
>    mirror the remote data to the local drive.
> 2. running as a master, with the services moved/started in stage 3
>
> Here the trouble begins. When both become synchronized, the service
> that was running on this disk should be moved from 2 to 1. To do
> that I'll have to restart services as in stage 0. But how? Using
> vrrpd, or some other utility? How can I know that the system is
> ready to get PostgreSQL back to 1 and stop it on 2?

Yes, vrrp is your friend. But moving from node2 -> node1 when node1
comes back up is a bad idea. The service should simply keep running
on node2.

> stage 5 (both down, master coming up first)
>
> stage 6 (both down, slave coming up first)

My algorithm: there is no difference between master and slave at
startup time. We have:

/etc/rc.conf -- identical on both nodes. Many parameters, plus two
includes:
/etc/rc.clh -- node-specific addresses and parameters for each node.
/etc/rc.cluster -- a symlink switched dynamically between
/etc/rc.cluster.master and /etc/rc.cluster.slave.

If we can ping the shared IP when starting, we start as slave; if
not, as master (see the role-decision sketch below). We start
freevrrpd with the master or slave config (see /etc/rc.d/freevrrpd).
/etc/rc.d/freevrrpd runs after the network starts but before all
daemons; this works on RELENG_5* with the new-style rc scripts.
/boot/loader.conf has a different autoboot_delay on each node (a
crude guarantee that the important startup ping does not run at the
same moment on both machines).

> This is the main confusion. Maybe I have not read much, but it is
> not clear to me how to know which drive contains the latest data.

Yes. It is a problem in one case: node0 reboots and then node1
reboots while the mirror is not fully synchronized. See the attached
scripts -- these are my experiments.
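To make stage 0 concrete, the export/import/mirror chain looks
roughly like this, following the direction my scripts use (the node
that runs the service imports the other node's disk). This is only a
sketch, not verbatim from the attached scripts: the addresses
(192.168.1.1 active, 192.168.1.2 passive), the mirror name gm0, and
the mount point /data are placeholders:

    # On the passive node: allow the active node to import da0, then
    # start the exporting daemon.
    echo "192.168.1.1 RW /dev/da0" >> /etc/gg.exports
    ggated

    # On the active node: import the remote disk. ggatec prints the
    # allocated device name -- e.g. "ggate0", not "da1".
    ggatec create -o rw 192.168.1.2 /dev/da0

    # Mirror the local disk with the imported one and use the mirror
    # (newfs /dev/mirror/gm0 the first time).
    gmirror load
    gmirror label -v gm0 /dev/da0 /dev/ggate0
    mount /dev/mirror/gm0 /data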
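When the slave comes back (stage 2), reattaching the remote component
and resynchronizing might look like this; again gm0 and the addresses
are assumed:

    # On the active node, once the slave's ggated is reachable again:
    ggatec create -o rw 192.168.1.2 /dev/da0   # comes back as e.g. ggate0
    gmirror forget gm0                         # drop the lost component record
    gmirror insert gm0 /dev/ggate0             # full resync; slow on big slices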
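The freevrrpd part is just a config with masterscript/backupscript
hooks. A sketch along the lines of the sample config shipped with the
port; the interface, password, and script paths here are assumptions:

    # /usr/local/etc/freevrrpd.conf
    [VRID]
    serverid = 1
    interface = em0
    priority = 255                 # use a lower value on the other node
    addr = 192.168.1.100/32        # the shared IP
    password = anypass
    masterscript = /usr/local/ifstated/bin/become-master.sh
    backupscript = /usr/local/ifstated/bin/become-slave.sh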
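And a sketch of what such a masterscript could do when freevrrpd
promotes the surviving node (stage 3). It assumes da0 carries the
gmirror metadata written in stage 0, so the mirror comes up degraded
after a "gmirror load"; the script name is hypothetical and the
PostgreSQL rc path depends on the port:

    #!/bin/sh
    # become-master.sh (hypothetical): run by freevrrpd when this node
    # takes over the shared IP.
    pkill ggated                     # stop exporting the local disk
    gmirror load                     # taste da0; mirror/gm0 appears degraded
    fsck -p /dev/mirror/gm0
    mount /dev/mirror/gm0 /data
    /usr/local/etc/rc.d/postgresql.sh start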
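The startup role decision is just a ping against the shared IP before
freevrrpd starts; a sketch, with the shared IP and timeouts as
placeholders:

    #!/bin/sh
    # If the shared IP answers, somebody is already master, so we come
    # up as slave; otherwise we come up as master.
    shared_ip=192.168.1.100
    if ping -c 3 -t 5 ${shared_ip} >/dev/null 2>&1; then
        ln -sf /etc/rc.cluster.slave  /etc/rc.cluster
    else
        ln -sf /etc/rc.cluster.master /etc/rc.cluster
    fi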
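The staggered startup is just two different loader values, for
example:

    # node0 /boot/loader.conf
    autoboot_delay="2"

    # node1 /boot/loader.conf
    autoboot_delay="15"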
The attached scripts have some known and maybe unknown issues:

- In the /usr/local/ifstated/bin/ggatec-gmirror.sh script I can't get
  the unit name in one line:
      ggate=`/sbin/ggatec ${ggatec_flags}`
  (a possible workaround is sketched below)
- When I restart /etc/rc.d/freevrrpd many times I end up with many
  ggate devices.
- The parameters for ggate{c,d} may not be ideal, but they work on a
  gigabit interconnect.
- I can't figure out what I must open on the firewall for freevrrpd
  (see the note below).
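One possible workaround for the first two issues, assuming
${ggatec_flags} holds the options used in the script: pin the unit
number with ggatec's -u flag instead of parsing its output, and
destroy any stale instance first so devices do not pile up across
freevrrpd restarts:

    # Hypothetical replacement for the one-liner in ggatec-gmirror.sh.
    /sbin/ggatec destroy -f -u 0 2>/dev/null   # clean up a stale ggate0
    /sbin/ggatec create -u 0 ${ggatec_flags} 192.168.1.2 /dev/da0
    ggate=ggate0                               # name is now known in advance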
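As for the firewall: VRRP advertisements are IP protocol 112 sent to
the multicast group 224.0.0.18, so passing that traffic on the
interconnect should be enough. An ipfw example (rule number and
interface are assumptions):

    ipfw add 1000 allow ip from any to 224.0.0.18 via em0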
Bye,
Dmitriy

[attachment: cluster.tbz2]