I've recently started to set up a test MySQL cluster in my VMware environment on a
Windows Vista box.
I've set up three Debian Lenny virtual machines.
1) 10.10.10.108 - this is the box I connect to the cluster with
2) 10.10.10.98 - MySQL node, heartbeat, ldirectord
3) 10.10.10.105 - MySQL node, heartbeat, ldirectord
I have heartbeat working, and failing over nicely.
I have ldirectord starting and stopping nicely with heartbeat... and ldirectord
even negotiates and connects to MySQL.
ldirectord.cf:
# Global Directives
checktimeout=10
checkinterval=2
autoreload=no
logfile="local0"
quiescent=yes

virtual = 10.10.10.99:3306
        service = mysql
        real = 10.10.10.105:3306 gate
        real = 10.10.10.98:3306 gate
        checktype = negotiate
        login = "ldirector"
        passwd = "xxxxx"
        database = "test"
        request = "SELECT UserID FROM UserInfo WHERE UserID = 123"
        scheduler = wrr
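For reference, the negotiate check configured above is roughly what you'd get by running the mysql client by hand from the director (a sketch, assuming the mysql client is installed; host, user, and query are taken from the config above):

```shell
# Manually reproduce the ldirectord "negotiate" health check:
# log in as the check user and run the configured request
# against the "test" database on each real server.
mysql -h 10.10.10.105 -u ldirector -pxxxxx test \
      -e "SELECT UserID FROM UserInfo WHERE UserID = 123"
mysql -h 10.10.10.98 -u ldirector -pxxxxx test \
      -e "SELECT UserID FROM UserInfo WHERE UserID = 123"
```

If either command hangs or fails, ldirectord will mark that real server down (or set its weight to 0, since quiescent=yes).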
However, both machines refuse to "Route" traffic to the other...
For example if I have ipvsadm -Ln output of
TCP 10.10.10.99:3306 wrr
-> 10.10.10.105:3306 Local 1 0 0
-> 10.10.10.98:3306 Route 1 0 0
The master has 10.10.10.99 bound on bond0; the slave has it bound on the loopback interface.
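For context, this is the usual LVS-DR arrangement: the director owns the VIP on a real interface, and each real server binds it on loopback with ARP replies suppressed so that only the director answers ARP for the VIP. A typical real-server setup looks something like this (a sketch of the standard technique, not taken from the post; interface names and sysctl values are the common defaults for this setup):

```shell
# Bind the VIP on loopback with a /32 mask so the real server
# accepts packets addressed to it after the director's MAC rewrite.
ip addr add 10.10.10.99/32 dev lo

# Suppress ARP for the loopback-bound VIP so the real server does
# not steal ARP traffic from the director.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
```

If the ARP suppression is missing (or the VIP isn't bound on lo at all), "Route" entries show the timeout behaviour described below.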
In the example above, the master box successfully makes a connection to all 3
ip addresses.
In the example above, a third Debian box, 10.10.10.108, can connect only "sometimes",
depending on which real server its connection is directed to.
After some research I found that if I disable one real server so that the output becomes:
TCP 10.10.10.99:3306 wrr
-> 10.10.10.105:3306 Local 1 0 0
Then all MySQL connections from the remote box work fine.
However, when the output is this:
TCP 10.10.10.99:3306 wrr
-> 10.10.10.98:3306 Route 1 0 0
All MySQL connections from the remote box time out.
If I make the other box the master, it does exactly the same thing: the
"Local" entry works fine but the "Route" entry fails.
Any ideas on what I'm missing to make the "Route" piece work?
Thanks
-Jason
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems