Dear Steve,
Regarding UCARP: it is not broken. Zen Community Edition 3.10.1 uses the
ucarp from the Debian base. I remember that for 3.10 we developed more
integration with the Debian base, as requested, so UCARP is now based on
Debian 8; Zen 2.6 used our own compilation.
Related to KarmaLB, I would
So, to clear this up: it appears that I'm running into some sort of issue
on the virtualization platform. It is odd, though, because my EE edition
reports the clusters are healthy. In any case, I will elaborate later, but
I wanted to send something to point out that, yet again, it was RHEV not
If UCARP is broken in the Zen Community Edition, you could try my fork at
karmalb.org.uk, which uses a later version, as I keep my distribution more
up to date with Debian releases; although, since not many people have tried
it in anger, it may well suffer from the same issues as the 3.10.1 version.
Dear Christopher, Zen EE and Zen Community Edition both implement
clustering with the VRRP protocol via ucarp, but the two ucarp versions are
not the same. In the Community Edition, ucarp, like the other binaries,
comes from the Debian package, whereas the Enterprise Edition uses an
improved ucarp version
Well, I feel like I'm getting further from the solution at the moment.
I just noticed that another one of my community edition clusters is not
working either (it looks like the same problem).
What is interesting is that my EE cluster (running on VMs on the same VM
platform, servers, switches)
I just got some time to look at this again. Here are some entries from my
systems, though first let me give an overview of the setup in case it helps:
Nodes:
orldc-prod-netlb01
orldc-prod-netlb02
They have 2 interfaces each (these are VMs running on RHEV/oVirt, btw)
On orldc-prod-netlb01:
-
Dear Christopher, if the issue appears in your cluster at the moment of
the first configuration, then the problem must be related to VRRP packets.
If VRRP packets are not received, a cluster node can't decide whether it is
master or backup, so both cluster nodes will be
I feel good about the switches since I have other ZenLB instances running
fine. The appropriate MAC spoofing options are enabled for these VMs, so I
believe that would be ok, but is there some method to verify that it is
working (I'd like to have that as a check regardless)?
I will get the logs
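One way to verify on the wire whether the cluster's advertisements are actually being delivered (a hedged sketch; `eth0` is a placeholder for the cluster interface, not a name taken from this thread) is a capture on the backup node:

```shell
# Watch for VRRP/CARP advertisements arriving on the cluster interface.
# Both VRRP and ucarp's CARP use IP protocol 112, multicast to 224.0.0.18,
# so silence here on the backup node means the adverts are being dropped
# somewhere in between (e.g. switch config or the VM platform's
# MAC/anti-spoofing filters).
tcpdump -ni eth0 'ip proto 112 and host 224.0.0.18'
```

Running the same capture on the master should show its own outgoing adverts; if the master is sending but the backup's capture stays silent, the drop is in the path between them, which would fit an RHEV/oVirt filtering issue.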
I appear to have this issue, and yet my content3-3.cgi seems to be correct.
I have other clusters (both commercial EE and CE) working with MAC spoofing
enabled on the systems (RHEV/oVirt), so this seems like an odd one to me:
root@orldc-prod-netlb02:/usr/local/zenloadbalancer/www# sha1sum
That did the trick. Thank you.
Shawn Hawkins | Administrative Officer | Network Administrator
Texas Bank and Trust | P.O. Box 3188 | Longview, TX 75606
p. (903) 237-5674 | f. (903) 237-1875 | shawk...@texasbankandtrust.com
Hi Shawn,
Emilio's patched content3-3.cgi resolved the issue for me:
content3-3.cgi
SHA1: F859509C00DB125FD35BF24FCC9D6A5CA8A90E68
SHA256: 8C8C652637C12383B12357C1C67913619D9EFB41B2572A47E07078F4861D80F0
Have you got the same file, located at:
/usr/local/zenloadbalancer/www/content3-3.cgi
?
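To compare the installed file against a quoted digest without eyeballing long hex strings, a small helper like this works (a sketch; `check_sum` is a hypothetical name, not part of Zen):

```shell
# check_sum FILE EXPECTED_SHA1 — prints OK when the file's SHA-1 matches.
check_sum() {
    actual=$(sha1sum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then echo OK; else echo "MISMATCH: got $actual"; fi
}

# Path and digest as quoted in the thread (lower-cased, as sha1sum prints it):
if [ -f /usr/local/zenloadbalancer/www/content3-3.cgi ]; then
    check_sum /usr/local/zenloadbalancer/www/content3-3.cgi \
        f859509c00db125fd35bf24fcc9d6a5ca8a90e68
fi
```

The same pattern works for the SHA-256 by swapping in `sha256sum` and the second digest.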
I am still having problems configuring the cluster in version 3.10.1 even
though it is supposed to be fixed. I have verified that both load balancers
can ping each other and I can ssh between the two. Hostnames are resolvable
from both devices. However, after I put in the password and click
Please could you attach the zenloadbalancer.log file?
/usr/local/zenloadbalancer/logs/zenloadbalancer.log
Then execute the following commands:
uname -a
grep version /usr/local/zenloadbalancer/config/global.conf
dpkg -l | grep zen
And paste the output.
Thanks!
2016-04-13 22:16 GMT+02:00 Shawn Hawkins:
> I am still having problems configuring the cluster in version 3.10.1 even
> though it is supposed to be fixed. I have verified that both load balancers
> can ping each other and I can ssh between the two. Hostnames are resolvable
> from both devices. However, after I put in the password and click