adam chandley wrote:
> Hi,
>
> Adam C here from Greenevillesun ((dot)) com. We operate a number of
> newspapers around the area and provide a modest amount of web hosting for
> local businesses. We currently have 14 servers in an LVS cluster using LVS
> 1.0.4 on Red Hat 7. Naturally, they were set up using Piranha-Nanny-LVS.cf
> and all the other nasty Red Hat-proprietary tools. However, since I've been
> with the company, these are the only methods I've really learned to use.
I am not sure there are one-to-one replacements, but there are: keepalived (http://www.keepalived.org/) and ipvsman (http://wiki.inqbus.de/twiki/bin/view/Ipvsman), among others. I like ipvsman; it is the seed of a cool system but still a little young (0.92, I think).

You could also go a simpler route and use something like monit (http://www.tildeslash.com/monit/) to monitor the real servers and, when one goes up or down, exec a script that runs ipvsadm to update the running LVS config. That's what I would do ;) But just FYI, that wouldn't be instantaneous: monit only polls on an interval (every 30 seconds or so), so if you have some dweeb managers who think "OMG, we can't wait 30-60 seconds to switch a real server on/off", that won't work for you and you'll need something that hammers your real servers constantly.

> That said, I've become the systems administrator and have chosen to rebuild
> the cluster anew with much more advanced hardware. I've purchased just 3
> servers to take the place of the numerous slow P3/P4 machines that were
> previously operating the business. The new LVS/DNS servers are Xeon 2.33
> w/ 2 GB RAM and RAID 1. The "real server" behind it is an 8-core Xeon 3.0,
> 12 MB cache, 1333 bus, w/ 16 GB RAM -- an amazing improvement over 3 P4
> 2.0s as a web server cluster!

I would be interested to see whether you use all that power, because my experience is that you run into OS-level limits (number of sockets, open file descriptors, etc.) before you run into raw machine-power limits on any modern hardware, at least if you are serving mostly static content. If you are doing dynamic content, then yes, you can chew up memory and CPU. But serving static content, an 8-core 16 GB box gets me no more requests/sec than a dual-core 0.5 GB box when running thttpd/nginx/lighttpd. So you may want to think about Xen and dividing that power up: static vs. dynamic machines (domUs in Xen), or separating your web-hosting clients on some domUs from your newspaper servers on others.
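For the monit + ipvsadm route, here's a rough sketch of the hook script monit could exec (the VIP and real-server addresses are placeholders, and set_real_server is just a name I made up; quiescing with weight 0 lets existing connections drain, whereas ipvsadm -d would yank the entry entirely):

```shell
#!/bin/sh
# Sketch of a hook monit could exec when a real server changes state.
# All addresses are placeholders -- substitute your actual VIP/real servers.
VIP=192.168.0.100:80    # virtual (TCP) service on the director
RS=10.0.0.11:80         # the real server being toggled

set_real_server() {
  case "$1" in
    down)
      # Weight 0: the scheduler sends no new connections; existing ones drain.
      ipvsadm -e -t "$VIP" -r "$RS" -m -w 0
      ;;
    up)
      # Restore the weight so traffic flows again.
      ipvsadm -e -t "$VIP" -r "$RS" -m -w 1
      ;;
    *)
      echo "usage: set_real_server up|down" >&2
      return 1
      ;;
  esac
}
```

monit's "exec" action would then call this with "down" on a failed check and "up" on recovery.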
OT: [BTW, what CPU model number is the 3.0 GHz / 12 MB cache part? Harpertown...? I guess you're screwed if your one real server goes down. If you were buying those machines from Dell, do yourself a favor and check out the Supermicro 6015T-TB/TV "Twin": 2 machines in 1U, each with 2 drives, 2 CPUs, and 8 FB-DIMM/1333 MHz memory slots. I spec'd one out at Newegg and it was less than half the cost of a Dell 2950, with half the electrical power use (and half the rack space!).]

> Now, the intent is to use Fedora 8 on the LVS servers, which will also
> serve as DNS1 and DNS2 servers. I've installed ipvsadm and heartbeat with
> no problems, but I've totally grown up with LVS.cf as used by Red Hat.
> I've learned all the wrong ways and I know it.
>
> What types of utilities exist to help me bridge the knowledge gap? I have
> a SLIGHT understanding of how to use plain ipvsadm to perform the NAT
> work, but what about the way Nanny worked with ipvsadm to drop defunct
> connections?

Do you have a pointer to docs on Nanny? I couldn't find anything in 10 seconds of googling.

> _______________________________________________
> LinuxVirtualServer.org mailing list - [email protected]
> Send requests to [EMAIL PROTECTED]
> or go to http://lists.graemef.net/mailman/listinfo/lvs-users
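And on the ipvsadm-for-NAT question in the quote above: a minimal sketch of the plain-ipvsadm equivalent (the addresses, the wlc scheduler choice, and the two-real-server layout are all assumptions, not your actual config):

```shell
#!/bin/sh
# Minimal LVS-NAT setup with bare ipvsadm.
# VIP and real-server addresses below are placeholders.

ipvsadm -C                              # clear any existing virtual server table
ipvsadm -A -t 192.168.0.100:80 -s wlc   # add virtual service, weighted least-connection
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m -w 1   # -m = NAT (masquerading)
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m -w 1

# The director must also forward packets for NAT to work:
echo 1 > /proc/sys/net/ipv4/ip_forward
```

The real servers then just need their default route pointed at the director's inside address so reply traffic comes back through it.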
