Tristan Donaldson wrote:

Hi David,

I designed and implemented a setup like this for our company. We run two data-storage servers (primary and backup) which replicate to each other using DRBD. In front of those we run 4 diskless servers which boot via PXE and mount their root filesystems via NFSv3 from the backend storage servers. On top of these front-end servers we run vservers, which allow us to separate all of our services and move them between front-end servers to deal with hardware failures and load.
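Roughly, the DRBD side is just a resource definition of this shape — a minimal sketch, where the hostnames, disks and addresses are placeholders rather than our real values:

```
# /etc/drbd.conf -- sketch only; hostnames, disks and addresses
# below are examples, not our actual configuration.
resource r0 {
    protocol C;                  # synchronous replication between the pair
    on storage1 {
        device    /dev/drbd0;    # replicated block device the primary exports
        disk      /dev/sda3;     # local backing partition
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on storage2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```

The filesystem the front ends mount over NFS then lives on /dev/drbd0 on whichever node is currently primary.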

Your setup sounds much like what I'm planning. We have a primary and secondary storage server, and are just starting to test DRBD now. We weren't going to go with diskless frontends, though. Like you, we have all services inside vservers now (except for two legacy hosting boxes, which we're still migrating things from).

You have to be careful about which applications you run on the front end, as these need to be NFS-friendly, but most things work. We did have problems with mail queues in Postfix, which initially caused lots of corruption; we fixed this by running the mail queue inside a RAM disk at first, and later switched to using a loopback device.
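The loopback setup is simple enough to sketch — something along these lines, though the paths and size here are examples rather than our exact values:

```shell
# Sketch of moving the Postfix queue off NFS onto a local
# loopback-mounted filesystem. Paths and sizes are illustrative.

# 1. Create a 512 MB file-backed image on local disk
dd if=/dev/zero of=/var/lib/postfix-queue.img bs=1M count=512

# 2. Put an ext3 filesystem inside it (-F: don't ask, it's a plain file)
mke2fs -j -F /var/lib/postfix-queue.img

# 3. Mount it over the queue directory (root required; add an
#    /etc/fstab entry with the "loop" option to make it permanent)
mount -o loop /var/lib/postfix-queue.img /var/spool/postfix
```

The point is that the queue's locking and rename behaviour then happens on a local filesystem, not over NFS.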

That's a bit disappointing, we're looking at moving to postfix for mail, I thought postfix actually played nicely with NFS. We currently use qmail and I really really want to get away from it (don't ask, or I'll probably start ranting :p). I suppose we can have the queue on local storage on the front-ends, it's just less elegant. :(


We did have a number of issues with I/O performance. Since we actually have a firewall between the NFS servers and the front-end servers, all of the UDP traffic creating states on the firewall caused performance problems, so we have changed to using NFS over TCP. We also use NFSv3 rather than NFSv2, as it is a lot faster when running under the sync option (which you have to run).
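In practice that just means mounting with the tcp option; in /etc/fstab form it looks something like this (server name and paths are placeholders, not our real ones):

```
# /etc/fstab -- sketch only; server and paths are examples
storage1:/export/vservers  /vservers  nfs  tcp,nfsvers=3,hard,intr  0  0
```

The sync option goes on the server-side export; the client just needs tcp and nfsvers=3.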


I hadn't considered this. Something I'll need to look into.


Another thing to note is we don't run any of our major databases across NFS. We run them inside vservers on the datastorage servers.


We've come to the same conclusion. There are some small databases that will probably run over NFS, but we'll move our primary ones to the file server. We also do a fair amount of Zope hosting, and we may end up moving the ZODBs to a vserver on the file server as well.

We run everything across gigabit ethernet.

For statistics: we are currently running 17 vservers on 3 servers (1 spare), all mounted across NFS. Our NFS server runs 5 vservers containing different databases. Our bandwidth to the NFS server sits at pretty much 2.5Mbit/s most of the time (peaking to 25Mbit/s at times). Most of our vservers are not under heavy use, but we have peaked our outgoing internet traffic from this (HTTP) hosting at about 20Mbit/s without any major performance issues.

We have about 20 vservers at the moment. We're deploying the new setup with the storage servers and two frontends, probably bringing the total number of frontends up to about 5 as we migrate vservers off the current boxes (we'll be pulling those back to reconfigure as we migrate).

If your hosting is heavily I/O-intensive, then you will probably have issues; if it's just static HTML files you should be OK, as each file should only be loaded over NFS once and then cached locally. For our environment, all of the high-I/O stuff is inside the database, which doesn't run over NFS.

Our main intensive stuff is the database at the moment, and the Zopes (large ZODBs can cause a lot of disk access), so hopefully we'll manage. :) For reference, with bonnie++ over NFS we're seeing about 40MB/s writes and 100MB/s reads. That hardly tells the whole story, but it's a start.
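For anyone wanting to compare, those numbers came from an invocation along these lines (the directory, size and user are placeholders, not exactly what we ran):

```shell
# Example bonnie++ run against the NFS mount. -s should be at least
# twice the machine's RAM so the page cache doesn't flatter the
# results; -u is needed when running as root.
bonnie++ -d /mnt/nfs-test -s 4g -u nobody
```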

Thanks to everyone for their comments, certainly some stuff we'll need to look at a bit more.

Regards,
-David
_______________________________________________
Vserver mailing list
[EMAIL PROTECTED]
http://list.linux-vserver.org/mailman/listinfo/vserver
