> Thanks for the info!

NP... A bunch of late nights' worth of info has not gone to waste.

> How do you find NFS performance? (Did you use any special 
> tweaking/mount
> options?)
> And what are you using for auth?(NIS/LDAP etc)

For the most part the NFS performance is good, even with a 100BaseT
switch as the backend switch for the NFS share. It runs at a constant
400KBps on average, with spikes up to 2MBps from remote rsync processes
backing up data to the NFS store (we use it as our backup dumping ground
as well), so it should scale rather well.

Our current mount options are:

I have also heard that by raising the MTU of the internal (NFS)
interface it is possible to achieve greater performance, but your switch
must support Jumbo Frames, and I am only aware of a couple of GigE
switches that do. The rationale behind this is that NFS's default packet
size is 4K, so by bumping the MTU to a similarly large value (4K-6K)
there is no fragmenting of the NFS packets. At least so I have heard. ;)
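The arithmetic behind that claim is easy to sanity-check. This is just a
sketch of the fragmentation math (IP header of 20 bytes, UDP header of 8,
fragment payloads aligned to 8 bytes), not anything specific to our setup:

```shell
# A 4K NFS payload plus its UDP header rides in one IP datagram;
# each IP fragment carries at most (MTU - 20-byte IP header) bytes of it.
payload=$((4096 + 8))
per_frag_1500=$(( (1500 - 20) / 8 * 8 ))   # fragment payloads are 8-byte aligned
frags_1500=$(( (payload + per_frag_1500 - 1) / per_frag_1500 ))
per_frag_6000=$(( (6000 - 20) / 8 * 8 ))
frags_6000=$(( (payload + per_frag_6000 - 1) / per_frag_6000 ))
echo "MTU 1500 -> $frags_1500 fragments; MTU 6000 -> $frags_6000 fragment(s)"
```

So at the standard 1500 MTU every 4K NFS request gets split into three IP
fragments, while at a 6000 MTU it goes out whole, which is the whole point
of the Jumbo Frames exercise.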

As for authentication, we only have a few admins so we just set up the
accounts manually. We had considered NIS, but the risk-to-reward ratio
was a little too high for very minimal gain. I like LDAP, but the added
complexity isn't something I want to deal with right now. Maybe in the
future.

> I would like to consider a Linux alternative, but majority of 
> our support staff are not Linux savvy...

We are primarily a FreeBSD shop ourselves... I have a background in both
BSD (HP-UX) as well as Linux, so I can easily switch back and forth
between the two. Occasionally I hit something that causes a problem
(netstat -nap doesn't work on FBSD, and I really wish Linux had
something like "systat -vmstat"), but I think that newer iterations of
FBSD are close enough to Linux as far as the admin utils go that I don't
really have a problem. Our boss is talking more and more about the money
being spent on Linux by major players (IBM, et al.) and how FBSD is an
afterthought. The 3ware support in FBSD comes to mind on that one:
3ware support will typically lag six months behind Linux.

Our current mail cluster is FBSD based, but because of the need for
DRBD, we have to switch our NFS to Linux, as (to my knowledge) FBSD
doesn't have anything like DRBD available for it yet, barring a shared
SCSI implementation. I have been told that mixing NFS implementations
from different vendors can lead to weird problems, and I just want to
avoid that altogether.
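For anyone unfamiliar with DRBD, the pairing is declared in /etc/drbd.conf.
This is only a rough sketch in the current drbd.conf style (the syntax may
differ for whatever version you run, and all hostnames, devices, and IPs
here are made up):

```
resource nfsdata {
  protocol C;               # synchronous replication: writes ack on both nodes
  on nfsmaster {
    device    /dev/drbd0;   # block device the filesystem actually mounts
    disk      /dev/sda3;    # underlying partition being mirrored
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on nfsslave {
    device    /dev/drbd0;
    disk      /dev/sda3;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```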

> Just out of interest - What are you using to sync 
> data(configs etc) - You also mention NFS "servers"...So I 
> assume you are running more than one behind a 
> loadbalancer...how are you synching data between them?

Our qmail configs are being shared out from the NFS server
(control/*, users/*), with control/me being a symbolic link to
/var/qmail/me so that each machine maintains its own identity in the
cluster. I am still not sold on this idea, but I think that for
diagnostic purposes it is probably the better solution.
(--enable-file-locking=n in vpopmail)
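The symlink trick is easier to see in miniature. Here is a toy demo in a
scratch directory (the real paths would be the control/ tree on the NFS
export and /var/qmail/me locally; the hostname is invented):

```shell
# Shared tree, local target: every node sees the same control/me symlink
# on the NFS share, but it resolves to that node's own local file.
demo=$(mktemp -d)
mkdir -p "$demo/nfs/control" "$demo/local"
echo "mail1.example.com" > "$demo/local/me"     # this node's identity
ln -s "$demo/local/me" "$demo/nfs/control/me"   # link lives in the shared tree
cat "$demo/nfs/control/me"                      # prints mail1.example.com
```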

The redundant NFS setup is only in the design phase right now. We have a
single NFS server with RAID 1+0. The plan is to have an additional
server (also on the same internal LAN, behind the load balancer) that
will be syncing all data from the master (read: current NFS server) to
the slave via DRBD. The slave will monitor the master via heartbeat
(http://www.linux-ha.org): heartbeat "pings" the master over a serial
cable on a set interval, checking that it still responds. In the event
that heartbeat is unable to contact the master, the slave issues an ARP
broadcast, effectively poisoning the ARP caches of the machines talking
to the master (fake is the app that handles that). All subsequent
traffic that was destined for the master's IP address will then be sent
to the slave. I have not run any tests on this configuration as of yet,
but it is planned. There is a minor delay in the ARP propagation, but it
is rather quick... like 10-15 seconds.
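To give a feel for what the heartbeat side looks like, here is a sketch
in the heartbeat v1 config style. The serial device, node names, service
IP, and init script name are all assumptions for illustration, not our
actual config:

```
# /etc/ha.d/ha.cf (on both nodes)
serial    /dev/ttyS0          # heartbeat link over the serial cable
keepalive 2                   # seconds between heartbeats
deadtime  10                  # declare the peer dead after this long
node      nfsmaster nfsslave

# /etc/ha.d/haresources (identical on both nodes)
# "preferred node, service IP to take over, service to start"
nfsmaster 192.168.1.10 nfs-kernel-server
```

On failover, heartbeat brings the service IP up on the slave and sends
the gratuitous ARPs described above so clients follow it over.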

Hope that answers some of your questions.

Tom Walsh
Network Administrator
