On Wed, May 29, 2013 at 6:46 AM, joris dedieu <joris.ded...@gmail.com> wrote:
> Hi Syd,
>> I'm guessing an NFS share from the 2 webservers to the 1 fileserver.
>> However, from a bit of research with load balanced magento setups there
>> seems to be a lot of negative comments about using NFS in this way.
> It's always better to avoid NFS as it introduces a point of failure.
It isn't always better. We have several TB of heavily accessed static media
files being served to our web servers over NFS. I don't think we could do
this another way viably.
If the NFS server is built with redundant power, ECC memory, RAID and is
connected to a UPS, I wouldn't be too worried about using NFS as long as
the hardware is properly chosen to accommodate the workload and the OS and
NFS are set up correctly.
> Sometimes just syncing the files on both servers with rsync / unison /
> snapshots / whatever is preferable (it strongly depends on the number
> of files and the number of file changes).
If the amount of data is small-ish and not so heavily accessed, maybe. I
can see this getting difficult to manage pretty quickly though.
> A crashy NFS server can
> leave inconsistent mount points on the webservers.
Agreed. It's important to make sure the server is set up correctly, well
tested under load (sysbench, etc.) and built to withstand common failures
like disk and power.
> Anyway it works, but you must qualify your server and client versions
> and setups before putting it into production. Avoid lockd unless it's
> absolutely necessary,
I'm not sure I'd worry about lockd at this point. I think it would be
better for this workload to leave locking enabled unless absolutely
necessary. We run high volume w/ locking (high amount of read traffic, low
amount of write traffic). I'm sure it makes sense to disable locks in some
cases, but probably not in this case.
> enable jumbo frames,
I wouldn't worry about jumbo frames either at this point. Keep it simple.
Our six year old NetApp filer is handling a LOT of traffic easily w/o jumbo
frames.
> find the right rsize and wsize,
Agreed. Here are the mount options we use, mostly straight out of NetApp
best practices (but I think they're good options to use in many cases):
proto=tcp,rw,nosuid,nodev,hard,nointr,timeo=600,retrans=2,rsize=32768,wsize=32768,bg,nfsvers=3,_netdev,actimeo=60
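For reference, here's roughly how that option string would sit in /etc/fstab on a web server (the server name "filer01" and export path are placeholders, not from our setup):

```
# /etc/fstab
filer01:/vol/media  /mnt/media  nfs  proto=tcp,rw,nosuid,nodev,hard,nointr,timeo=600,retrans=2,rsize=32768,wsize=32768,bg,nfsvers=3,_netdev,actimeo=60  0 0
```

The `_netdev` flag keeps the mount from being attempted before the network is up, and `bg` retries in the background instead of hanging the boot if the filer is unreachable.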
> check and recheck your disks' health, your RAID settings, your IO
> performance.
Totally agree. What type of disk, RAID levels, etc. are all workload
dependent and important to get right. I always put a LOT of thought and
research into this.
Once the server has been built, I would use sysbench to test raw disk
performance of the NFS server locally, then set up NFS and run sysbench
over it to make sure performance is still acceptable before putting it into
production. I can get near raw disk performance over NFS pretty easily.
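Concretely, the two-stage check looks something like the below, using sysbench's fileio test (the sizes and paths are illustrative, and this is the pre-1.0 `--test=` syntax; tune `--file-total-size` to exceed RAM so you're measuring disk, not cache):

```shell
# Stage 1: raw disk performance, run locally on the NFS server.
cd /export/media
sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw run
sysbench --test=fileio --file-total-size=8G cleanup

# Stage 2: the same test from a web server, against the NFS mount.
# Compare throughput and latency against stage 1.
cd /mnt/media
sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw run
sysbench --test=fileio --file-total-size=8G cleanup
```

If stage 2 is dramatically worse than stage 1, look at the network path and NFS settings before blaming the disks.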
> If possible, use varnish on the web servers for caching
> static content or serve the static files directly from the file
> server using nginx.
If these two web servers are Apache / PHP, it might just be simpler for now
to set up Apache to serve both PHP and the static files. There is a ton of
documentation out there on how to do this correctly. I think this would go
a LONG way.
We have separate pools of servers for dynamic and static files, but if set
up right I'm pretty sure we could easily use Apache for the whole thing. I
prefer keeping things simple and tweaking only when necessary.
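To give an idea of what "simple" means here, a bare-bones vhost like this (names and paths are made up for illustration; mod_php and Apache 2.2-style access directives assumed) covers both PHP and static files in one place:

```
# Minimal Apache vhost sketch: mod_php handles .php in-process,
# everything else (images, css, js) is served by Apache as plain files.
<VirtualHost *:80>
    ServerName shop.example.com
    DocumentRoot /var/www/magento

    <Directory /var/www/magento>
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>

    # Let browsers cache static media (requires mod_expires).
    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 7 days"
        ExpiresByType text/css  "access plus 7 days"
    </IfModule>
</VirtualHost>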
> Never forget that NFS is slow.
We're serving upwards of 400-500 Mbps (7000 NFS ops/s) of consistent traffic
from a single six year old NetApp filer over NFS. This thing has two single
core 32-bit Xeons and only 2GB of memory. Granted, these boxes were VERY
expensive and have 15K RPM Fibre Channel disks, but I wouldn't have a
problem using an NFS server properly built with common hardware.
In fact, we're set to replace the filer this year, and I've been looking at
hardware from here:
http://www.pc-pitstop.com/sas_expanders/
The only other thing I would add is that I'd probably be more comfortable
using Red Hat / CentOS as an NFS server than Debian/Ubuntu/Others. We're
even considering paying for Red Hat entitlements and the storage add-on when
the time comes. Red Hat's NFS implementation seems more stable and better
supported than others, and their documentation is very good.
Brendon