Monday, November 24, 2003, 5:44:49 PM, Nick wrote:

NH> I wholeheartedly agree. Intermezzo looks and feels like a research FS,
NH> which it's pretty darn good for. It's not a production FS at this point.
ACK - but I hope DRBD is better

NH> This is a little nappy if you're going to try and "cluster" these, as
NH> you'll get mail showing up and disappearing based on where the user hits
NH> until the rsync finishes. I don't know about anybody else, but I also
NH> get nervous cron'ing an rsync which removes files.
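
(for the archives: the kind of cron'ed rsync being talked about is roughly
the /etc/crontab line below - hostnames and paths are made up, and the
--delete is exactly the part that makes me nervous too)

    # pull the mail spool from the primary every 5 minutes; --delete
    # removes local files that vanished on the primary - risky if it
    # runs while a delivery or an expunge is in progress
    */5 * * * *  root  rsync -a --delete mail1:/var/spool/imap/ /var/spool/imap/
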
NH> Consider separating your MXs away from your mail storage boxes.
NH> Yes. Double Yes.
That's already done.

NH> If people are paying to access their mailbox, I disagree 100%. 3 or 4 9s
NH> of availability isn't that hard to achieve with a well engineered
NH> solution. However, be ready to spend the right amount of money.
hehehe, this is the problem :)
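
(just to put numbers on it: a year is about 525600 minutes, so three nines
allow roughly 8.8 hours of downtime per year, four nines about 53 minutes,
and five nines around 5 minutes - that's where the money goes)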

NH> There are several ways to solve this:
NH> Shared SCSI storage with active/passive failover using something in the
NH> realm of Linux-HA or Veritas Cluster Services (opposite ends of the
NH> price spectrum, and also opposite ends of the ability-to-sleep-at-night
NH> spectrum). Simply hang an external SCSI (or Fibre Channel) storage array
NH> off of two boxes on its two channels, then have a cluster service
NH> mediating which box has access to the storage. This means one box is
NH> always going to waste, something I'm not real keen on. You can expand
NH> this by having two external arrays and criss-crossing, with the ability
NH> for each machine to take control of both RAIDs, and then offer services
NH> from each with virtual IPs and fail over in each direction.
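
(to make the Linux-HA variant concrete: with heartbeat v1 an active/passive
pair boils down to an ha.cf plus one haresources line - node names, the
service IP, the shared device and the "imapd" init script below are all
invented, just a sketch of the idea)

    # /etc/ha.d/ha.cf (same on both nodes)
    keepalive 2
    deadtime 30
    bcast eth1
    node mail1 mail2

    # /etc/ha.d/haresources - mail1 is the preferred owner; on failover
    # mail2 grabs the service IP, mounts the shared array and starts imapd
    mail1 IPaddr::192.168.1.50/24/eth0 Filesystem::/dev/sdc1::/var/spool/imap::ext3 imapd
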
NH> The other solution is to build a real cluster, with redundant storage
NH> and front-end machines delivering mail via NFS. Then you can split
NH> out as many or as few functions into their own clusters, and just
NH> access them via virtual IPs.
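
(the "access them via virtual IPs" bit is literally just mounting the spool
from a floating address on every front-end box, e.g. an fstab line like the
one below - the hostname and export path are invented)

    # mail spool lives behind the cluster's virtual IP, so a failover
    # of the NFS head is invisible to the SMTP/POP/IMAP front-ends
    mailstore-vip:/export/spool/imap  /var/spool/imap  nfs  rw,hard,intr  0 0
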
NH> This is the setup I'm migrating towards, piece by piece, for my hosted
NH> email platform. Right now I've got four front-end SMTP/POP/IMAP servers
NH> delivering to an active/passive Veritas cluster on mirrored external
NH> RAID (two external RAID boxes mirrored in software by Veritas). I'm
NH> moving this to a pair of NetApp filer heads, each offering different
NH> services and able to take over for each other (I feel this is less
NH> wasted hardware).  I'm also running a pair of failover-capable MySQL
NH> servers on my NFS servers, though those'll soon be moved off to a
NH> different pair of servers with more horsepower. I'm using Linux-HA tools
NH> to cluster MySQL, Veritas Foundation Suite and Cluster Server to cluster
NH> NFS, and a Foundry load balancer to cluster/balance my front-end
NH> servers.
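
(same trick for the MySQL pair, by the way - under heartbeat that is
essentially one more haresources line; the node name, IP and use of the
stock mysql init script are my assumptions, and the data directory has to
sit on storage both nodes can reach)

    # /etc/ha.d/haresources on the DB pair: db1 normally holds the
    # virtual IP and runs mysqld, db2 takes both over if db1 dies
    db1 IPaddr::10.0.0.20/24/eth0 mysql
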
NH> It's a rather complex setup, but so far I've got 5 9s of availability,
NH> which means I get to sleep through the night, every night.
NH> Hope that provides some suggestions.
It does, thank you, but I guess these setups are a little oversized
for me. The main problem is that I won't get a "fitting" amount of money
to realize these scenarios. We did really well with this single box
for about 3 years, but recently there were some nasty downtimes due
to these RAID problems. Besides, the OS is a SuSE 7.x and a lot of
patches are not applied (no SMTP AUTH, but "SMTP after IMAP" instead...),
so I thought, if I have to reinstall the system, I might do it with
a spare server to increase the availability.
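
For the spare-server idea I'd probably pair the old box with the spare via
DRBD - roughly a resource definition like the one below, one partition
mirrored over a crossover cable (hostnames, devices and addresses are
invented, and this is the newer drbd.conf style; the 0.6-era syntax looks
a bit different):

    resource mailspool {
      protocol C;                  # synchronous replication
      on mail1 {
        device    /dev/drbd0;
        disk      /dev/hda6;       # partition holding the IMAP dirs
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on mail2 {
        device    /dev/drbd0;
        disk      /dev/hda6;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }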


NH> Obviously a lot of this can be done on a more modest, or more
NH> extravagant, scale, as the needs of your platform are very definitely
NH> going to be different. You can get external storage really cheap these
NH> days using arrays with internal IDE drives and external SCSI interfaces
NH> (I bought a 1.2TB raw unit for under $7K USD recently).
Hell :) .... I'm talking about 700 accounts with about 8 GB of IMAP
dirs... I thought about mirroring onto 40 GB IDE drives... that will
last for the next decade :)


NH> You can build an LVS load-balanced
NH> cluster with pretty low-end hardware that'll keep up with full 100Mbps
NH> line speed.
Thanks for describing how it should be :)
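
(for completeness, the LVS part really is just a handful of ipvsadm calls
on the director - the virtual IP and real-server addresses below are made
up)

    # round-robin the POP3 virtual service across two real servers (NAT mode)
    ipvsadm -A -t 192.168.1.50:110 -s rr
    ipvsadm -a -t 192.168.1.50:110 -r 10.0.0.11:110 -m
    ipvsadm -a -t 192.168.1.50:110 -r 10.0.0.12:110 -m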

bye
 tom

