Tren Blackburn wrote:
I've used it in several 2-node server clusters (even a 3-node, but that's a weird setup ;)

When I ran active/active, the issues I had were rarely with drbd but with ocfs2 (I haven't tried gfs2). Since I don't have any *real* need for active/active, I switched to active/passive with each server falling back to the other...it's been working wonderfully with xfs for ages.
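
Roughly, a two-node setup is just one resource in /etc/drbd.conf; a minimal sketch, with hostnames, disks, and IPs made up (check the docs for your drbd version):

    resource r0 {
      protocol C;                  # synchronous replication; safest for a mail store
      on server1 {
        device    /dev/drbd0;      # the replicated block device you actually mount
        disk      /dev/sdb1;       # local backing partition
        address   10.0.0.1:7788;   # replication link
        meta-disk internal;
      }
      on server2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }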

One note: you may need to re-compile your kernel to use 8k kernel stacks instead of 4k, due to the depth of block layers you might be running (it's under the Kernel Hacking options...I might have the name wrong, but you should be able to spot it).
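
If your distro ships the kernel config, you can check what you're running now; assuming I've got the name right, the option is CONFIG_4KSTACKS (i386, under Kernel Hacking):

    $ grep 4KSTACKS /boot/config-$(uname -r)
    # CONFIG_4KSTACKS is not set

"is not set" means you already have 8k stacks and don't need to rebuild.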

Also, I haven't run it over a WAN link, but if you've got a gig link between centres I doubt you'll have many issues.

Hope the info helps a bit, and if ya have any more specific questions let me know on or off list.

t.

----- Original Message -----
From: DAve <[EMAIL PROTECTED]>
To: vchkpw@inter7.com <vchkpw@inter7.com>
Sent: Thu Feb 01 12:21:27 2007
Subject: Re: [vchkpw] NFS Clustering

Tren Blackburn wrote:
 > Another option possibly is DRBD (http://www.drbd.org). Version 8 is
 > nearing gold, which will allow concurrent access to 2 block devices.
 >
 > Of course, I'm not entirely sure if this is a linux only solution and am
 > only tossing it out as an option for those looking for a cheaper/free
 > solution ;)

Looks Linux-only, but that is fine. Best OS for the job, I say; no torch
to carry here. We run a lot of FreeBSD, but Linux, Windows, and Solaris
as well.

I like the looks of it. Do you have any experience with drbd?

DAve

 >
 > t.
 >
 >> -----Original Message-----
 >> From: DAve [mailto:[EMAIL PROTECTED]
 >> Sent: Tuesday, January 30, 2007 5:49 AM
 >> To: vchkpw@inter7.com
 >> Subject: Re: [vchkpw] NFS Clustering
 >>
 >> Nicholas Harring wrote:
 >>> If you haven't yet bought hardware for the NFS, NetApp makes this a
 >>> snap with SnapMirror. I don't remember all of the ins and outs, but
 >>> it's perfect for situations like this, it's very bandwidth efficient,
 >>> and it's got the same bullet-proof reliability their products are
 >>> known for.
 >>
 >> We currently use a Sun Enterprise 250: old, but stone cold reliable.
 >> I'll need something in the other NOC. I've not looked at NetApp.
 >>
 >>> Otherwise I'd think your SAN vendor should have some form of block
 >>> level replication available.
 >> This is where I am headed. We are looking at SAN/iQ (LeftHand
 >> Networks), which can do volume replication in real time. I am
 >> thinking that an NFS server in each location, sharing an iSCSI
 >> volume, would be worth looking into. Let the SAN keep the two
 >> volumes in sync and let NFS handle the multiple access to the volume.
 >>
 >>> Hope that helps,
 >> Absolutely, thanks.
 >>
 >> DAve
 >>
 >>> Nick
 >>>
 >>> -----Original Message-----
 >>> From: DAve [mailto:[EMAIL PROTECTED]
 >>> Sent: Monday, January 29, 2007 3:59 PM
 >>> To: vpopmail
 >>> Subject: [vchkpw] NFS Clustering
 >>>
 >>> Good afternoon/evening/morning,
 >>>
 >>> We have been tasked with splitting our mail services between our two
 >>> NOCs. We have ordered a 1Gb fiber connection between both locations.
 >>>
 >>> We will be moving one of two mail gateways, two of four pop toasters,
 >>> and one of two smtp servers to the second NOC. Both border routers
 >>> will be BGP advertising the same IP range and each location will have
 >>> hardware load balancing. I can easily set up replication for my MySQL
 >>> backend, but my NFS mail store is another concern.
 >>>
 >>> Is anyone else working with this type of configuration? I've not yet
 >>> looked into NFS clustering or what may be involved. (I will have an
 >>> iSCSI-based SAN available which will have nodes/modules in both
 >>> geographical locations, which may help.)
 >>>
 >>> Any advice on what methods/tools work well is appreciated.
 >>>
 >>> Thanks,
 >>>
 >>> DAve


--
Three years now I've asked Google why they don't have a
logo change for Memorial Day. Why do they choose to do logos
for other non-international holidays, but nothing for
Veterans?

Maybe they forgot who made that choice possible.



sigh...top-posted.



i use drbd+linux-ha in a production nfs active/passive failover scenario.

the frontend boxes nfs-mount the VIP of the drbd box (linux-ha takes care of which box gets the VIP via the IPFail2 module).
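
nothing fancy on the frontend side...they just mount the VIP instead of a physical host. something like this in /etc/fstab (VIP and paths are made up):

    # mount via the floating VIP, not either storage box directly
    10.0.0.50:/export/mail  /var/qmail/mailstore  nfs  rw,hard,intr  0 0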

when there's a failover, the VIP moves to the other box after the heartbeat timeout.

as a note, i made the heartbeat timeout longer than a reboot would take, to avoid a failover just for reboots. the customer finds a 5-minute failover window perfectly acceptable, as the frontend qmail servers will queue the mail until they can write to the nfs mount again.
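
for reference, the knobs for that live in /etc/ha.d/ha.cf...roughly like this (node names, interface, and times are just examples; tune deadtime to your actual reboot time):

    # /etc/ha.d/ha.cf (sketch)
    keepalive 2            # seconds between heartbeats
    warntime 30            # log late heartbeats early
    deadtime 300           # declare the peer dead after 5 min...longer than a reboot
    initdead 600           # extra slack at cluster startup
    bcast eth1             # dedicated crossover link for heartbeats
    node storage1 storage2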

i'm interested in the active/active in drbd 0.8...i haven't tested it yet.
also, you must purchase the licensed version of drbd to have more than 2 servers in the storage cluster. this would be ideal for having the 3rd box in a DR location, but the customer doesn't need this yet; in fact it's kinda cost prohibitive.

also, you can fail over mysql as long as the database lives on the drbd-backed partition. in this config it's imperative that you keep the storage "peers" identical as far as software goes (duh?).
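
with the old v1-style config that's just one more entry in the resource group in /etc/ha.d/haresources, e.g. (device, mount point, VIP, and init script names made up):

    # everything in the group fails over together, left to right
    storage1 drbddisk::r0 Filesystem::/dev/drbd0::/data::xfs \
        IPaddr::10.0.0.50 nfs-kernel-server mysql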

at the end of the day we are going to move the mail to a rhel+gfs fiber san anyway, but drbd is awesome and i recommend it for redundant nfs.

that's my 2 cents on commodity replication.



--
aichains
