For what it's worth, a lot of HA specialists have spent a lot of time
trying to find the simplest _reliable_ way to do multi-site/geo-replicated
HA. I am certain you'll find a simpler solution, but I would also wager
that, when it counts, it's going to let you down.

The only way to make things simpler is to start making assumptions, and
if you do that, at some point you will end up with a split-brain (both
sites thinking the other is gone and trying to take the primary role),
or with both sites thinking the other is running when neither is. Add
shared storage to the mix, and there's a high chance you will corrupt
data when you need it most.

Of course, there's always a chance you'll come up with a system no one
else has thought of; just be aware of what you know and what you don't.
HA is fun, in large part, because it's a challenge to get right.

digimer

On 2020-04-17 4:43 p.m., Daniel Smith wrote:
> We only have 1 cluster per site, so adding additional hardware is not
> optimal. I feel like I'm trying to use a saw where an axe would be the
> proper tool. I thank you for your time, but it appears that it may be
> best for me to write something from scratch for the monitoring and
> controlling of the failover rather than try to force pacemaker to do
> something it was not built for.
> 
> Daniel Smith
> Network Engineer
> 15894 Diplomatic Plaza Dr | Houston, TX 77032
> P: 281-233-8487 | M: 832-301-1087
> daniel.sm...@craneww.com
> https://craneww.com/
> 
> 
> -----Original Message-----
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Friday, April 17, 2020 2:38 PM
> To: Daniel Smith <daniel.sm...@craneww.com>; Cluster Labs - Users
> <users@clusterlabs.org>
> Subject: Re: NFS in different subnets
> 
> On 2020-04-17 3:20 p.m., Daniel Smith wrote:
>> Thank you digimer, and I apologize for getting the wrong email.
>>
>>
>>
>> Booth was the piece I was missing. I have been researching setting
>> that up and finding a third location for quorum. From what I have
>> found, I believe I will need to set up single-node pacemaker clusters at each
> 
> No, each site needs to be a proper cluster (2 nodes minimum). The idea
> is that, if the link to a site is lost, the cluster at that site will
> shut down. With only one node, a hung node that later recovers could
> think it is still allowed to act before it realizes it shouldn't. Booth
> is "a cluster of clusters".
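> 
> Roughly, a booth.conf for two sites plus a third-site arbitrator looks
> like this (the addresses and ticket name below are made up):
> 
>   transport  = UDP
>   port       = 9929
>   arbitrator = 192.0.2.10      # small host at a third location
>   site       = 10.1.0.100      # booth address at site A
>   site       = 10.2.0.100      # booth address at site B
>   ticket     = "ticket-nfs"
> 
> Each cluster then ties its resources to the ticket with a ticket
> constraint, something like "pcs constraint ticket add ticket-nfs
> nfs-group loss-policy=stop" (resource name made up).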
> 
> The nodes at each site should be on different hardware, for the same
> reason. It is very much NOT a waste of resources (and, of course, use
> proper, tested STONITH/fencing).
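> 
> For ESX-hosted nodes that usually means fence_vmware_soap against
> vCenter. A rough sketch for one site (hostnames, credentials and VM
> names are placeholders; check "pcs stonith describe fence_vmware_soap"
> for the exact parameter names in your fence-agents version):
> 
>   pcs stonith create fence-site-a fence_vmware_soap \
>       ipaddr=vcenter-a.example.com login=fenceuser passwd=secret \
>       ssl=1 ssl_insecure=1 \
>       pcmk_host_map="node-a1:vm-node-a1;node-a2:vm-node-a2"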
> 
>> datacenter to use with booth. Since we have ESX clusters at each site,
>> each with its own redundancies built in, building redundant nodes at
>> each site is pretty much a waste of resources imho. I have 2 questions
>> about this setup though:
>>
>> 1. If I set up pacemaker with a single node and no virtual IP, are
>> there any problems I need to be aware of?
> 
> Yes, see above.
> 
>> 2. Is DRBD the best tool for the data sync between the sites? I've
>> looked at DRBD Proxy, but I get the sense that it's not open source.
>> Or would rsync with incrond be a better option?
> 
> DRBD would work, but you have to make a choice: if you run synchronously,
> so that data is never lost (writes are confirmed when they hit both
> sites), then your disk latency/bandwidth is your network
> latency/bandwidth. Otherwise, you run asynchronously, but you'll lose
> any data that didn't get transmitted before a site is lost.
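> 
> The choice is basically just the protocol line in the resource config.
> Something along these lines, in 8.4-style syntax (hostnames, devices
> and addresses below are made up):
> 
>   resource r0 {
>       net {
>           protocol C;    # synchronous: write confirmed once it is on both sites
>           # protocol A;  # asynchronous: local-speed writes, in-flight data can be lost
>       }
>       device    /dev/drbd0;
>       disk      /dev/sdb1;
>       meta-disk internal;
>       on nfs-a {
>           address 10.1.0.10:7788;
>       }
>       on nfs-b {
>           address 10.2.0.10:7788;
>       }
>   }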
> 
> As for Proxy: yes, it's a commercial add-on. If protocol A (async)
> replication can't buffer the data to be transmitted (because the data
> is changing faster than it can be flushed out), DRBD Proxy provides a
> MUCH larger send buffer. It's specifically designed for long-throw
> asynchronous replication.
> 
>> I already made a script that runs at network startup and updates DNS
>> using nsupdate, so I would think it should be easy to create a
>> resource based on it.
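> 
> For reference, that kind of nsupdate call usually boils down to
> something like this (server, key file, record names and TTL below are
> made up):
> 
>   nsupdate -k /etc/dns-update.key <<EOF
>   server ns1.example.com
>   zone example.com
>   update delete nfs.example.com. A
>   update add nfs.example.com. 60 A 10.2.0.50
>   send
>   EOF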
> 
> Yes, RAs are fairly simple to write. See:
> 
> https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md
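> 
> A bare-bones sketch of such an agent (untested; the update-dns.sh
> helper and its flags stand in for your existing script):
> 
>   #!/bin/sh
>   : ${OCF_ROOT:=/usr/lib/ocf}
>   . ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs
> 
>   case "$1" in
>       start)
>           # point the DNS record at this site
>           /usr/local/sbin/update-dns.sh --take && exit $OCF_SUCCESS
>           exit $OCF_ERR_GENERIC
>           ;;
>       stop)
>           # nothing to tear down locally; report stopped
>           exit $OCF_SUCCESS
>           ;;
>       monitor)
>           # "running" only if the record currently points at this site
>           /usr/local/sbin/update-dns.sh --check && exit $OCF_SUCCESS
>           exit $OCF_NOT_RUNNING
>           ;;
>       meta-data)
>           # a real agent must print the XML metadata described in the
>           # spec linked above
>           exit $OCF_SUCCESS
>           ;;
>       *)
>           exit $OCF_ERR_UNIMPLEMENTED
>           ;;
>   esac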
> 
> digimer
> 
> 
> --
> Digimer
> Papers and Projects:
> https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein's brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould


-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould