Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Daniel Smith


We only have 1 cluster per site, so adding additional hardware is not optimal. I feel like I'm trying to use a saw where an axe would be the proper tool. Thank you for your time, but it appears it may be best for me to write something from scratch to monitor and control the failover, rather than try to force Pacemaker to do something it was not built for.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com
-----Original Message-----
From: Digimer [mailto:li...@alteeve.ca] 
Sent: Friday, April 17, 2020 2:38 PM
To: Daniel Smith ; Cluster Labs - Users 
Subject: Re: NFS in different subnets



On 2020-04-17 3:20 p.m., Daniel Smith wrote:
> Thank you digimer, and I apologize for getting the wrong email.
>
>
>
> Booth was the piece I was missing.  I have been researching setting that
> up and finding a third location for quorum. From what I have found, I
> believe I will need to set up single-node Pacemaker clusters at each

No, each site needs to be a proper cluster (2 nodes minimum). The idea is that, if the link to a building is lost, the cluster at the lost site will shut itself down. With only one node, a hung node might recover later and think it could still do things before realizing it shouldn't. Booth is "a cluster of clusters".
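
A minimal booth.conf, with placeholder IPs (one address per site plus the third-site arbitrator), looks roughly like this:

# /etc/booth/booth.conf (same file on both sites and the arbitrator)
transport = UDP
port = 9929
arbitrator = 10.0.3.10     # third-site quorum node (placeholder IP)
site = 10.0.1.5            # DC01 (placeholder IP)
site = 10.0.2.5            # DC02 (placeholder IP)
ticket = "nfs-ticket"
    expire = 600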

The nodes at each site should be on different hardware, for the same reason. It is very much NOT a waste of resources (and, of course, use proper, tested STONITH/fencing).
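
Since you're on ESX, fence_vmware_rest is a common choice. Roughly, with pcs (credentials, hostnames and VM names are placeholders):

pcs stonith create vmfence fence_vmware_rest \
    ip=vcenter.example.com username=fence-user password=secret \
    ssl_insecure=1 pcmk_host_map="node1:node1-vm;node2:node2-vm"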

> datacenter to use with Booth. Since we have ESX clusters at each site,
> which have their own redundancies built in, building redundant nodes at
> each site is pretty much a waste of resources imho. I have 2 questions
> about this setup though:
>
> 1.   If I set up Pacemaker with a single node and no virtual IP, are
> there any problems I need to be aware of?

Yes, see above.

> 2.   Is DRBD the best tool for syncing data between the sites? I've
> looked at DRBD Proxy, but I get the sense that it's not open source. Or
> would rsync with incrond be a better option?

DRBD would work, but you have to make a choice: if you run synchronous (protocol C, where writes are confirmed only once they hit both sites) so that data is never lost, then your disk latency/bandwidth is capped by your network latency/bandwidth. Otherwise, you run asynchronous (protocol A), but you'll lose any data that wasn't transmitted before a site is lost.
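
As a sketch, using your hostnames/IPs (the DRBD device and backing-disk paths are examples):

# /etc/drbd.d/nfs.res
resource nfs {
    protocol A;              # async; use C for fully synchronous writes
    device    /dev/drbd0;
    disk      /dev/vg0/nfs;  # backing device, adjust to your storage
    meta-disk internal;
    on DC01-NFS01 { address 10.0.1.10:7789; }
    on DC02-NFS01 { address 10.0.2.10:7789; }
}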

As for Proxy: yes, it's a commercial add-on. If protocol A (async) replication can't buffer the data to be transmitted (because the data is changing faster than it can be flushed out), DRBD Proxy provides a MUCH larger send buffer. It's specifically designed for long-throw asynchronous replication.

> I already made a script that runs at network startup and updates DNS
> using nsupdate, so it should be easy to create a resource based on it, I
> would think.

Yes, RAs are fairly simple to write. See:

https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md
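
As a rough, untested sketch (paths, names and the monitor logic are placeholders), an agent wrapping your nsupdate script only needs start/stop/monitor/meta-data:

#!/bin/sh
# dns-update: bare-bones OCF RA sketch wrapping an nsupdate script.
: ${OCF_ROOT=/usr/lib/ocf}
. ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs

STATE="${HA_RSCTMP}/dns-update.state"

case "$1" in
start)
    # call your existing nsupdate script (path is an example)
    /usr/local/sbin/update-dns.sh || exit $OCF_ERR_GENERIC
    touch "$STATE"
    exit $OCF_SUCCESS
    ;;
stop)
    rm -f "$STATE"
    exit $OCF_SUCCESS
    ;;
monitor)
    # a real agent would verify the record (e.g. with dig) instead
    [ -f "$STATE" ] && exit $OCF_SUCCESS
    exit $OCF_NOT_RUNNING
    ;;
meta-data)
    cat <<'EOF'
<?xml version="1.0"?>
<resource-agent name="dns-update">
  <version>1.0</version>
  <shortdesc lang="en">Update DNS via nsupdate</shortdesc>
  <longdesc lang="en">Sketch of a DNS-updating resource agent.</longdesc>
  <parameters/>
  <actions>
    <action name="start" timeout="20s"/>
    <action name="stop" timeout="20s"/>
    <action name="monitor" timeout="20s" interval="60s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
EOF
    exit $OCF_SUCCESS
    ;;
*)
    exit $OCF_ERR_UNIMPLEMENTED
    ;;
esac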

digimer


--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops." - Stephen Jay Gould





Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Daniel Smith
Thank you digimer, and I apologize for getting the wrong email.

Booth was the piece I was missing.  I have been researching setting that up and
finding a third location for quorum. From what I have found, I believe I will
need to set up single-node Pacemaker clusters at each datacenter to use with
Booth. Since we have ESX clusters at each site, which have their own
redundancies built in, building redundant nodes at each site is pretty much a
waste of resources imho. I have 2 questions about this setup though:

1.   If I set up Pacemaker with a single node and no virtual IP, are there any
problems I need to be aware of?

2.   Is DRBD the best tool for syncing data between the sites? I've looked at
DRBD Proxy, but I get the sense that it's not open source. Or would rsync with
incrond be a better option?

I already made a script that runs at network startup and updates DNS using
nsupdate, so it should be easy to create a resource based on it, I would
think.
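
For reference, the nsupdate part boils down to a batch like this (the key path and server name here are illustrative, not our real ones):

nsupdate -k /etc/named/ddns.key <<EOF
server ns1.domain.local
zone domain.local
update delete nfs01.domain.local. A
update add nfs01.domain.local. 60 A 10.0.1.10
send
EOF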

From: Digimer [mailto:li...@alteeve.ca]
Sent: Friday, April 17, 2020 11:01 AM
To: Daniel Smith ; Cluster Labs - Users 

Subject: Re: NFS in different subnets



Hi Daniel,

  You sent this to clusterlabs owners instead of users. I've changed the CC to 
send it to the list for a larger discussion.

  The biggest problem with stretch clustering is knowing the difference between 
a link fault and a site loss. Pacemaker Booth was designed to solve this 
problem by using a "cluster of clusters". The logic is that, if a site is 
lost, an arbiter node at a third site can decide which site should live, and 
trust that the lost site will either behave sensibly (because it is itself a 
cluster) or has been destroyed.

  After this, it just becomes a question of implementation details. Having the 
master side update a DNS entry should be fine (though you may need to write a 
small resource agent to do it; I'm not sure one exists for DNS yet). Tying 
resources to the booth ticket is a single constraint, as sketched below.
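
For example, with pcs (the ticket and resource-group names are placeholders):

pcs booth ticket add nfs-ticket
pcs constraint ticket add nfs-ticket nfs-group loss-policy=stop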

digimer
On 2020-04-17 11:44 a.m., Daniel Smith wrote:
I have been searching for how to customize Pacemaker to manage NFS servers in 
separate datacenters, but I am mostly finding older material suggesting this 
is a bad idea, and not much on how to do it without a single IP being moved 
back and forth. If this isn't the best tool, please let me know; otherwise, 
here is the setup I am trying to build, in case someone can point me to 
information on the best way to do it.

Server 1:  DC01-NFS01, 10.0.1.10/24
Server 2:  DC02-NFS01, 10.0.2.10/24
NFS share:   nfs01.domain.local:/opt/nfsmounts, using DRBD to sync between datacenters
DC01 and DC02 have a 2Gb layer-2 connection between the datacenters

I would like Pacemaker to manage the NFS services on both systems in an 
active/passive setup, updating the DNS servers with the active server's IP for 
nfs01.domain.local. Eventually we will have a virtual switch in VMware that I 
would like Pacemaker to update, but for now the delay in DNS updates will be 
acceptable for failover.

Thank you in advance for any help you can provide.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com<mailto:daniel.sm...@craneww.com>

--

Digimer

Papers and Projects: https://alteeve.com/w/

"I am, somehow, less interested in the weight and convolutions of Einstein's 
brain than in the near certainty that people of equal talent have lived and 
died in cotton fields and sweatshops." - Stephen Jay Gould


[ClusterLabs] NFS in different subnets

2020-04-17 Thread Daniel Smith
I have been searching for how to customize Pacemaker to manage NFS servers in 
separate datacenters, but I am mostly finding older material suggesting this 
is a bad idea, and not much on how to do it without a single IP being moved 
back and forth. If this isn't the best tool, please let me know; otherwise, 
here is the setup I am trying to build, in case someone can point me to 
information on the best way to do it.

Server 1:  DC01-NFS01, 10.0.1.10/24
Server 2:  DC02-NFS01, 10.0.2.10/24
NFS share:   nfs01.domain.local:/opt/nfsmounts, using DRBD to sync between datacenters
DC01 and DC02 have a 2Gb layer-2 connection between the datacenters

I would like Pacemaker to manage the NFS services on both systems in an 
active/passive setup, updating the DNS servers with the active server's IP for 
nfs01.domain.local. Eventually we will have a virtual switch in VMware that I 
would like Pacemaker to update, but for now the delay in DNS updates will be 
acceptable for failover.
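
On the Pacemaker side, I picture a resource group on top of DRBD along these lines (the agent parameters here are illustrative):

pcs resource create nfs_fs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/opt/nfsmounts fstype=ext4
pcs resource create nfs_daemon ocf:heartbeat:nfsserver \
    nfs_shared_infodir=/opt/nfsmounts/nfsinfo
pcs resource create nfs_export ocf:heartbeat:exportfs \
    clientspec=10.0.0.0/16 options=rw,sync directory=/opt/nfsmounts fsid=1
pcs resource group add nfs-group nfs_fs nfs_daemon nfs_export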

Thank you in advance for any help you can provide.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com<mailto:daniel.sm...@craneww.com>
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/