Re: [ClusterLabs] NFS in different subnets

2020-04-18 Thread Digimer
On 2020-04-18 2:48 a.m., Strahil Nikolov wrote:
> On April 18, 2020 8:43:51 AM GMT+03:00, Digimer  wrote:
>> For what it's worth; A lot of HA specialists spent a lot of time trying
>> to find the simplest _reliable_ way to do multi-site/geo-replicated HA.
>> I am certain you'll find a simpler solution, but I would also wager
>> that
>> when it counts, it's going to let you down.
>>
>> The only way to make things simpler is to start making assumptions, and
>> if you do that, at some point you will end up with a split-brain (both
>> sites thinking the other is gone and trying to take the primary role)
>> or
>> both sites will think the other is running, and neither will be. Add
>> shared storage to the mix, and there's a high chance you will corrupt
>> data when you need it most.
>>
>> Of course, there's always a chance you'll come up with a system no one
>> else has thought of, just be aware of what you know and what you don't.
>> HA is fun, in big part, because it's a challenge to get right.
>>
>> digimer
>>
> 
> There's something I don't get.
> 
> Why can't this be done?
> 
> One node is in site A, one in site B, and qnet is at a third location. Routing
> between the two subnets is established and symmetrical.
> Fencing via IPMI or SBD (for example, from an HA iSCSI cluster) is configured.
> 
> The NFS resource is started on one node, and a special RA is used for the DNS
> records. If node1 dies, the cluster will fence it and node2 will bring up
> NFS and update the records.
> 
> Of course, updating DNS from only one side must work for both sites.
> 
> Best Regards,
> Strahil Nikolov

It comes down to differentiating between a link loss to a site versus
the destruction/loss of the site. In either case, you can't fence the
lost node, so what do you do? If you decide that you don't need to fence
it, then you face all the issues of any other normal cluster with broken
or missing fencing. It's just a question of time before an assumption turns
out to be wrong and you end up with split-brain, data divergence, or data loss.

Booth was designed the way it was precisely to solve this problem, by
forming "a cluster of clusters". If a site is lost because of a comms
break, you can trust the cluster at that site to act in a predictable
way. This is only possible because the site is a self-contained HA
cluster, so it can be confidently assumed that it will shut down its
services when it loses contact with the peer and quorum sites.
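
For reference only, a rough sketch of what such a Booth setup can look like
in /etc/booth/booth.conf (the IPs, port, and ticket name here are hypothetical,
not taken from this thread):

  transport = UDP
  port = 9929
  # arbitrator at the third location breaks ties between the two sites
  arbitrator = 10.0.3.10
  # one booth daemon per site, each backed by that site's own HA cluster
  site = 10.0.1.5
  site = 10.0.2.5
  ticket = "nfs-ticket"
      expire = 600
      timeout = 10
      retries = 5

Each site's Pacemaker cluster then ties the NFS resources to the ticket with
an rsc_ticket constraint, so a site that cannot hold (or renew) the ticket
stops those resources on its own.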

The only safe way to operate without this setup over a stretch cluster
is to accept that a comms loss or site loss hangs the cluster until a
human intervenes, but at that point it's not really HA any more.

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] NFS in different subnets

2020-04-18 Thread Strahil Nikolov
On April 18, 2020 8:43:51 AM GMT+03:00, Digimer  wrote:
>For what it's worth; A lot of HA specialists spent a lot of time trying
>to find the simplest _reliable_ way to do multi-site/geo-replicated HA.
>I am certain you'll find a simpler solution, but I would also wager
>that
>when it counts, it's going to let you down.
>
>The only way to make things simpler is to start making assumptions, and
>if you do that, at some point you will end up with a split-brain (both
>sites thinking the other is gone and trying to take the primary role)
>or
>both sites will think the other is running, and neither will be. Add
>shared storage to the mix, and there's a high chance you will corrupt
>data when you need it most.
>
>Of course, there's always a chance you'll come up with a system no one
>else has thought of, just be aware of what you know and what you don't.
>HA is fun, in big part, because it's a challenge to get right.
>
>digimer
>
>On 2020-04-17 4:43 p.m., Daniel Smith wrote:
>> We only have 1 cluster per site so adding additional hardware is not
>> optimal. I feel like I'm trying to use a saw where an axe would be the
>> proper tool. I thank you for your time, but it appears that it may be
>> best for me to write something from scratch for the monitoring and
>> controlling of the failover rather than try and force pacemaker to do
>> something it was not built for.
>> 
>> Daniel Smith
>> Network Engineer
>> 15894 Diplomatic Plaza Dr | Houston, TX 77032
>> P: 281-233-8487 | M: 832-301-1087
>> daniel.sm...@craneww.com 
>> 
>> 
>> 
>> -Original Message-
>> From: Digimer [mailto:li...@alteeve.ca]
>> Sent: Friday, April 17, 2020 2:38 PM
>> To: Daniel Smith ; Cluster Labs - Users
>> 
>> Subject: Re: NFS in different subnets
>> 
>> EXTERNAL SENDER: Use caution with links/attachments.
>> 
>> 
>> On 2020-04-17 3:20 p.m., Daniel Smith wrote:
>>> Thank you digimer, and I apologize for getting the wrong email.
>>>
>>>
>>>
>>> Booth was the piece I was missing.  Have been researching setting that
>>> up and finding a third location for quorum. From what I have found, I
>>> believe I will need to set up single-node pacemaker clusters at each
>> 
>> No, each site needs to be a proper cluster (2 nodes minimum). The idea
>> is that, if the link to the building is lost, the cluster at the lost
>> site will shut down. With only one node, a hung node could recover
>> later and think it can still do things before it realizes it
>> shouldn't. Booth is "a cluster of clusters".
>> 
>> The nodes at each site should be on different hardware, for the same
>> reason. It is very much NOT a waste of resources (and, of course, use
>> proper, tested STONITH/fencing).
>> 
>>> datacenter to use with booth. Since we have ESX clusters at each site,
>>> which have their own redundancies built in, building redundant nodes at
>>> each site is pretty much a waste of resources imho. I have 2 questions
>>> about this setup though:
>>>
>>> 1.   If I set up pacemaker with a single node and no virtual IP, are
>>> there any problems I need to be aware of?
>> 
>> Yes, see above.
>> 
>>> 2.   Is drbd the best tool for the data sync between the sites? I've
>>> looked at drbd proxy, but I get the sense that it's not open source.
>>> Or would rsync with incrond be a better option?
>> 
>> DRBD would work, but you have to make a choice; If you run synchronous
>> so that data is never lost (writes are confirmed when they hit both
>> sites), then your disk latency/bandwidth is your network
>> latency/bandwidth. Otherwise, you run asynchronous but you'll lose any
>> data that didn't get transmitted before a site is lost.
>> 
>> As for proxy; Yes, it's a commercial add-on. If protocol A (async)
>> replication can't buffer the data to be transmitted (because the data is
>> changing faster than it can be flushed out), DRBD proxy provides a
>> system to have a MUCH larger send cache. It's specifically designed for
>> long-throw asynchronous replication.
>> 
>>> I already made a script that executes with the network startup that
>>> updates DNS using nsupdate so that should be easy to create a resource
>>> based on it I would think.
>> 
>> Yes, RAs are fairly simple to write. See:
>> 
>> https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md
>> 
>> digimer
>> 
>> 
>> --
>> Digimer
>> Papers and Projects: https://alteeve.com/w/
>> "I am, somehow, less interested in the weight and convolutions of
>> Einstein's brain than in the 

Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Digimer
For what it's worth; A lot of HA specialists spent a lot of time trying
to find the simplest _reliable_ way to do multi-site/geo-replicated HA.
I am certain you'll find a simpler solution, but I would also wager that
when it counts, it's going to let you down.

The only way to make things simpler is to start making assumptions, and
if you do that, at some point you will end up with a split-brain (both
sites thinking the other is gone and trying to take the primary role) or
both sites will think the other is running, and neither will be. Add
shared storage to the mix, and there's a high chance you will corrupt
data when you need it most.

Of course, there's always a chance you'll come up with a system no one
else has thought of, just be aware of what you know and what you don't.
HA is fun, in big part, because it's a challenge to get right.

digimer

On 2020-04-17 4:43 p.m., Daniel Smith wrote:
> We only have 1 cluster per site so adding additional hardware is not
> optimal. I feel like I'm trying to use a saw where an axe would be the
> proper tool. I thank you for your time, but it appears that it may be
> best for me to write something from scratch for the monitoring and
> controlling of the failover rather than try and force pacemaker to do
> something it was not built for.
> 
> Daniel Smith
> Network Engineer
> 15894 Diplomatic Plaza Dr | Houston, TX 77032
> P: 281-233-8487 | M: 832-301-1087
> daniel.sm...@craneww.com 
> 
> 
> 
> -Original Message-
> From: Digimer [mailto:li...@alteeve.ca]
> Sent: Friday, April 17, 2020 2:38 PM
> To: Daniel Smith ; Cluster Labs - Users
> 
> Subject: Re: NFS in different subnets
> 
> EXTERNAL SENDER: Use caution with links/attachments.
> 
> 
> On 2020-04-17 3:20 p.m., Daniel Smith wrote:
>> Thank you digimer, and I apologize for getting the wrong email.
>>
>>
>>
>> Booth was the piece I was missing.  Have been researching setting that
>> up and finding a third location for quorum. From what I have found, I
>> believe I will need to set up single-node pacemaker clusters at each
> 
> No, each site needs to be a proper cluster (2 nodes minimum). The idea
> is that, if the link to the building is lost, the cluster at the lost
> site will shut down. With only one node, a hung node could recover
> later and think it can still do things before it realizes it
> shouldn't. Booth is "a cluster of clusters".
> 
> The nodes at each site should be on different hardware, for the same
> reason. It is very much NOT a waste of resources (and, of course, use
> proper, tested STONITH/fencing).
> 
>> datacenter to use with booth. Since we have ESX clusters at each site,
>> which have their own redundancies built in, building redundant nodes at
>> each site is pretty much a waste of resources imho. I have 2 questions
>> about this setup though:
>>
>> 1.   If I set up pacemaker with a single node and no virtual IP, are
>> there any problems I need to be aware of?
> 
> Yes, see above.
> 
>> 2.   Is drbd the best tool for the data sync between the sites? I've
>> looked at drbd proxy, but I get the sense that it's not open source.
>> Or would rsync with incrond be a better option?
> 
> DRBD would work, but you have to make a choice; If you run synchronous
> so that data is never lost (writes are confirmed when they hit both
> sites), then your disk latency/bandwidth is your network
> latency/bandwidth. Otherwise, you run asynchronous but you'll lose any
> data that didn't get transmitted before a site is lost.
> 
> As for proxy; Yes, it's a commercial add-on. If protocol A (async)
> replication can't buffer the data to be transmitted (because the data is
> changing faster than it can be flushed out), DRBD proxy provides a
> system to have a MUCH larger send cache. It's specifically designed for
> long-throw asynchronous replication.
> 
>> I already made a script that executes with the network startup that
>> updates DNS using nsupdate so that should be easy to create a resource
>> based on it I would think.
> 
> Yes, RAs are fairly simple to write. See:
> 
> https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md
> 
> digimer
> 
> 
> --
> Digimer
> Papers and Projects:
> https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein's brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould


-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I 

Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Daniel Smith


We only have 1 cluster per site so adding additional hardware is not optimal. I feel like I'm trying to use a saw where an axe would be the proper tool. I thank you for your time, but it appears that it may be best for me to write something from scratch for the monitoring and controlling of the failover rather than try and force pacemaker to do something it was not built for.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com
-Original Message-
From: Digimer [mailto:li...@alteeve.ca] 
Sent: Friday, April 17, 2020 2:38 PM
To: Daniel Smith ; Cluster Labs - Users 
Subject: Re: NFS in different subnets

EXTERNAL SENDER: Use caution with links/attachments.


On 2020-04-17 3:20 p.m., Daniel Smith wrote:
> Thank you digimer, and I apologize for getting the wrong email.
>
>
>
> Booth was the piece I was missing.  Have been researching setting that 
> up and finding a third location for quorum. From what I have found, I 
> believe I will need to set up single-node pacemaker clusters at each

No, each site needs to be a proper cluster (2 nodes minimum). The idea is that, if the link to the building is lost, the cluster at the lost site will shut down. With only one node, a hung node could recover
later and think it can still do things before it realizes it shouldn't. Booth is "a cluster of clusters".

The nodes at each site should be on different hardware, for the same reason. It is very much NOT a waste of resources (and, of course, use proper, tested STONITH/fencing).

> datacenter to use with booth. Since we have ESX clusters at each site, 
> which have their own redundancies built in, building redundant nodes at 
> each site is pretty much a waste of resources imho. I have 2 questions 
> about this setup though:
>
> 1.   If I set up pacemaker with a single node and no virtual IP, are
> there any problems I need to be aware of?

Yes, see above.

> 2.   Is drbd the best tool for the data sync between the sites? I've
> looked at drbd proxy, but I get the sense that it's not open source. 
> Or would rsync with incrond be a better option?

DRBD would work, but you have to make a choice; If you run synchronous so that data is never lost (writes are confirmed when they hit both sites), then your disk latency/bandwidth is your network latency/bandwidth. Otherwise, you run asynchronous but you'll lose any data that didn't get transmitted before a site is lost.

As for proxy; Yes, it's a commercial add-on. If protocol A (async) replication can't buffer the data to be transmitted (because the data is changing faster than it can be flushed out), DRBD proxy provides a system to have a MUCH larger send cache. It's specifically designed for long-throw asynchronous replication.

> I already made a script that executes with the network startup that 
> updates DNS using nsupdate so that should be easy to create a resource 
> based on it I would think.

Yes, RAs are fairly simple to write. See:

https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md

digimer


--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of Einstein's brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops." - Stephen Jay Gould



___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Digimer
On 2020-04-17 3:20 p.m., Daniel Smith wrote:
> Thank you digimer, and I apologize for getting the wrong email.
> 
>  
> 
> Booth was the piece I was missing.  Have been researching setting that
> up and finding a third location for quorum. From what I have found, I
> believe I will need to set up single-node pacemaker clusters at each

No, each site needs to be a proper cluster (2 nodes minimum). The idea
is that, if the link to the building is lost, the cluster at the lost
site will shut down. With only one node, a hung node could recover
later and think it can still do things before it realizes it
shouldn't. Booth is "a cluster of clusters".

The nodes at each site should be on different hardware, for the same
reason. It is very much NOT a waste of resources (and, of course, use
proper, tested STONITH/fencing).

> datacenter to use with booth. Since we have ESX clusters at each site,
> which have their own redundancies built in, building redundant nodes at
> each site is pretty much a waste of resources imho. I have 2 questions
> about this setup though:
> 
> 1.   If I set up pacemaker with a single node and no virtual IP, are
> there any problems I need to be aware of?

Yes, see above.

> 2.   Is drbd the best tool for the data sync between the sites? I've
> looked at drbd proxy, but I get the sense that it's not open source.
> Or would rsync with incrond be a better option?

DRBD would work, but you have to make a choice; If you run synchronous
so that data is never lost (writes are confirmed when they hit both
sites), then your disk latency/bandwidth is your network
latency/bandwidth. Otherwise, you run asynchronous but you'll lose any
data that didn't get transmitted before a site is lost.

As for proxy; Yes, it's a commercial add-on. If protocol A (async)
replication can't buffer the data to be transmitted (because the data is
changing faster than it can be flushed out), DRBD proxy provides a
system to have a MUCH larger send cache. It's specifically designed for
long-throw asynchronous replication.
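
To make that trade-off concrete, here is a rough DRBD 8.4-style resource
sketch for the two hosts named in this thread. The resource name, backing
device, and port are hypothetical, and the "on" hostnames must match each
node's uname -n; swapping "protocol A" for "protocol C" is exactly the
asynchronous-versus-synchronous choice described above:

  resource nfs01 {
    protocol A;                      # A = async, C = fully synchronous
    device    /dev/drbd0;
    disk      /dev/vg_nfs/exports;   # hypothetical backing LV
    meta-disk internal;
    on dc01-nfs01 {
      address 10.0.1.10:7789;
    }
    on dc02-nfs01 {
      address 10.0.2.10:7789;
    }
  }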

> I already made a script that executes with the network startup that
> updates DNS using nsupdate so that should be easy to create a resource
> based on it I would think.

Yes, RAs are fairly simple to write. See:

https://github.com/ClusterLabs/OCF-spec/blob/master/ra/1.0/resource-agent-api.md
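
As a sketch only (not an existing agent; the zone, record, key file, and the
"ip" parameter are illustrative), a minimal Python resource agent wrapping
nsupdate could be structured roughly like this, with only the start, stop, and
monitor actions shown (a real agent also needs meta-data and validate-all):

#!/usr/bin/env python3
# Illustrative OCF-style resource agent: points an A record at the node
# that currently runs the NFS service, using nsupdate with a TSIG key.
import os
import subprocess
import sys

ZONE = "domain.local"
RECORD = "nfs01.domain.local."
KEYFILE = "/etc/pacemaker/ddns.key"                   # passed to nsupdate -k
MY_IP = os.environ.get("OCF_RESKEY_ip", "10.0.1.10")  # instance parameter "ip"

OCF_SUCCESS, OCF_ERR_GENERIC, OCF_NOT_RUNNING = 0, 1, 7

def nsupdate(batch):
    """Feed a batch of dynamic-update commands to nsupdate on stdin."""
    proc = subprocess.run(["nsupdate", "-k", KEYFILE], input=batch, text=True)
    return OCF_SUCCESS if proc.returncode == 0 else OCF_ERR_GENERIC

def current_a_record():
    """Ask the resolver where the record points right now."""
    out = subprocess.run(["dig", "+short", RECORD], capture_output=True, text=True)
    return out.stdout.strip()

def start():
    return nsupdate(f"zone {ZONE}\n"
                    f"update delete {RECORD} A\n"
                    f"update add {RECORD} 60 A {MY_IP}\n"
                    "send\n")

def stop():
    # Leaving the record in place on stop is a policy choice; the peer's
    # start action overwrites it after failover.
    return OCF_SUCCESS

def monitor():
    return OCF_SUCCESS if current_a_record() == MY_IP else OCF_NOT_RUNNING

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "monitor"
    handlers = {"start": start, "stop": stop, "monitor": monitor}
    sys.exit(handlers.get(action, lambda: OCF_ERR_GENERIC)())

Pacemaker calls the agent with the action (start, stop, monitor, meta-data,
...) as its first argument and passes instance parameters as OCF_RESKEY_*
environment variables; the spec linked above describes the full contract.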

digimer


-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Daniel Smith
Thank you digimer, and I apologize for getting the wrong email.

Booth was the piece I was missing.  Have been researching setting that up and 
finding a third location for quorum. From what I have found, I believe I will 
need to set up single-node pacemaker clusters at each datacenter to use with 
booth. Since we have ESX clusters at each site, which have their own redundancies 
built in, building redundant nodes at each site is pretty much a waste of 
resources imho. I have 2 questions about this setup though:

1.   If I set up pacemaker with a single node and no virtual IP, are there any 
problems I need to be aware of?

2.   Is drbd the best tool for the data sync between the sites? I've looked 
at drbd proxy, but I get the sense that it's not open source. Or would rsync 
with incrond be a better option?

I already made a script that executes with the network startup that updates DNS 
using nsupdate so that should be easy to create a resource based on it I would 
think.

From: Digimer [mailto:li...@alteeve.ca]
Sent: Friday, April 17, 2020 11:01 AM
To: Daniel Smith ; Cluster Labs - Users 

Subject: Re: NFS in different subnets

EXTERNAL SENDER: Use caution with links/attachments.


Hi Daniel,

  You sent this to clusterlabs owners, instead of users. I've changed the CC to 
send it to the list for a larger discussion.

  The biggest problem with stretch clustering is knowing the difference between 
a link fault and a site loss. Pacemaker Booth was designed to solve this 
problem by using a "cluster of clusters": if a site is lost, an arbiter node 
at a third site can decide which site should live, trusting that the lost side 
will either behave sensibly (because it is itself a cluster) or has been 
destroyed.

  After this, it just becomes a question of implementation details. Having the 
master side update a DNS entry should be fine (though you may need to write a 
small resource agent to do it; not sure if one exists for DNS yet).

digimer
On 2020-04-17 11:44 a.m., Daniel Smith wrote:
I have been searching how to customize pacemaker to manage NFS servers in 
separate datacenters, but I am finding older data that suggests this is a bad 
idea and not much information about how to customize it to do this without the 
1 IP being moved back and forth. If this isn't the best tool, please let me 
know, but here is the setup I am trying to do, in case someone can point me to 
some information on the best way to do it.

Server 1:    DC01-NFS01, 10.0.1.10/24
Server 2:    DC02-NFS01, 10.0.2.10/24
NFS share:   nfs01.domain.local:/opt/nfsmounts, using drbd to sync between datacenters
DC01 to DC02 has a 2Gb layer 2 connection between the datacenters

I would like to have pacemaker manage the NFS services on both systems in an 
active/passive setup where it updates the DNS servers with the active server's 
IP for nfs01.domain.local. Eventually, we will have a virtual switch in VMWare 
that I would like pacemaker to update, but for now, the delay in DNS updates 
will be acceptable for failover.

Thank you in advance for any help you can provide.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com

-- 
Digimer
Papers and Projects: https://alteeve.com/w/

"I am, somehow, less interested in the weight and convolutions of Einstein's 
brain than in the near certainty that people of equal talent have lived and 
died in cotton fields and sweatshops." - Stephen Jay Gould
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] NFS in different subnets

2020-04-17 Thread Digimer

  
  
Hi Daniel,

  You sent this to clusterlabs owners, instead of users. I've changed the CC to
send it to the list for a larger discussion.

  The biggest problem with stretch clustering is knowing the difference between
a link fault and a site loss. Pacemaker Booth was designed to solve this
problem by using a "cluster of clusters": if a site is lost, an arbiter node
at a third site can decide which site should live, trusting that the lost side
will either behave sensibly (because it is itself a cluster) or has been
destroyed.

  After this, it just becomes a question of implementation details. Having the
master side update a DNS entry should be fine (though you may need to write a
small resource agent to do it; not sure if one exists for DNS yet).

digimer

On 2020-04-17 11:44 a.m., Daniel Smith wrote:

I have been searching how to customize pacemaker to manage NFS servers in
separate datacenters, but I am finding older data that suggests this is a bad
idea and not much information about how to customize it to do this without the
1 IP being moved back and forth. If this isn't the best tool, please let me
know, but here is the setup I am trying to do, in case someone can point me to
some information on the best way to do it.

Server 1:    DC01-NFS01, 10.0.1.10/24
Server 2:    DC02-NFS01, 10.0.2.10/24
NFS share:   nfs01.domain.local:/opt/nfsmounts, using drbd to sync between datacenters
DC01 to DC02 has a 2Gb layer 2 connection between the datacenters

I would like to have pacemaker manage the NFS services on both systems in an
active/passive setup where it updates the DNS servers with the active server's
IP for nfs01.domain.local. Eventually, we will have a virtual switch in VMWare
that I would like pacemaker to update, but for now, the delay in DNS updates
will be acceptable for failover.

Thank you in advance for any help you can provide.

Daniel Smith
Network Engineer
15894 Diplomatic Plaza Dr | Houston, TX 77032
P: 281-233-8487 | M: 832-301-1087
daniel.sm...@craneww.com

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of Einstein's
brain than in the near certainty that people of equal talent have lived and
died in cotton fields and sweatshops." - Stephen Jay Gould

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/