I agree completely. Are you offering to make those changes? Because they would
expand the capability of the resource agent and would be a welcome addition. Also,
full disclosure, I need to have something in place by the weekend, lol.
From: Ken Gaillot
Sent: Thursday
Unfortunately I can't post the full resource agent here.
In our search for solutions we did find a resource agent for managing AWS
Elastic IPs:
https://github.com/moomindani/aws-eip-resource-agent/blob/master/eip. This
was not what we wanted, but it will give you an idea of how it can work.
That would definitely be of wider interest.
I could see modifying the IPaddr2 RA to take some new arguments for
AWS/Azure parameters, and if those are configured, it would do the
appropriate API requests.
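If that were implemented, the configuration might look something like this (purely a sketch: the cloud-specific parameter shown here is hypothetical and does not exist in the stock IPaddr2 agent, and the IDs are made up):

```shell
# Hypothetical: aws_eni_id is NOT a real ocf:heartbeat:IPaddr2 parameter;
# this only illustrates how such an extension could be configured.
pcs resource create cluster-vip ocf:heartbeat:IPaddr2 \
    ip=10.0.0.100 cidr_netmask=24 \
    aws_eni_id=eni-0123456789abcdef0 \
    op monitor interval=30s
```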
On Thu, 2017-08-24 at 23:27 +, Eric Robinson wrote:
Leon -- I will pay you one trillion samolians for that resource agent! Any way
we can get our hands on a copy?
--
Eric Robinson
From: Leon Steffens [mailto:l...@steffensonline.com]
Sent: Thursday, August 24, 2017 3:48 PM
To: Cluster Labs - All topics related to open-source clustering welcomed
On 08/24/2017 04:24 PM, Ken Gaillot wrote:
> How could it know that, from a cold boot? It doesn't know if the other
> node is down, or up but unreachable. wait_for_all is how to keep that
> fencing from happening at every cluster start, but the trade-off is you
> can't cold-boot a partial cluster.
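For reference, the relevant corosync.conf fragment looks like this (two_node: 1 implicitly turns wait_for_all on, which is why it has to be overridden explicitly):

```
quorum {
    provider: corosync_votequorum
    two_node: 1
    # two_node implies wait_for_all: 1 by default; setting it to 0
    # lets a single node gain quorum from a cold boot, at the cost
    # of a possible fence race if both nodes boot partitioned.
    wait_for_all: 0
}
```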
That's what we did in AWS. The IPaddr2 resource agent does an arp
broadcast after changing the local IP but this does not work in AWS
(probably for the same reasons as Azure).
We created our own OCF resource agent that uses the Amazon APIs to move the
IP in AWS land and made that dependent on the
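For anyone wanting to try the same approach, a minimal sketch of such an agent might look like the following. The parameter names and IDs are made up, a real agent also needs meta-data and validate-all actions and proper OCF return codes, and the aws CLI must be configured with credentials on each node:

```shell
#!/bin/sh
# Sketch of an OCF-style agent that re-points an AWS Elastic IP at the
# local node's ENI on start. Names and defaults are illustrative only.

ALLOC_ID="${OCF_RESKEY_allocation_id:-eipalloc-example}"
ENI_ID="${OCF_RESKEY_eni_id:-eni-example}"

eip_start() {
    # Take the Elastic IP from wherever it currently lives
    aws ec2 associate-address --allocation-id "$ALLOC_ID" \
        --network-interface-id "$ENI_ID" --allow-reassociation
}

eip_stop() {
    # Nothing to undo locally; the peer claims the EIP on its own start
    return 0
}

eip_monitor() {
    # Healthy only if the EIP currently points at our ENI
    aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" \
        --query 'Addresses[0].NetworkInterfaceId' --output text \
        | grep -q "^${ENI_ID}\$"
}

case "${1:-}" in
    start)   eip_start ;;
    stop)    eip_stop ;;
    monitor) eip_monitor ;;
    "")      : ;;  # no action given (e.g. when sourced for testing)
    *)       echo "usage: $0 {start|stop|monitor}" >&2; exit 2 ;;
esac
```

The moomindani agent linked above does essentially this with more error handling.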
> Don't use Azure? ;)
That would be my preference. But since I'm stuck with Azure (management
decision) I need to come up with something. It appears there is an Azure API to
make changes on-the-fly from a Linux box. Maybe I'll write a resource agent to
change Azure and make IPaddr2 dependent on
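The dependency part is straightforward with constraints. Assuming the custom agent ends up registered as, say, ocf:local:azure-ip and the IPaddr2 resource is named cluster-vip (both names made up), something like:

```shell
# Hypothetical resource and agent names, for illustration only
pcs resource create azure-vip-mover ocf:local:azure-ip
pcs constraint colocation add cluster-vip with azure-vip-mover INFINITY
pcs constraint order azure-vip-mover then cluster-vip
```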
On Thu, 2017-08-24 at 15:53 -0500, Dimitri Maziuk wrote:
On 08/24/2017 03:40 PM, Ken Gaillot wrote:
> You could set wait_for_all to 0 in corosync.conf, then boot. The living
> node should try to fence the other one, and proceed if fencing succeeds.
Didn't I just read a thread that says it won't: the other node is
already down?
--
Dimitri Maziuk
Programmer/sysadmin
On Wed, 2017-08-23 at 23:33 +, Eric Robinson wrote:
> I have a BIG correction.
>
> If you follow the instructions titled, "Pacemaker 1.1 for Corosync 2.x," and
> NOT the ones entitled, "Pacemaker 1.1 for CMAN or Corosync 1.x," guess what?
> It installs cman anyway, and you spend a couple of
On Thu, 2017-08-24 at 15:10 -0500, Dimitri Maziuk wrote:
PS. centos 7.latest w/ the current pcs/corosync/pacemaker rpms as
distributed by centos, resources are stonith:fence_scsi, IPaddr2, and ZFS.
--
Dimitri Maziuk
Programmer/sysadmin
BioMagResBank, UW-Madison -- http://www.bmrb.wisc.edu
Hi everyone,
I seem to remember seeing this once before, but my google-fu is
failing: I've a 2-node active-passive cluster, when I power up one node
only, resources remain stopped. Is there a way to boot a cluster on one
node only?
-- Note that if I boot up the other node everything starts, and
On 2017-08-24 03:56 PM, Eric Robinson wrote:
I deployed a couple of cluster nodes in Azure and found out right away that
floating a virtual IP address between nodes does not work because Azure does
not honor IP changes made from within the VMs. IP changes must be made to
virtual NICs in the Azure portal itself. Anybody know of an easy way
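One starting point: the Azure CLI can drive the same NIC changes the portal does, so a takeover script could move the address between NIC ip-configs, e.g. (all resource names here are placeholders):

```shell
# Remove the floating IP's config from the failed node's NIC...
az network nic ip-config delete \
    --resource-group my-rg --nic-name node1-nic --name vip
# ...and recreate it as a secondary config on the surviving node's NIC
az network nic ip-config create \
    --resource-group my-rg --nic-name node2-nic \
    --name vip --private-ip-address 10.0.0.100
```

Note these calls can take tens of seconds to complete, which matters for failover timing.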