[ClusterLabs] Is there an Export Control Classification Number (ECCN) associated with Corosync?

2017-05-02 Thread Fry, George H


___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] How to fence cluster node when SAN filesystem fail

2017-05-02 Thread Albert Weng
Hi Klaus,

Thank you for your quick reply.

Below is my crm_resource output:
# crm_resource --resource ora_fs --query-xml
ora_fs (ocf::heartbeat:Filesystem): Started node2.albertlab.com
xml:
(resource XML stripped by the mail archiver)

I checked the documentation on the 'on-fail' operation option. You're right,
my filesystem resource behaves correctly: it fails over to another node and
'restarts' there. So, to get the "move to another node and fence the failed
node" behavior, I should add the on-fail parameter as below, am I right?

# pcs resource op remove ora_fs monitor
# pcs resource op add ora_fs monitor interval=20 timeout=40 on-fail=fence
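
For reference, a rough sketch of what the resulting ora_fs primitive could
look like with that change (the device, directory and fstype values are
invented for illustration, since the archiver stripped my actual XML):

<primitive id="ora_fs" class="ocf" provider="heartbeat" type="Filesystem">
  <instance_attributes id="ora_fs-instance_attributes">
    <nvpair id="ora_fs-device" name="device" value="/dev/mapper/ora_vg-lv"/>
    <nvpair id="ora_fs-directory" name="directory" value="/u01"/>
    <nvpair id="ora_fs-fstype" name="fstype" value="ext4"/>
  </instance_attributes>
  <operations>
    <op id="ora_fs-monitor-interval-20" name="monitor" interval="20"
        timeout="40" on-fail="fence"/>
  </operations>
</primitive>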

I'm curious why only my 'filesystem' resource does not trigger stonith,
while when the 'vip' resource fails, the host running it is rebooted
immediately. (stonith: fence_ipmilan)

Here is my 'ora_vip' resource output:
# crm_resource --resource ora_vip --query-xml
ora_vip (ocf::heartbeat:IPaddr2): Started node2.albertlab.com
xml:
(resource XML stripped by the mail archiver)

thanks a lot.

On Tue, May 2, 2017 at 9:17 PM, Klaus Wenninger  wrote:

> On 05/02/2017 02:57 PM, Ken Gaillot wrote:
> > Hi,
> >
> > Upstream documentation on fencing in Pacemaker is available at:
> >
> > http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm139683949958512
> >
> > Higher-level tools such as crm shell and pcs make it easier; see their
> > man pages and other documentation for details.
> >
> >
> > On 05/01/2017 10:35 PM, Albert Weng wrote:
> >> Hi All,
> >>
> >> My environment :
> >> (1) two node (active/passive) pacemaker cluster
> >> (2) SAN storage attached, add resource type "filesystem"
> >> (3) OS : RHEL 7.2
> >>
> >> In older versions of the RHEL cluster stack, when the attached SAN
> >> storage path was lost (e.g., a filesystem failure), the active node
> >> would trigger its fence device and reboot itself.
> >>
> >> But with Pacemaker on RHEL, when I remove the fiber cable on the
> >> active node, all resources fail over to the passive node normally, but
> >> the active node doesn't reboot.
>
> That is the default on-fail behavior for Pacemaker operations (restart,
> either on the node itself or on another node, except for stop, where the
> default is fence).
> Setting on-fail=fence for the start and monitor operations as well should
> give you the desired behavior, as I understand it from your description.
>
> Regards,
> Klaus
>
> >>
> >> How do I trigger a fence reboot action when the SAN filesystem is lost?
> >>
> >> Thanks a lot~~~
> >>
> >>
> >> --
> >> Kind regards,
> >> Albert Weng
> >>



-- 
Kind regards,
Albert Weng


Re: [ClusterLabs] Clusterlabs Summit 2017 (Nuremberg, 6-7 September) - Hotels and Topics

2017-05-02 Thread Digimer
On 02/05/17 06:47 AM, Kristoffer Grönlund wrote:
> Hi everyone!
> 
> Here's a quick update on the summit happening at the SUSE office in
> Nuremberg on September 6-7.
> 
> I am still collecting hotel reservations from attendees. In order to
> notify the hotel about how many rooms we actually need, I'll need a
> complete list of people who want to attend before 15 June, at the
> latest. So if you plan to attend and need a hotel room, let me know as
> soon as possible by emailing me! There are 40 hotel rooms reserved,
> and about half of those are claimed at this point.
> 
> We are starting to have a preliminary list of topics ready. The event
> area has a projector and A/V equipment available, so we should be able
> to show slides for those wanting to present a particular topic.
> 
> This is the current list of topics:
> 
> Requester/Presenter            Topic
> 
> Andrew Beekhof or Ken Gaillot  New container "bundle" feature in Pacemaker
> Ken Gaillot                    What would Pacemaker 1.2 or 2.0 look like?
> Ken Gaillot                    Ideas for the OCF resource agent standard
> Klaus Wenninger                Recent work and future plans for SBD
> Chrissie Caulfield             knet and corosync 3
> Chris Feist (requestor)        kubernetes
> Chris Feist (requestor)        Multisite (QDevice/Booth)
> Madison Kelly                  ScanCore and "Intelligent Availability"
> Kristoffer Gronlund,           Hawk, Cluster API and future plans
> Ayoub Belarbi
> 
> We also have Kai Wagner from the openATTIC team attending, and he has
> agreed to present openATTIC. For those who aren't familiar with it,
> openATTIC is a storage management tool with some support for managing
> things like LVM, DRBD and Ceph.
> 
> I am also happy to say that Adam Spiers from the SUSE Cloud team will be
> attending the summit, and hopefully I can convince him to present their
> work on using Pacemaker with Openstack, the current state of Openstack
> HA and perhaps some of his future plans and wishes around HA.
> 
> Keep adding topics to the list! We'll work out a rough schedule for
> the two days as the event draws nearer, but I'd hope to leave enough
> room for deeper discussions around the topics as we work through
> them.
> 
> As a reminder, the plans for the summit are being collected at the
> Alteeve! planning wiki, here:
> 
> http://plan.alteeve.ca/index.php/Main_Page
> 
> Cheers,
> Kristoffer

Anyone who wants an account on the wiki, please just email me. I keep
registration closed because even with recaptcha, spammers were getting
through. :)

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould



Re: [ClusterLabs] How to fence cluster node when SAN filesystem fail

2017-05-02 Thread Klaus Wenninger
On 05/02/2017 02:57 PM, Ken Gaillot wrote:
> Hi,
>
> Upstream documentation on fencing in Pacemaker is available at:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm139683949958512
>
> Higher-level tools such as crm shell and pcs make it easier; see their
> man pages and other documentation for details.
>
>
> On 05/01/2017 10:35 PM, Albert Weng wrote:
>> Hi All,
>>
>> My environment :
>> (1) two node (active/passive) pacemaker cluster
>> (2) SAN storage attached, add resource type "filesystem"
>> (3) OS : RHEL 7.2
>>
>> In older versions of the RHEL cluster stack, when the attached SAN
>> storage path was lost (e.g., a filesystem failure), the active node
>> would trigger its fence device and reboot itself.
>>
>> But with Pacemaker on RHEL, when I remove the fiber cable on the active
>> node, all resources fail over to the passive node normally, but the
>> active node doesn't reboot.

That is the default on-fail behavior for Pacemaker operations (restart,
either on the node itself or on another node, except for stop, where the
default is fence).
Setting on-fail=fence for the start and monitor operations as well should
give you the desired behavior, as I understand it from your description.
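
For example, a minimal sketch using the resource name from elsewhere in
this thread (timeouts are assumptions; adjust them to your resource, and
remove any existing operation of the same name first):

# pcs resource op add ora_fs start interval=0s timeout=60 on-fail=fence
# pcs resource op add ora_fs monitor interval=20 timeout=40 on-fail=fence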

Regards,
Klaus

>>
>> How do I trigger a fence reboot action when the SAN filesystem is lost?
>>
>> Thanks a lot~~~
>>
>>
>> -- 
>> Kind regards,
>> Albert Weng
>>




Re: [ClusterLabs] How to fence cluster node when SAN filesystem fail

2017-05-02 Thread Ken Gaillot
Hi,

Upstream documentation on fencing in Pacemaker is available at:

http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm139683949958512

Higher-level tools such as crm shell and pcs make it easier; see their
man pages and other documentation for details.


On 05/01/2017 10:35 PM, Albert Weng wrote:
> Hi All,
> 
> My environment :
> (1) two node (active/passive) pacemaker cluster
> (2) SAN storage attached, add resource type "filesystem"
> (3) OS : RHEL 7.2
> 
> In older versions of the RHEL cluster stack, when the attached SAN
> storage path was lost (e.g., a filesystem failure), the active node
> would trigger its fence device and reboot itself.
> 
> But with Pacemaker on RHEL, when I remove the fiber cable on the active
> node, all resources fail over to the passive node normally, but the
> active node doesn't reboot.
> 
> How do I trigger a fence reboot action when the SAN filesystem is lost?
> 
> Thanks a lot~~~
> 
> 
> -- 
> Kind regards,
> Albert Weng
> 


[ClusterLabs] Clusterlabs Summit 2017 (Nuremberg, 6-7 September) - Hotels and Topics

2017-05-02 Thread Kristoffer Grönlund
Hi everyone!

Here's a quick update on the summit happening at the SUSE office in
Nuremberg on September 6-7.

I am still collecting hotel reservations from attendees. In order to
notify the hotel about how many rooms we actually need, I'll need a
complete list of people who want to attend before 15 June, at the
latest. So if you plan to attend and need a hotel room, let me know as
soon as possible by emailing me! There are 40 hotel rooms reserved,
and about half of those are claimed at this point.

We are starting to have a preliminary list of topics ready. The event
area has a projector and A/V equipment available, so we should be able
to show slides for those wanting to present a particular topic.

This is the current list of topics:

Requester/Presenter            Topic

Andrew Beekhof or Ken Gaillot  New container "bundle" feature in Pacemaker
Ken Gaillot                    What would Pacemaker 1.2 or 2.0 look like?
Ken Gaillot                    Ideas for the OCF resource agent standard
Klaus Wenninger                Recent work and future plans for SBD
Chrissie Caulfield             knet and corosync 3
Chris Feist (requestor)        kubernetes
Chris Feist (requestor)        Multisite (QDevice/Booth)
Madison Kelly                  ScanCore and "Intelligent Availability"
Kristoffer Gronlund,           Hawk, Cluster API and future plans
Ayoub Belarbi

We also have Kai Wagner from the openATTIC team attending, and he has
agreed to present openATTIC. For those who aren't familiar with it,
openATTIC is a storage management tool with some support for managing
things like LVM, DRBD and Ceph.

I am also happy to say that Adam Spiers from the SUSE Cloud team will be
attending the summit, and hopefully I can convince him to present their
work on using Pacemaker with Openstack, the current state of Openstack
HA and perhaps some of his future plans and wishes around HA.

Keep adding topics to the list! We'll work out a rough schedule for
the two days as the event draws nearer, but I'd hope to leave enough
room for deeper discussions around the topics as we work through
them.

As a reminder, the plans for the summit are being collected at the
Alteeve! planning wiki, here:

http://plan.alteeve.ca/index.php/Main_Page

Cheers,
Kristoffer

-- 
// Kristoffer Grönlund
// kgronl...@suse.com



Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Albert Weng
Hi Marek,

thanks for your quick response.

Following your advice, when I run "pcs status" I see the following for the
fence devices:
ipmi-fence-node1 (stonith:fence_ipmilan): Started clusterb
ipmi-fence-node2 (stonith:fence_ipmilan): Started clusterb

Does it mean both IPMI stonith devices are working correctly? (The rest of
the resources fail over to the other node correctly.)

Should I use location constraints to keep the stonith devices from running
on the same node, like this?
# pcs constraint location ipmi-fence-node1 prefers clustera
# pcs constraint location ipmi-fence-node2 prefers clusterb

thanks a lot

On Tue, May 2, 2017 at 4:25 PM, Marek Grac  wrote:

> Hi,
>
>
>
> On Tue, May 2, 2017 at 3:39 AM, Albert Weng  wrote:
>
>> Hi All,
>>
>> I have created an active/passive pacemaker cluster on RHEL 7.
>>
>> here is my environment:
>> clustera : 192.168.11.1
>> clusterb : 192.168.11.2
>> clustera-ilo4 : 192.168.11.10
>> clusterb-ilo4 : 192.168.11.11
>>
>> both nodes are connected to SAN storage for shared storage.
>>
>> I used the following commands to create my stonith devices on each node:
>> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan
>> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
>> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
>> op monitor interval=60s
>>
>> # pcs -f stonith_cfg stonith create ipmi-fence-node2 fence_ipmilan
>> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
>> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password op
>> monitor interval=60s
>>
>> # pcs status
>> ipmi-fence-node1 clustera
>> ipmi-fence-node2 clusterb
>>
>> but when I fail over to the passive node and then run
>> # pcs status
>>
>> ipmi-fence-node1    clusterb
>> ipmi-fence-node2    clusterb
>>
>> why are both fence devices located on the same node?
>>
>
> When node 'clustera' is down, is there any place where ipmi-fence-node*
> can be executed?
>
> If you are worried that a node cannot fence itself, you are right. But
> once 'clustera' becomes available again, the attempt to fence clusterb
> will work as expected.
>
> m,
>


-- 
Kind regards,
Albert Weng




Re: [ClusterLabs] Question about fence_mpath

2017-05-02 Thread Marek Grac
Hi,

On Fri, Apr 28, 2017 at 8:09 PM, Chris Adams  wrote:

> It creates, but any time anything tries to fence (manually or by
> rebooting a node), I get errors in /var/log/messages.  Trying to
> manually fence a node gets:
>
> # pcs stonith fence node2 --off
> Error: unable to fence 'node2'
> Command failed: No such device
>
> Another issue I run into is that fence_mpath tries to access/write to
> /var/run/cluster/mpath.devices, but nothing else creates that directory
> (and it seems that fence_mpath tries to read from it before writing it
> out).
>

The mpath.devices file is created during the 'unfence' (ON) action; this is
very similar to fence_scsi, where unfencing is required as well.
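
For context, a sketch of how fence_mpath is typically wired into pacemaker
so the unfence (ON) action runs automatically when a node starts (device
path, node names and keys are taken from the test scenario below, so treat
them as placeholders):

# pcs stonith create mpath-fence fence_mpath \
    devices="/dev/mapper/yellow" pcmk_host_map="node63:123;node64:456" \
    pcmk_reboot_action="off" meta provides="unfencing"

The meta attribute provides=unfencing is what makes pacemaker unfence a
node before starting resources on it.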


> Anybody using fence_mpath as a STONITH device with pacemaker/corosync on
> CentOS 7?

This is my testing scenario:

1) working multipath
2) add a name to the multipath device [on every node]
  * multipath -l (will get you the WWID of the device)
  * in /etc/multipath.conf
  * uncomment the multipaths/multipath section and set the WWID & alias
  * in this example [yellow]
3) on each node:
  * add reservation_key 0x123 (where 0x123 is a value unique to each node)
  * in this example [0x123, 0x456]
4) on each node: restart multipathd and check that you have /dev/mapper/yellow
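
Putting steps 2) and 3) together, the relevant /etc/multipath.conf
fragment might look like this (the WWID is a placeholder; use the one
reported by multipath -l):

defaults {
        reservation_key 0x123
}

multipaths {
        multipath {
                wwid  360a98000324669436c2b45666c567942
                alias yellow
        }
}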

node63:

[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: OFF
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node63:

[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o on -k 123
Success: Powered ON
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-063 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node64:

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: OFF
--
node64:
(attempt to fence another machine before the node itself has been unfenced)

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o off -k 123
Failed: Cannot open file "/var/run/cluster/mpath.devices"
-
node64:

[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o on -k 456
Success: Powered ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: ON
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o off -k 123
Success: Powered OFF
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 123
Status: OFF
[root@host-064 ~]# fence_mpath -d /dev/mapper/yellow -o status -k 456
Status: ON

 m,


Re: [ClusterLabs] stonith device locate on same host in active/passive cluster

2017-05-02 Thread Marek Grac
Hi,



On Tue, May 2, 2017 at 3:39 AM, Albert Weng  wrote:

> Hi All,
>
> I have created an active/passive pacemaker cluster on RHEL 7.
>
> here is my environment:
> clustera : 192.168.11.1
> clusterb : 192.168.11.2
> clustera-ilo4 : 192.168.11.10
> clusterb-ilo4 : 192.168.11.11
>
> both nodes are connected to SAN storage for shared storage.
>
> I used the following commands to create my stonith devices on each node:
> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan
> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
> op monitor interval=60s
>
> # pcs -f stonith_cfg stonith create ipmi-fence-node2 fence_ipmilan
> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password op
> monitor interval=60s
>
> # pcs status
> ipmi-fence-node1 clustera
> ipmi-fence-node2 clusterb
>
> but when I fail over to the passive node and then run
> # pcs status
>
> ipmi-fence-node1    clusterb
> ipmi-fence-node2    clusterb
>
> why are both fence devices located on the same node?
>

When node 'clustera' is down, is there any place where ipmi-fence-node* can
be executed?

If you are worried that a node cannot fence itself, you are right. But
once 'clustera' becomes available again, the attempt to fence clusterb
will work as expected.
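
If you would rather keep each fence device off the node it fences, one
option is a soft location constraint (a sketch; the finite score is a
preference rather than a ban, so the device can still run anywhere if
needed):

# pcs constraint location ipmi-fence-node1 avoids clustera=1000
# pcs constraint location ipmi-fence-node2 avoids clusterb=1000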

m,