Re: [ClusterLabs] split brain cluster

2015-11-16 Thread Richard Korsten
Hi Emmanuel,

No, stonith is not enabled. And yes, I'm on a Red Hat-based system.

Greetings.

On Mon, 16 Nov 2015 at 15:09, emmanuel segura wrote:

>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch08.html
> and
> https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md
> anyway you can use pcs config if you are using redhat
___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] split brain cluster

2015-11-16 Thread Bliu
Richard,
Could you use tcpdump to check whether there is communication between the two
nodes? And please check the firewall as well.
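A minimal sketch of those checks (assuming the default corosync UDP port 5405 and firewalld on a RHEL/CentOS 7 system; adjust port, interface, and node names for your setup):

```shell
# On bckilb01: watch for corosync traffic to/from the peer node.
# With "transport: udpu", membership traffic is unicast UDP, port 5405 by default.
tcpdump -ni any udp port 5405 and host bckilb02

# Check that the local firewall allows the cluster ports
# (on older iptables-only systems, use "iptables -L -n" instead).
firewall-cmd --list-all
firewall-cmd --permanent --add-service=high-availability && firewall-cmd --reload

# Confirm corosync is actually bound to the expected address and port.
ss -ulnp | grep corosync
```

If tcpdump shows packets leaving one node but never arriving at the other, the problem is in the network or firewall rather than in pacemaker itself.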

bin

> On 16 Nov 2015, at 21:43, Richard Korsten wrote:
> 
> Hello Cluster guru's.
> 
> I'm having a bit of trouble with a cluster of ours. After an outage of one node 
> it went into a split-brain situation where the two nodes aren't talking to each 
> other. Both say the other node is offline. I've tried to get them both up and 
> running again by stopping and starting the cluster services on both nodes, 
> one at a time, without luck.
> 
> I've been trying to reproduce the problem with a set of test servers, but I 
> can't seem to get it into the same state. 
> 
> Because of this I'm looking for some help, because I'm not that familiar with 
> pacemaker/corosync.
> 
> This is the output of the command pcs status:
> Cluster name: MXloadbalancer 
> Last updated: Mon Nov 16 10:18:44 2015 
> Last change: Fri Nov 6 15:35:22 2015 
> Stack: corosync 
> Current DC: bckilb01 (1) - partition WITHOUT quorum 
> Version: 1.1.12-a14efad 
> 2 Nodes configured 
> 3 Resources configured 
> 
> Online: [ bckilb01 ] 
> OFFLINE: [ bckilb02 ] 
> 
> Full list of resources:
>  haproxy (systemd:haproxy): Stopped 
> 
> Resource Group: MXVIP 
> ip-192.168.250.200 (ocf::heartbeat:IPaddr2): Stopped 
> ip-192.168.250.201 (ocf::heartbeat:IPaddr2): Stopped 
> 
> PCSD Status: 
> bckilb01: Online 
> bckilb02: Online 
> 
> Daemon Status: 
> corosync: active/enabled 
> pacemaker: active/enabled 
> pcsd: active/enabled 
> 
> 
> And the config:
> totem {
>     version: 2
>     secauth: off
>     cluster_name: MXloadbalancer
>     transport: udpu
> }
> 
> nodelist {
>     node {
>         ring0_addr: bckilb01
>         nodeid: 1
>     }
>     node {
>         ring0_addr: bckilb02
>         nodeid: 2
>     }
> }
> 
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
> }
> 
> logging {
>     to_syslog: yes
> }
> 
> If anyone has an idea about how to get them working together again, please let 
> me know.
> 
> Greetings Richard


Re: [ClusterLabs] split brain cluster

2015-11-16 Thread emmanuel segura
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch08.html
and 
https://github.com/ClusterLabs/pacemaker/blob/master/doc/pcs-crmsh-quick-ref.md
Anyway, you can use pcs config if you are using Red Hat.
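For example, a few commands to inspect the stonith state with pcs (a sketch; exact output varies by pcs version):

```shell
# Show the full cluster configuration, including any stonith resources.
pcs config

# Check whether fencing is enabled at the cluster level.
pcs property list --all | grep stonith-enabled

# List configured stonith devices (empty output means none are configured).
pcs stonith show
```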

2015-11-16 15:01 GMT+01:00 Richard Korsten :
> Hi Emmanuel,
>
> I'm not sure, how can I check it?
>
> Greetings Richard
>
> On Mon, 16 Nov 2015 at 14:58, emmanuel segura wrote:
>>
>> Have you configured stonith?



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^



[ClusterLabs] How to configure fencing in a two-node cluster? Who can help me?

2015-11-16 Thread Shilu
The following configuration is mine. It worked well before I added the fencing
configuration. But when I add the primitive st-ipmilan1 stonith:ipmilan, it
reports the following error.
I want to know how to configure it and how to confirm that it will work correctly.

Failed actions:
st-ipmilan1_monitor_0 (node=ubuntu212, call=-1, rc=1, status=Timed Out,
    last-rc-change=Mon Nov 16 06:47:19 2015, queued=0ms, exec=0ms): unknown error


node $id="3232244179" ubuntu211
node $id="3232244180" ubuntu212
primitive VIP ocf:heartbeat:IPaddr \
 params ip="192.168.33.129" \
 op monitor timeout="10s" interval="1s"
primitive lunit ocf:heartbeat:iSCSILogicalUnit \
 params implementation="tgt" target_iqn="hoo" lun="1" path="rbd/hoo" tgt_bstype="rbd" \
 op monitor timeout="10s" interval="1s"
primitive st-ipmilan1 stonith:ipmilan \
 params hostname="ubuntu211" ipaddr="192.168.33.127" port="623" auth="md5" priv="admin" login="admin" password="12345678"
primitive tgt ocf:heartbeat:iSCSITarget \
 params implementation="tgt" iqn="hoo" tid="5" \
 op monitor interval="1s" timeout="10s"
group cluster VIP tgt lunit
property $id="cib-bootstrap-options" \
 dc-version="1.1.10-42f2063" \
 cluster-infrastructure="corosync" \
 stonith-enabled="true" \
 no-quorum-policy="ignore"
-
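One way to narrow down a "Timed Out" monitor on an IPMI-based stonith resource is to test the BMC outside the cluster first. A sketch, assuming ipmitool is installed and using the parameters from the configuration above:

```shell
# Verify that the BMC at 192.168.33.127 answers IPMI-over-LAN with these
# credentials. If this hangs or fails, the stonith monitor will too.
ipmitool -I lan -H 192.168.33.127 -p 623 -U admin -P 12345678 chassis power status
```

A monitor timeout usually means the node running the monitor cannot reach the BMC at all (wrong address, blocked port 623/udp, or IPMI-over-LAN disabled in the BMC), rather than a problem in the cluster configuration itself.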





Re: [ClusterLabs] split brain cluster

2015-11-16 Thread emmanuel segura
Hi,

When you configure a cluster, the first thing you need to think about
is stonith, a.k.a. fencing.

Thanks
Emmanuel
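As a sketch of what that looks like with pcs on a Red Hat-based system (the device names, agent choice, addresses, and credentials below are placeholders; pick an agent that matches your hardware):

```shell
# List the fence agents available on this system.
pcs stonith list

# Example: one IPMI-based fence device per node (hypothetical BMC
# addresses and credentials; substitute your own).
pcs stonith create fence-bckilb01 fence_ipmilan pcmk_host_list="bckilb01" \
    ipaddr="10.0.0.1" login="admin" passwd="secret" lanplus="1"
pcs stonith create fence-bckilb02 fence_ipmilan pcmk_host_list="bckilb02" \
    ipaddr="10.0.0.2" login="admin" passwd="secret" lanplus="1"

# Then turn fencing on.
pcs property set stonith-enabled=true
```

With fencing in place, a two-node cluster can recover from a split brain by powering off the lost node instead of both sides declaring each other offline.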

2015-11-16 15:18 GMT+01:00 Richard Korsten :
> Hi Emmanuel,
>
> No stonith is not enabled. And yes i'm on a redhat based system.
>
> Greetings.



-- 
  .~.
  /V\
 //  \\
/(   )\
^`~'^


Re: [ClusterLabs] Antw: Re: [Question] Question about mysql RA.

2015-11-16 Thread Dejan Muhamedagic
Hi Hideo-san,

On Thu, Nov 12, 2015 at 06:15:29PM +0900, renayama19661...@ybb.ne.jp wrote:
> Hi Ken,
> Hi Ulrich,
> 
> Hi All,
> 
> I sent a patch.
>  * https://github.com/ClusterLabs/resource-agents/pull/698

Your patch was merged. Many thanks.

Cheers,

Dejan

> 
> Please confirm it.
> 
> Best Regards,
> Hideo Yamauchi.
> 
> 
> - Original Message -
> > From: "renayama19661...@ybb.ne.jp" 
> > To: Cluster Labs - All topics related to open-source clustering welcomed 
> > 
> > Cc: 
> > Date: 2015/11/5, Thu 19:36
> > Subject: Re: [ClusterLabs] Antw: Re:  [Question] Question about mysql RA.
> > 
> > Hi Ken,
> > Hi Ulrich,
> > 
> > Thank you for comment
> > 
> > The RA of mysql seemed to have a problem somehow or other from the 
> > beginning as 
> > far as I heard the opinion of Ken and Ulrich.
> > 
> > I wait for the opinion of other people a little more, and I make a patch.
> > 
> > Best Regards,
> > Hideo Yamauchi.
> > 
> > 
> > - Original Message -
> >>  From: Ulrich Windl 
> >>  To: users@clusterlabs.org; kgail...@redhat.com
> >>  Cc: 
> >>  Date: 2015/11/5, Thu 16:11
> >>  Subject: [ClusterLabs] Antw: Re:  [Question] Question about mysql RA.
> >> 
> >> Ken Gaillot wrote on 04.11.2015 at 16:44 in message <563a27c2.5090...@redhat.com>:
> >>>   On 11/04/2015 04:36 AM, renayama19661...@ybb.ne.jp wrote:
> >>  [...]
>        pid=`cat $OCF_RESKEY_pid 2> /dev/null `
>        /bin/kill $pid > /dev/null
> >>> 
> >>>   I think before this line, the RA should do a "kill -0" to check whether
> >>>   the PID is running, and return $OCF_SUCCESS if not. That way, we can
> >>>   still return an error if the real kill fails.
> >> 
> >>  And remove the stale PID file if there is no such pid. For very busy systems one
> >>  could use ps for that PID to see whether the PID belongs to the expected
> >>  process. There is a small chance that a PID exists, but does not belong to the
> >>  expected process...
> >> 
> >>> 
>        rc=$?
>        if [ $rc != 0 ]; then
>            ocf_exit_reason "MySQL couldn't be stopped"
>            return $OCF_ERR_GENERIC
>        fi
>    (snip)
>    -
>  
>    The mysql RA has had this code since the old days.
>     * http://hg.linux-ha.org/agents/file/67234f982ab7/heartbeat/mysql 
>  
>    Does anyone know the reason it was written this way?
>    Could it possibly be there to account for MySQL Cluster?
>  
>    I am thinking about a patch for this part of the mysql RA.
>    I would like to know the detailed reason.
>  
>    Best Regards,
>    Hideo Yamauchi.
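Taken together, the suggestions quoted above (check the PID with "kill -0" first, remove a stale PID file, and report an error only when the real kill fails) can be sketched as a standalone shell function. This is an illustration only, not the actual resource-agent code; the OCF return codes are hard-coded here for brevity:

```shell
#!/bin/sh
# Sketch of the discussed stop logic: a dead or missing PID is treated as
# "already stopped" (success) and the stale PID file is cleaned up, so an
# error is only reported when a live process refuses the kill.
OCF_SUCCESS=0
OCF_ERR_GENERIC=1

mysql_stop_sketch() {
    pidfile="$1"
    pid=$(cat "$pidfile" 2>/dev/null)
    if [ -z "$pid" ] || ! kill -0 "$pid" 2>/dev/null; then
        rm -f "$pidfile"        # stale PID file: remove it
        return $OCF_SUCCESS     # nothing running, nothing to stop
    fi
    if ! kill "$pid" >/dev/null 2>&1; then
        echo "MySQL couldn't be stopped" >&2
        return $OCF_ERR_GENERIC
    fi
    return $OCF_SUCCESS
}
```

A further refinement mentioned in the thread, not shown here, would be to check with ps that the PID actually belongs to a mysqld process before killing it.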


Re: [ClusterLabs] Antw: Re: [Question] Question about mysql RA.

2015-11-16 Thread renayama19661014
Hi Dejan,


All right!

Thank you for merging the patch.

Many thanks!
Hideo Yamauchi.

