On Tue, Nov 27, 2012 at 8:38 PM, Rafał Radecki <[email protected]> wrote:
> The crm_master utility must be present in an OCF master/slave script as I
> know.
>
> Currently I have some other doubts:
> - I cannot make a group of resources TomcatSolrClone:Master & TSVIP; is it
> possible to group resources of Master/Slave & normal type?

No.
But a master/slave can contain a group (although all resources within it
must support the promote and demote actions).
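
For illustration, a hypothetical crm shell sketch of a master/slave wrapping
a group (the names and the second resource are made up; both members would
have to implement promote/demote):

    group grpSolr TomcatSolr SomeOtherPromotableRA
    ms SolrGroupMS grpSolr \
        meta master-max="1" master-node-max="1" \
        clone-max="2" clone-node-max="1" notify="true"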

> - when I kill java/tomcat on current Master node which has TSVIP &
> TomcatSolrClone:Master the tomcat gets restarted on that node but I would
> like (if it is possible) to migrate TSVIP and TomcatSolrClone:Master to the
> second node, is it possible?

I think you need migration-threshold=1
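
Roughly like this, adapted from the config you posted (just a sketch; note
that migration-threshold is a meta attribute, not a params entry):

    ms TomcatSolrClone TomcatSolr \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="false" globally-unique="true" \
        ordered="false" target-role="Master" migration-threshold="1"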

> - when I kill java/tomcat on current Master node which has TSVIP &
> TomcatSolrClone:Master and tomcat cannot be restarted on that node TSVIP
> stays on that node and TomcatSolrClone:Master moves to the other node so
> afterwards I have:
>    - TSVIP on one node
>    - TomcatSolrClone:Master on the other node
> despite the fact that in configuration I have:
> location TSVIP_prefer_storage1 TSVIP 100: storage1
> location TSVIP_prefer_storage2 TSVIP 100: storage2
> colocation TomcatSolrClone_with_TSVIP inf: TomcatSolrClone:Master
> TSVIP:Started -> this obviously does not work in this situation

Sounds very much like a bug. Can you run crm_report and attach it to a
new bug at bugs.clusterlabs.org?
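
In the meantime you could try the more conventional constraint direction,
where the IP follows the master rather than the master following the IP.
Untested sketch, not a confirmed fix for what you are seeing:

    colocation TSVIP_with_master inf: TSVIP TomcatSolrClone:Master
    order TSVIP_after_promote inf: TomcatSolrClone:promote TSVIP:start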

> order TomcatSolrClone_after_TSVIP inf: TSVIP:start TomcatSolrClone:promote
>
> Best regards,
> Rafal.
>
> 2012/11/26 Fabian Herschel <[email protected]>
>
>>
>> Hi Rafal,
>>
>> placing a new master on the "right" (i.e. not restarted) side is
>> typically done by the crm_master calls. After you have killed one
>> side, you might check the scoring of the resources with
>> "ptest -Ls" (or the matching other call without ptest - sorry, I do
>> not remember the other command).
>>
>> On SLES "ptest -Ls" will show you the scores for the "live"
>> situation, and if crm_master is used it will also show you the
>> promotion scores.
>>
>> In my resource agents the tomcat RA does not contain a crm_master
>> call, so this might be the cause.
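>>
>> A rough sketch (hypothetical helper names, arbitrary example scores)
>> of what a crm_master call in the RA's monitor action could look like:
>>
>>     if tomcat_is_master; then
>>         crm_master -l reboot -v 100   # prefer this node for promotion
>>         rc=$OCF_RUNNING_MASTER
>>     elif tomcat_is_running; then
>>         crm_master -l reboot -v 10
>>         rc=$OCF_SUCCESS
>>     else
>>         crm_master -l reboot -D       # drop this node's promotion score
>>         rc=$OCF_NOT_RUNNING
>>     fi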
>>
>> Best regards
>> Fabian
>>
>> On 11/26/2012 01:39 AM, Andrew Beekhof wrote:
>> > On Fri, Nov 23, 2012 at 3:08 AM, Rafał Radecki
>> > <[email protected]> wrote:
>> >> Hi all.
>> >>
>> >> I am currently making a Pacemaker/Corosync cluster which serves
>> >> Tomcat resource in master/slave mode. This Tomcat serves Solr
>> >> java application. My configuration is:
>> >>
>> >> node storage1
>> >> node storage2
>> >>
>> >> primitive TSVIP ocf:heartbeat:IPaddr2 \
>> >>     params ip="192.168.100.204" cidr_netmask="32" nic="eth0" \
>> >>     op monitor interval="30s"
>> >>
>> >> primitive TomcatSolr ocf:polskapresse:tomcat6 \
>> >>     op start interval="0" timeout="60" on-fail="stop" \
>> >>     op stop interval="0" timeout="60" on-fail="stop" \
>> >>     op monitor interval="31" role="Slave" timeout="60" on-fail="stop" \
>> >>     op monitor interval="30" role="Master" timeout="60" on-fail="stop"
>> >>
>> >> ms TomcatSolrClone TomcatSolr \
>> >>     meta master-max="1" master-node-max="1" clone-max="2" \
>> >>     clone-node-max="1" notify="false" globally-unique="true" \
>> >>     ordered="false" target-role="Master"
>> >>
>> >> colocation TomcatSolrClone_with_TSVIP inf: TomcatSolrClone:Master TSVIP:Started
>> >> order TomcatSolrClone_after_TSVIP inf: TSVIP:start TomcatSolrClone:promote
>> >>
>> >> property $id="cib-bootstrap-options" \
>> >>     dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>> >>     cluster-infrastructure="openais" \
>> >>     expected-quorum-votes="4" \
>> >>     stonith-enabled="false" \
>> >>     no-quorum-policy="ignore" \
>> >>     symmetric-cluster="true" \
>> >>     default-resource-stickiness="1" \
>> >>     last-lrm-refresh="1353594420"
>> >> rsc_defaults $id="rsc-options" \
>> >>     resource-stickiness="10" \
>> >>     migration-threshold="1000000"
>> >>
>> >> So logically I have:
>> >> - one node with TSVIP and the TomcatSolrClone Master;
>> >> - one node with the TomcatSolrClone Slave.
>> >> I have set up replication between Solr on the TomcatSolrClone
>> >> Master and Slave and written an OCF agent (attached). A few
>> >> moments ago, when I killed the Slave resource with 'pkill java',
>> >> the resource was restarted on the same node despite the fact that
>> >> the monitor action returned $OCF_ERR_GENERIC and I have
>> >> on-fail="stop" set for TomcatSolr (I have also tried "block" with
>> >> the same effect).
>> >>
>> >> Then I have added a migration threshold:
>> >>
>> >> ms TomcatSolrClone TomcatSolr \
>> >>     meta master-max="1" master-node-max="1" clone-max="2" \
>> >>     clone-node-max="1" notify="false" globally-unique="true" \
>> >>     ordered="false" target-role="Started" \
>> >>     params migration-threshold="1"
>> >>
>> >> and now when I kill java on the Slave it does not start anymore
>> >> (the Master is fine). But when I then kill java on the Master (so
>> >> no resource is running on either node) everything gets restarted
>> >> by the cluster, and the Master and Slave are running afterwards.
>> >> How can I stop this restart when both the Slave and the Master
>> >> fail?
>> >
>> > Could you file a bug (https://bugs.clusterlabs.org) for this and
>> > include a crm_report for your test case? It's likely that you've
>> > hit a bug.
>> >
>> >>
>> >> Best regards, Rafal.
>> >>
>> >> _______________________________________________ Linux-HA mailing
>> >> list [email protected]
>> >> http://lists.linux-ha.org/mailman/listinfo/linux-ha See also:
>> >> http://linux-ha.org/ReportingProblems
>>
