Hi Ulrich/Lars,
The current application supports only a Master/Slave configuration, and
there can be one master and one slave process per group.
Each pair (Master/Slave) processes a certain set of data.
It would be great if you could confirm whether I can go ahead with the
STONITH feature.
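For reference, here is a rough sketch of what I understand a STONITH setup
would look like in the crm shell. The plugin choice (external/ipmi), node
names, addresses, and credentials below are placeholders for illustration,
not values from our setup:

primitive st-node1 stonith:external/ipmi \
    params hostname="node1" ipaddr="192.168.1.101" userid="admin" passwd="secret"
primitive st-node2 stonith:external/ipmi \
    params hostname="node2" ipaddr="192.168.1.102" userid="admin" passwd="secret"
location l-st-node1 st-node1 -inf: node1
location l-st-node2 st-node2 -inf: node2
property stonith-enabled="true"

The location constraints keep each fencing resource off the node it is
meant to fence, so a node never has to shoot itself.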
BR,
Mark
On Wed, Jul 10, 2013 at 11:45 AM, Ulrich Windl <
[email protected]> wrote:
> Hi!
>
> Just because I'd like to know: Why aren't you running a 50-node cluster (or
> clusters larger than two nodes)?
>
> Regards,
> Ulrich
>
> >>> John M <[email protected]> schrieb am 09.07.2013 um 16:36 in
> Nachricht
> <CACNodvbjzDLzPJzuf96ycnNku=TxgzBck0s_LSNACJVSHAZ7=a...@mail.gmail.com>:
> > Hi All,
> >
> > Finally I have installed pacemaker-1.1.8 and corosync 1.4.1 (On RHEL 5.8)
> >
> > I have around 50 servers running a custom master/slave application.
> > That is, 25 two-node clusters.
> >
> > Now I want to know:
> > 1. Can I use a node that is part of another cluster as a quorum node?
> > 2. Can I configure a standalone quorum node that can manage all 25 clusters?
> >
> > Thanks in advance.
> >
> > BR,
> > Mark
> >
> > On Fri, Jun 14, 2013 at 11:57 AM, Ulrich Windl <
> > [email protected]> wrote:
> >
> >> >>> John M <[email protected]> schrieb am 13.06.2013 um 21:52 in
> >> Nachricht
> >> <cacnodvzhri8cthvxo4lbagwjgxwytjjrsuvb9yi-anyh9p+...@mail.gmail.com>:
> >> > Heartbeat is not restarting the failed process. In my
> >> > configuration default-resource-failure-stickiness is set to -INFINITY
> >>
> >> ??? Zero is absolutely sufficient; what are you trying to achieve?
> >>
> >> > and resource_failure_stickiness is set to -INFINITY at resource level.
> >>
> >> (again)
> >>
> >> > If a Master resource fails, the Slave becomes Master, and if a Slave
> >> > resource fails, its status becomes "stopped". In both cases the failed
> >> > resource stays as it is.
> >>
> >> Logs, maybe...
> >>
> >> > BR,
> >> > Mark
> >> >
> >> >
> >> > On Thu, Jun 13, 2013 at 11:54 AM, Ulrich Windl <
> >> > [email protected]> wrote:
> >> >
> >> >> >>> John M <[email protected]> schrieb am 12.06.2013 um 18:49 in
> >> >> Nachricht
> >> >> <CACNodvYynv=isiGmuswbNqgVmCusM--6w0yC03eCbCG=kyh...@mail.gmail.com
> >:
> >> >> > Dear All,
> >> >> >
> >> >> > I will try to setup pacemaker cluster in the coming weeks. Before
> >> that
> >> >> I
> >> >> > have to complete the configuration using heartbeat 2.1.4.
> >> >> > I would really appreciate if you could suggest the configuration
> for
> >> >> > Master/Slave scenario mentioned in my previous mail.
> >> >>
> >> >> Doesn't look very complicated for a two-node scenario:
> >> >> ms cln_test prm_test \
> >> >> meta clone-max="2" globally-unique="false" notify="true"
> >> >> clone-node-max="1" master-node-max="1" master-max="1"
> >> >>
> >> >> What is your problem?
> >> >>
> >> >> Regards,
> >> >> Ulrich
> >> >>
> >> >> >
> >> >> > Thanks in advance.
> >> >> >
> >> >> > BR,
> >> >> > Mark
> >> >> >
> >> >> > On Tuesday, June 11, 2013, Lars Marowsky-Bree <[email protected]>
> wrote:
> >> >> >> On 2013-06-11T15:05:11, John M <[email protected]> wrote:
> >> >> >>
> >> >> >>> Unfortunately I cannot install pacemaker :(
> >> >> >>>
> >> >> >>> I just installed heartbeat 2.1.4 and in crm_mon I am getting
> >> >> >>> Master/Slave status.
> >> >> >>
> >> >> >> You seriously need to upgrade. Heartbeat 2.1.4 is ages old and has
> >> >> >> many, many known bugs. You won't be able to get community support
> >> >> >> for that version any more.
> >> >> >>
> >> >> >>
> >> >> >> Regards,
> >> >> >> Lars
> >> >> >>
> >> >> >> --
> >> >> >> Architect Storage/HA
> >> >> >> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
> >> >> >> Imendörffer, HRB 21284 (AG Nürnberg)
> >> >> >> "Experience is the name everyone gives to their mistakes." -- Oscar
> >> >> >> Wilde
> >> >> >>
> >> >> >> _______________________________________________
> >> >> >> Linux-HA mailing list
> >> >> >> [email protected]
> >> >> >> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> >> >> >> See also: http://linux-ha.org/ReportingProblems
> >> >> >>
> >> >>
> >> >>
> >> >>
> >>
> >>
> >>
>
>
>