Does that mean that the only difference between the new async-replication
module and the active-passive cluster is that async-replication will accept
connections to both (primary and backup) brokers, while the active-passive
cluster will accept connections only to the primary broker? Both will depend
on CMAN for recovery. Is that correct?

Thanks,
Terance.

On Mon, Oct 22, 2012 at 11:30 PM, Alan Conway <[email protected]> wrote:

> On Mon, 2012-10-22 at 11:15 +0530, Terance Dias wrote:
> > Thanks a lot for your reply Alan.
> >
> > I just had a few questions about the new async-replication module.
> > 1. With the new async-replication module, will the backup broker reject
> > connections like in the active-passive cluster?
>
> No, if you set up replication with qpid-ha, both brokers remain fully
> functional.
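>
> For concreteness, a minimal sketch of that kind of setup (the host and
> queue names here are illustrative, not from this thread; check the exact
> tool options against your release):
>
>   # On the source broker's host: create the queue to be replicated.
>   qpid-config add queue myqueue
>
>   # On the backup broker's host: start replicating that queue from the source.
>   qpid-ha replicate myqueue sourcehost:5672
>
>   # Both brokers keep accepting ordinary client connections, e.g.:
>   qpid-send -b sourcehost:5672 -a myqueue --content-string "to the source"
>   qpid-send -b backuphost:5672 -a "otherq; {create: always}" --content-string "to the backup"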
>
> > 2. Is there a manual step involved in promoting the backup broker to
> > primary when the primary broker becomes unreachable?
>
> When CMAN detects that the primary is down it calls the
> script /etc/init.d/qpidd-primary with the argument "start" on the host
> of the new primary. You could add other start-up work to this script.
> This is a temporary workaround till we get things fully automated.
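>
> For what it's worth, the same promotion can be driven by hand for testing,
> assuming the stock script from the qpid packaging is installed and your
> release's qpid-ha has a "status" sub-command:
>
>   /etc/init.d/qpidd-primary start   # run on the host to be promoted
>   qpid-ha status                    # should now report the primary/active role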
>
> >
> > Thanks,
> > Terance.
> >
> > On Fri, Oct 19, 2012 at 7:14 PM, Alan Conway <[email protected]> wrote:
> >
> > > On Fri, 2012-10-19 at 11:09 +0530, Terance Dias wrote:
> > > > I tried the active-passive cluster but it turns out that for this
> > > > cluster to work all the nodes are required to be in the same network
> > > > (since it depends on the cman service, which only works (efficiently)
> > > > for nodes in the same subnet). This doesn't work for us since we need
> > > > the backup node to be present in another network. Is there some other
> > > > way we can achieve this and solve the fail-over problem mentioned
> > > > earlier in this thread?
> > >
> > > You can use the async-replication module to replicate to an off-site
> > > node, but it will soon (in 0.20) be replaced by an improved scheme using
> > > the same replication code as the active-passive cluster.
> > >
> > > You can experiment with the new scheme now, using
> > >  qpid-ha replicate <queue>  <from-node>
> > >
> > > The thing that's missing is that if the receiving end is a cluster, and
> > > the primary fails, the replication link will not automatically be
> > > re-established on the new primary. One way to work around this is to
> > > write a custom qpid-primary script that re-establishes links when a
> > > broker is promoted to primary.
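> > >
> > > A rough sketch of what such a script might do on promotion (the queue
> > > and host names are placeholders, and the qpid-ha sub-commands should be
> > > checked against your release):
> > >
> > >   #!/bin/sh
> > >   # Hypothetical extra "start" work for a custom qpidd-primary script:
> > >   # promote this broker, then re-create the off-site replication links.
> > >   case "$1" in
> > >     start)
> > >       qpid-ha promote
> > >       qpid-ha replicate myqueue offsite-host:5672
> > >       ;;
> > >   esac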
> > >
> > >
> > >
> > > >
> > > > Thanks,
> > > > Terance.
> > > >
> > > > On Fri, Oct 5, 2012 at 3:33 PM, Gordon Sim <[email protected]> wrote:
> > > >
> > > > > On 10/05/2012 10:40 AM, Terance Dias wrote:
> > > > >
> > > > >> Thanks a lot for your reply Gordon.
> > > > >>
> > > > >> We are using the replication described at
> > > > >> http://qpid.apache.org/books/trunk/AMQP-Messaging-Broker-CPP-Book/html/queue-state-replication.html.
> > > > >> We are running on Ubuntu.
> > > > >>
> > > > >
> > > > > Ok, that certainly seems to have rgmanager:
> > > > > http://manpages.ubuntu.com/manpages/precise/man8/rgmanager.8.html
> > > > >
> > > > >
> > > > >> We decided to go with this kind of replication because it allowed
> > > > >> an HA solution across networks. Does the new mechanism work across
> > > > >> networks or are the nodes required to be in the same network?
> > > > >>
> > > > >
> > > > > No, the nodes are not required to be in the same network. The
> > > > > replication in the new HA scheme is done over AMQP (over
> > > > > inter-broker, aka 'federation', links).
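> > > > >
> > > > > A quick way to confirm that, once a queue is being replicated, is to
> > > > > list the inter-broker links on the receiving broker (the host name
> > > > > below is just a placeholder):
> > > > >
> > > > >   qpid-route link list backuphost:5672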
> > > > >
> > > > >
> > > > >> Also, I'm new to Linux, so I just wanted to know how rgmanager
> > > > >> works. Does it run on a separate node and monitor the cluster
> > > > >> nodes? In that case, can it be a single point of failure?
> > > > >>
> > > > >
> > > > > No, I believe it runs on all the nodes participating in the
> > > > > cluster. These daemons all communicate such that they can determine
> > > > > when one node is 'down', shut it out of the system and attempt to
> > > > > recover it.
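> > > > >
> > > > > You can see that view from any node with the stock cman/rgmanager
> > > > > tooling, assuming the standard packages are installed:
> > > > >
> > > > >   clustat            # lists member nodes and managed-service states
> > > > >   cman_tool nodes    # lists cluster membership as cman sees it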
> > > > >
> > > > >
> > > > >> Also, please let me know if you know whether anybody has deployed
> > > > >> the new cluster mechanism in production.
> > > > >>
> > > > >
> > > > > No, I don't think so yet, though it is undergoing testing right now
> > > > > by an actual project.
> > > > >
> > > > >
> > > > >
> > > > >
> > >
