On 09/24/2010 02:31 PM, Jason Stelzer wrote:
I'm looking to create a cluster of qpidd servers that share a
synchronized state, so that many agents can listen for and consume
events that are enqueued by another system.

So, to borrow from concepts I've used to cluster MySQL into
writer/reader nodes, if I were to create a two-node cluster I would
imagine it working something like this:

Node A would be the enqueue node. System A would send events to Node A,
where they would be enqueued. Essentially, this node is write-only.

Node B would be the dequeue node. The clustering would take care of
message propagation, and all listeners in System B would dequeue
messages out of the cluster via Node B.
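
In other words, something like this, where both brokers join the same
cluster and each half of the system simply connects to a different
member (host names and options here are illustrative, not my actual
setup):

# on node A -- System A (the producer side) connects to nodeA:5672
qpidd --cluster-name=TEST_CLUSTER --port 5672

# on node B -- the System B listeners connect to nodeB:5672
qpidd --cluster-name=TEST_CLUSTER --port 5672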

My current problem is that I'm not sure which step I have done
incorrectly. I'm fairly sure I've done everything the wiki describes,
but it could be a misconfiguration of a lower-level service, since I
am not yet very experienced with the underlying corosync/heartbeat
software.

Some things I've tripped up on in the past, in case any of these help:

* does the firewall allow UDP on the desired port?
* is SELinux in use?
* is the bind address correct for the network mask?
* is multicast enabled?

The security setup was another one, but you've explicitly ruled that out.
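
A few quick ways to check each of those (the port below is only the
common totem default; use whatever your corosync.conf actually says):

/sbin/iptables -L -n | grep -i udp   # is the totem UDP port (commonly 5404/5405) allowed?
getenforce                           # is SELinux enforcing or permissive?
/sbin/route -n                       # does bindnetaddr match the local network/netmask?
ip maddr show                        # is the interface subscribed to the multicast group?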



On Fri, Sep 24, 2010 at 9:25 AM, Lahiru Gunathilake <[email protected]> wrote:
Hi Jason,

What are you trying to achieve with clustering? Do you want to replicate
the Qpid state (i.e. your message store) among the cluster nodes? I'm
asking for my own clarification, because I'm also trying to find an
easier solution for Qpid clustering.

Lahiru

On Fri, Sep 24, 2010 at 6:46 PM, Jason Stelzer <[email protected]> wrote:

Hi, I'm reaching out for a little help and pointers with regard to
qpid clustering.

I'm coming into this with nearly zero qpid experience, so I will try to
be as complete as possible. I am attempting to set up a qpid cluster
so that we can scale our qpid clients out across multiple qpid
servers. Is it best practice to have a primary enqueue node and
dequeue from the secondary nodes in the cluster?

My understanding is that replication is geared more toward fault
tolerance and disaster recovery, while clustering is geared toward
supporting large numbers of concurrent clients.

I am currently working on getting qpid clustering working as described
here:
https://cwiki.apache.org/qpid/starting-a-cluster.html

I am running qpid 0.5 on Fedora 12 and have the following RPMs installed:
qpidc-0.5.829175-2.fc12.x86_64
qpidd-0.5.829175-2.fc12.x86_64
qpidd-cluster-0.5.829175-2.fc12.x86_64

When I start qpidd and pass the --cluster-name=TEST_CLUSTER option,
qpidd aborts with the following error:
Starting Qpid AMQP daemon: Daemon startup failed: Cannot join CPG
group DEV_CLUSTER: try again (6)
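
As far as I understand it, the 'try again' error means the CPG join
could not be completed at that moment, which usually points at corosync
itself (e.g. the totem ring not yet being formed) rather than at qpidd,
so the corosync side can be checked directly with, for example:

corosync-cfgtool -s              # does totem report an active ring with no faults?
ls -l /etc/corosync/uidgid.d/    # is the qpid uid/gid file actually in place?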

I believe I have corosync and pacemaker working.

If I start corosync, it takes a bit of time before the crm commands
work, but once everything spins up I don't see any warnings when I
run:

crm_verify -L
(no output/warnings)

crm configure show
node edisondev3
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-ee19d8e83c2a5d45988f1cee36d334a631d84fc7" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        stonith-enable="false"



I've double-checked the bindnetaddr in corosync.conf. It lines up
with the wiki article and agrees with the output of /sbin/route.
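
For reference, the relevant stanza looks roughly like this (the
addresses below are placeholders rather than my real values):

totem {
        version: 2
        secauth: off
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0     # network address, not the host's own IP
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}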

I double-checked my uidgid.d/qpid file. Initially I had the uid wrong
and was getting a security error when I started qpidd. Now that I have
the correct uid/gid, I am seeing the 'try again' error above.
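
For reference, the file now just contains the standard stanza (assuming
the daemon runs as user and group qpidd, which is what the Fedora
package sets up):

uidgid {
        uid: qpidd
        gid: qpidd
}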

Any tips would be appreciated.

--
J.

---------------------------------------------------------------------
Apache Qpid - AMQP Messaging Implementation
Project:      http://qpid.apache.org
Use/Interact: mailto:[email protected]
