Alan - More than happy to knock out the distributed system test and give it
back based on what you did for the queues; it may be a few days, but I should
have some time later this week. Also, we have loads of documentation on how
this thing works, with examples. If there is a good way to contribute them
back, let me know.
As far as the management aspect, what we have been doing is creating 2
networks: a public one and a management one, with separate routers. We then
send broadcast events (config changes) on the management network, where
daemons pick up the changes, alter the config and then restart the routers.
We actually like the broadcast side, in particular keeping management and
statistics traffic off the production backbone. However, we don't like the
extra daemon and the restart at all, but when we started, the management
stuff didn't exist.

That being said, I think both an AMQP and a RESTful interface to the
management API would be great, as that would allow for better integration
across many tools; a python wrapper would be my preference. When I think of
managing the nodes, I tend to lean toward CRUD-based entity-type operations,
and I could see a scenario where there is a base type with a CRUD interface
and each of the management entities extending it. It would make it very easy
to build dynamic scripts and tools that way, working through all the entities
based on state-change status.

Also, there needs to be a way to schedule/quiesce changes, like we do with
HTTP proxy plugins and network devices. What I am thinking is that we can
continue to route over existing connections, but all new connections go to
the new link/address.

Jack

Jack Gibson
Global Platform Services/eBay Inc.

On 8/13/14, 3:25 AM, "Alan Conway" <[email protected]> wrote:

>[Justin: see below for a restarted packaging argument]
>
>I am interested in this topic (haha). There is a simple distributed work
>queue test in dispatch/tests/system_tests_broker.py, and I have long
>intended to add a distributed topic test. I need a bit of time to read
>what you have done in detail and think about it; give me a couple of
>days.
>
>Meantime: it may be of interest that we have (incomplete) support for
>remote configuration of dispatch, i.e.
instead of distributing
>many .conf files to many hosts, give each host a generic .conf that allows
>management access, then add configuration using an AMQP management
>client.
>
>The new configuration entities reflect the sections in the config file.
>Presently there are 2 management addresses: $management2 for the new
>config stuff, and $management for the stuff used by qdstat. This is
>temporary; I will integrate everything under $management. You can create,
>query and read on $management2; update and delete aren't done yet. Check
>dispatch/tests/system_tests_broker.py for an example of use.
>
>The management schema has some documentation and is at
> dispatch/python/qpid_dispatch_internal/management/qdrouterd.json
>The qdrouterd_conf man file is generated from the schema and may be
>easier to read.
>
>No tool support yet, but there is a python library which makes it fairly
>easy to knit your own. See the examples in the system tests:
>
>dispatch/tests/system_tests_management.py
>dispatch/python/qpid_dispatch_internal/management
>
>NOTE: The python stuff is not installed in the normal python path but
>under lib or lib64, which is a minor pain.
>
>I should be freeing up some time soon to finish this off, and real user
>feedback would be invaluable - send email or raise JIRAs. In particular,
>ideas about what an ideal management tool would look like would be very
>useful.
>
>[starting an argument]
>Justin: I submit this as evidence that the management library belongs on
>the python path. Yes, we will provide our own command line tools, but
>lots of people will want to write their own. Attempting to hide the
>goodies under lib is not going to fool anyone; it just makes it awkward,
>ugly and non-standard.
People will simply read the bin/qdstat script and copy
>this hideous junk into their own scripts:
>
>home = os.environ.get("QPID_DISPATCH_HOME")
>if not home:
>    home = os.path.join(os.path.dirname(os.path.dirname(__file__)),
>                        'lib', 'qpid-dispatch')
>sys.path.insert(0, os.path.join(home, "python"))
>
>That doesn't improve anyone's life, and it certainly won't save us any
>support calls.
>
>We can call the package qpid_dispatch_for_consenting_adults_only or
>qpid_dispatch_not_even_a_little_bit_supported, but it should be on the
>python path!!
>
>Cheers,
>Alan.
>
>On Wed, 2014-08-13 at 03:20 +0000, Gibson, Jack wrote:
>> Users/Devs -
>>
>> We have been kicking the tires on the dispatch router for a while and
>> are building a number of working samples that we can use to build
>> upon. Currently, we have been working on using dispatch with
>> distributed/durable topics. The following basic configuration works
>> for both a standalone and an interior router network with 1 or more
>> brokers (obviously with some changes); we have it working if anyone
>> is interested. That being said, some of it isn't making much sense;
>> i.e. not sure it should work…
>>
>> First, shouldn't there be a fixed address that maps to amq.topic?
>> Passing the exchange and routing-key seems to mix and match pre-1.0
>> and 1.0 semantics. Is there another way to do this that doesn't
>> expose the broker internals?
>>
>> Second, creating a fixed address for every durable subscription at
>> the router is onerous and just feels wrong (think of the use case
>> where there are thousands of subscribers).
>>
>> Also, we noticed that the router creates a temporary subscription
>> queue on the broker. Should it? Why?
>>
>> queue                                    dur  autoDel  excl  msg  msgIn  msgOut  bytes  bytesIn  bytesOut  cons  bind
>> =====================================================================================================================
>> Qpid.Dispatch.Router.router01_amq.topic  Y    Y              0    0      0       0      0        0         1     2
>>
>> Last, the phase settings in the addresses aren't well documented as
>> to the impact of changing them. Any insights into them would be
>> great.
>>
>> Thanks for the help,
>>
>> Jack
>>
>> Below is our current setup using proton 0.8, dispatch 0.3 (trunk)
>> and qpid-broker c++ 0.28.
>>
>> ## Set up the broker as such:
>> ## qpid-config add queue myBarQ
>> ## qpid-config add queue yourBarQ
>> ## qpid-config bind amq.topic myBarQ *.bar
>> ## qpid-config bind amq.topic yourBarQ *.bar
>> ##
>> ## spout --connection-options {protocol:amqp1.0} -b 0.0.0.0:5672 -c 1000 amq.topic/foo.bar
>> ## recv amqp://0.0.0.0:5672/myBarQ
>> ## recv amqp://0.0.0.0:5672/yourBarQ
>>
>> container {
>>     worker-threads: 4
>>     container-name: Qpid.Dispatch.Router.router01
>> }
>>
>> ssl-profile {
>>     name: ssl-profile-name
>> }
>>
>> listener {
>>     addr: 0.0.0.0
>>     port: 5672
>>     sasl-mechanisms: ANONYMOUS
>>     max-frame-size: 16384
>> }
>>
>> connector {
>>     name: broker1
>>     role: on-demand
>>     addr: 10.64.242.218
>>     port: 5672
>>     sasl-mechanisms: ANONYMOUS
>> }
>>
>> router {
>>     mode: standalone
>>     router-id: Mammut.router01
>> }
>>
>> waypoint {
>>     address: amq.topic
>>     out-phase: 1
>>     in-phase: 0
>>     connector: broker1
>> }
>>
>> waypoint {
>>     address: myBarQ
>>     out-phase: 1
>>     in-phase: 0
>>     connector: broker1
>> }
>>
>> waypoint {
>>     address: yourBarQ
>>     out-phase: 1
>>     in-phase: 0
>>     connector: broker1
>> }
>>
>> fixed-address {
>>     prefix: yourBarQ
>>     phase: 0
>>     fanout: single
>>     bias: spread
>> }
>>
>> fixed-address {
>>     prefix: yourBarQ
>>     phase: 1
>>     fanout: single
>>     bias: closest
>> }
>>
>> fixed-address {
>>     prefix: myBarQ
>>     phase: 0
>>     fanout: single
>>     bias: spread
>> }
>>
>> fixed-address {
>>     prefix: myBarQ
>>     phase: 1
>>     fanout: single
>>     bias: closest
>> }
>>
>> Jack Gibson
>>
>> Global Platform Services
>> www.ebayinc.com
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: [email protected]
>For additional commands, e-mail: [email protected]
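The CRUD base-entity pattern Jack describes above (a base type with a CRUD
interface that each management entity extends) could be sketched roughly as
follows. This is only an illustration of the idea: all names here (Entity,
Connector, InMemoryClient) are hypothetical and are not part of the dispatch
codebase; a real version would talk to $management/$management2 over AMQP or
REST instead of an in-memory store.

```python
# Sketch of the CRUD-style base entity idea from the thread.
# InMemoryClient stands in for a real AMQP/REST management client.

class InMemoryClient:
    """Toy backend: maps (entity_type, name) -> attribute dict."""
    def __init__(self):
        self.store = {}

    def create(self, entity_type, name, attributes):
        self.store[(entity_type, name)] = dict(attributes)

    def read(self, entity_type, name):
        return dict(self.store[(entity_type, name)])

    def update(self, entity_type, name, changes):
        self.store[(entity_type, name)].update(changes)

    def delete(self, entity_type, name):
        del self.store[(entity_type, name)]


class Entity:
    """Base type with a CRUD interface; management entities extend it."""
    entity_type = None  # overridden by each subclass

    def __init__(self, client, name, **attributes):
        self.client = client
        self.name = name
        self.attributes = attributes

    def create(self):
        self.client.create(self.entity_type, self.name, self.attributes)

    def read(self):
        self.attributes = self.client.read(self.entity_type, self.name)
        return self.attributes

    def update(self, **changes):
        self.attributes.update(changes)
        self.client.update(self.entity_type, self.name, changes)

    def delete(self):
        self.client.delete(self.entity_type, self.name)


class Connector(Entity):
    """One entity per config-file section, e.g. the connector section."""
    entity_type = "connector"


if __name__ == "__main__":
    client = InMemoryClient()
    broker1 = Connector(client, "broker1", addr="10.64.242.218", port=5672)
    broker1.create()
    broker1.update(port=5673)
    print(broker1.read())  # {'addr': '10.64.242.218', 'port': 5673}
```

A tool built on this shape could walk every entity type generically (create,
diff, update), which is what makes the dynamic scripts Jack mentions easy.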
