Hmm, okay. If I want to use NATS then I have to upgrade my Puppet to version 4 and Ruby to 1.9.3. But RHEL 5 doesn't have Ruby 1.9.3, and I want to use the same setup across all the OSes we run: Linux, Windows, and Solaris.
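For reference, the smaller-group approach R.I.Pienaar suggests below uses the standard mco client batching options; the batch size and sleep interval here are illustrative values, not a recommendation:

```shell
# Illustrative only: process the node list in batches of 200, pausing 2s
# between batches, instead of addressing all 5745 nodes in one request.
# /tmp/nodes.txt is a made-up path for a file with one identity per line.
mco rpc rpcutil ping --nodes /tmp/nodes.txt --batch 200 --batch-sleep 2 --dt 120
```

Each batch is a separate publish cycle, so the per-node direct messages hit the brokers in smaller bursts rather than all at once.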
Regards,
Ravi

On Monday, March 6, 2017 at 11:56:48 AM UTC+5:30, R.I.Pienaar wrote:
>
> Hey
>
> No, there is no proxy node concept.
>
> Pretty sure such huge --nodes requests are just overloading the ActiveMQ
> persistence systems, as you suggest, and there are probably long spirals
> of garbage collection.
>
> Using --batch to make smaller groups will be better. I have no idea if
> NATS as found in choria.io is going to do better, but my gut feel is yes,
> as it has no persistence - which mco doesn't even need in the first place.
>
> ---
> R.I.Pienaar
>
> On 6 Mar 2017, at 05:04, [email protected] wrote:
>
> I agree people will make mistakes and that could affect all nodes. But
> access to this mcollective is very limited in my environment, and I have
> to manage all the servers in one place.
>
> Today I attempted mco rpc rpcutil ping against 5745 servers using --nodes
> with --dt=120, after changing the configuration to 1x topic and 2x queues,
> and got the result below:
>
> Finished processing 2481 / 5745 hosts in 133708.14 ms
>
> After that, many of the servers on the collectives were not responding. I
> left it for a few minutes, and when I then tried to ping a few servers
> they responded fine. Not sure if it is due to the queues or the data
> temporarily written to the KahaDB persistence adapter. I may need to do
> more tuning to overcome this issue, but I am not sure what tuning I am
> still missing.
>
> Is there a way to configure mco console servers for each remote site, with
> one central mcollective console server on top of each remote site's
> console server? If such a setup is possible I can try it, and I don't
> think I would hit this issue that way, since the server count per remote
> site would be smaller. I might be looking for something like the way
> SaltStack works with a multiple-master setup.
>
> Regards,
> Ravi
>
> On Saturday, March 4, 2017 at 2:36:32 PM UTC+5:30, R.I.Pienaar wrote:
>>
>> No, I don't know RabbitMQ.
>>
>> I am still curious why you want to build this one giant network like
>> this. It's for sure a bad idea.
>>
>> People can't think about that many nodes and will make mistakes that
>> could affect all nodes. Technically, as you're finding, you are up
>> against what's technically possible. I have personally given up on
>> ActiveMQ in favour of NATS because I know ActiveMQ is a lot of trouble.
>>
>> I don't know if NATS will scale to your needs, as I have not used or
>> heard from people with that many Choria nodes, but for sure such a big
>> ActiveMQ site, all as one giant collective, trying to address so many
>> nodes, is a mistake.
>>
>> ---
>> R.I.Pienaar
>>
>> On 4 Mar 2017, at 10:01, [email protected] wrote:
>>
>> By the way, any idea about configuring federation in RabbitMQ for this
>> case? I attempted that as well, but I am not getting responses back from
>> all the servers when I scale up to multiple remote sites.
>>
>> Regards,
>> Ravi
>>
>> On Saturday, March 4, 2017 at 2:29:16 PM UTC+5:30, [email protected]
>> wrote:
>>>
>>> Ok, sure, I will try your recommendation. Thanks for your help!
>>>
>>> Regards,
>>> Ravi
>>>
>>> On Saturday, March 4, 2017 at 2:22:26 PM UTC+5:30, R.I.Pienaar wrote:
>>>>
>>>> Between the ActiveMQ instances you have networkConnectors, one for
>>>> topics and one for queues.
>>>>
>>>> I would add a 2nd networkConnector in the XML for queues, thus 1 x
>>>> topic and 2 x queue for every remote broker.
>>>>
>>>> You should sync times; that's not optional.
>>>>
>>>> ---
>>>> R.I.Pienaar
>>>>
>>>> On 4 Mar 2017, at 09:34, [email protected] wrote:
>>>>
>>>> Hi,
>>>>
>>>> Sorry, I didn't get you exactly. I already have load balancing on all
>>>> remote sites and have only one broker on central. Do you want me to
>>>> try load balancing for the central one also?
>>>>
>>>> Also, I just remembered that I have set the TTL to 300 to avoid time
>>>> drift issues for some of the servers.
>>>> Not sure if this is causing the issue, and also not sure what limit
>>>> is being reached. Can you check my config and suggest any limit I
>>>> could tune to overcome this issue?
>>>>
>>>> Regards,
>>>> Ravi
>>>>
>>>> On Saturday, March 4, 2017 at 1:55:46 PM UTC+5:30, R.I.Pienaar wrote:
>>>>>
>>>>> OK, and with --nodes it must send individual messages; there's no
>>>>> other way.
>>>>>
>>>>> This would put significant pressure on your broker-to-broker
>>>>> connections. Either ActiveMQ is marking some remote brokers as
>>>>> 'slow', so they must back off and wait before they can get messages
>>>>> again, or some other limits are being reached.
>>>>>
>>>>> You can perhaps try making multiple connections from your remote
>>>>> sites to the hub - even in a single ActiveMQ, 2 x broker-to-broker
>>>>> connections to the same upstream will result in it load balancing
>>>>> across those TCP connections.
>>>>>
>>>>> Other than that, I am not too sure what we can try - what is the
>>>>> motivation for such huge requests with --nodes?
>>>>>
>>>>> On Sat, Mar 4, 2017, at 09:11, [email protected] wrote:
>>>>> > Yeah, I am also thinking that I have to do some optimisation, but
>>>>> > I have tried a lot, which didn't help. If you could help me with
>>>>> > optimisation, that would be great.
>>>>> >
>>>>> > I put my client in debug mode and see it publishing 1 message per
>>>>> > node, since I am using direct_addressing in both the client and
>>>>> > server configs.
>>>>> >
>>>>> > `publish' Sending a direct message to ActiveMQ target
>>>>> > '/queue/mcollective.nodes' with headers '{"mc_identity"=>"xxxxxxxx",
>>>>> > "timestamp"=>"1488614346000",
>>>>> > "reply-to"=>"/queue/mcollective.reply.yyyyyyyyyy_16102",
>>>>> > "expires"=>"1488614656000"}'
>>>>> >
>>>>> > Regards,
>>>>> > Ravi
>>>>> >
>>>>> > On Saturday, March 4, 2017 at 1:12:29 PM UTC+5:30, R.I.Pienaar
>>>>> wrote:
>>>>> > >
>>>>> > > hey,
>>>>> > >
>>>>> > > OK, I see.
>>>>> > >
>>>>> > > So the technical difference here is that without --nodes,
>>>>> > > mcollective publishes one message and relies on the broker to
>>>>> > > broadcast it, but with --nodes it has to publish individual
>>>>> > > messages, thus putting quite the strain on the middleware.
>>>>> > >
>>>>> > > I recall there being some optimisations around when it does and
>>>>> > > does not publish individual messages, but if you put your client
>>>>> > > into debug mode you'll either see it publishing 1 message per
>>>>> > > node in your nodes file, or something like "Sending a broadcast
>>>>> > > message to ActiveMQ".
>>>>> > >
>>>>> > > Which do you see?
>>>>> > >
>>>>> > > On Sat, Mar 4, 2017, at 08:01, [email protected] wrote:
>>>>> > > > I don't have any issues with any specific site until I do an
>>>>> > > > mco query for multiple nodes using --nodes.
>>>>> > > >
>>>>> > > > When I do an inventory I get the below:
>>>>> > > >
>>>>> > > > Collective              Nodes
>>>>> > > > ==========              =====
>>>>> > > > australia_mcollective    1950
>>>>> > > > london_mcollective       4120
>>>>> > > > europe_mcollective       1615
>>>>> > > > asia_mcollective         3456
>>>>> > > > china_mcollective        3794
>>>>> > > > japan_mcollective        7581
>>>>> > > > america_mcollective     12532
>>>>> > > > mcollective             35048
>>>>> > > >
>>>>> > > > Inventory details after I run an mco query against a list of
>>>>> > > > 1000+ servers using --nodes, of which 200 servers did not
>>>>> > > > respond:
>>>>> > > >
>>>>> > > > Collective              Nodes
>>>>> > > > ==========              =====
>>>>> > > > australia_mcollective     856
>>>>> > > > london_mcollective       2456
>>>>> > > > europe_mcollective       1615
>>>>> > > > asia_mcollective         2678
>>>>> > > > china_mcollective        1346
>>>>> > > > japan_mcollective        4156
>>>>> > > > america_mcollective      7653
>>>>> > > > mcollective             20760
>>>>> > > >
>>>>> > > > But if I do an mco query for particular servers using -I, I
>>>>> > > > don't get any issues, and even when I run against 1000+ nodes
>>>>> > > > using --nodes, if I get responses from all of them I also have
>>>>> > > > no issues. Only when some number of servers don't respond to
>>>>> > > > --nodes do I get such issues, and if I leave it for 5-10
>>>>> > > > minutes then everything comes back to normal.
>>>>> > > >
>>>>> > > > Yes, I am using subcollectives, and at each site I have load
>>>>> > > > balancing with multiple ActiveMQ brokers. A maximum of 2000
>>>>> > > > servers will connect to each broker at every site.
>>>>> > > >
>>>>> > > > I suspect I need to do some queue tuning, but I have tried
>>>>> > > > many things which didn't help.
>>>>> > > >
>>>>> > > > Regards,
>>>>> > > > Ravi
>>>>> > > >
>>>>> > > > On Saturday, March 4, 2017 at 12:19:57 PM UTC+5:30,
>>>>> R.I.Pienaar wrote:
>>>>> > > > >
>>>>> > > > > Ok.
>>>>> > > > > So do you find that from central to, say, Japan you have
>>>>> > > > > lots of issues? How is it in Japan from an mco client there
>>>>> > > > > communicating with just Japanese nodes?
>>>>> > > > >
>>>>> > > > > Are you aiming to build one giant 40k-node mcollective - not
>>>>> > > > > a good idea - or several smaller ones? Or using
>>>>> > > > > subcollectives?
>>>>> > > > >
>>>>> > > > > For sure, on many levels a single giant 40k-node setup will
>>>>> > > > > not work, but with what you show, a number of subcollectives
>>>>> > > > > etc. should work if it's a hub-and-spoke design.
>>>>> > > > >
>>>>> > > > > How many nodes per country? And just one ActiveMQ there?
>>>>> > > > >
>>>>> > > > > On 4 Mar 2017, at 07:39, [email protected] wrote:
>>>>> > > > >
>>>>> > > > > Sorry, yes, mco rpc rpcutil ping; I mentioned it wrongly.
>>>>> > > > >
>>>>> > > > > You can find my setup at
>>>>> > > > > https://awwapp.com/s/f0123542-5072-43dc-94a1-72f33fbbfcb1/
>>>>> > > > >
>>>>> > > > > And here is my central server's activemq.xml file. I have
>>>>> > > > > only given you a limited number of sites here; I have many
>>>>> > > > > more sites, and the total number of servers is 40000+ if you
>>>>> > > > > combine all sites. Also, I am using ActiveMQ version 5.14.3.
=================================================================================

<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://activemq.apache.org/schema/core
    http://activemq.apache.org/schema/core/activemq-core.xsd
    http://activemq.apache.org/camel/schema/spring
    http://activemq.apache.org/camel/schema/spring/camel-spring.xsd">

  <!-- Allows us to use system properties as variables in this configuration file -->
  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
      <value>file:${activemq.base}/conf/credentials.properties</value>
    </property>
  </bean>

  <!-- The <broker> element is used to configure the ActiveMQ broker. -->
  <broker xmlns="http://activemq.apache.org/schema/core"
      brokerName="central" networkConnectorStartAsync="true"
      dataDirectory="${activemq.base}/data"
      schedulePeriodForDestinationPurge="60000">

    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <persistenceAdapter>
      <kahaDB directory="${activemq.data}/kahadb"
          journalMaxFileLength="64mb" preallocationStrategy="zeros"
          indexCacheSize="30000" journalDiskSyncStrategy="never"
          maxAsyncJobs="30000" indexWriteBatchSize="30000" />
    </persistenceAdapter>

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry queue=">" producerFlowControl="false"
              gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000"/>
          <policyEntry topic=">" producerFlowControl="false"/>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <networkConnectors>

      <networkConnector
          name="america-topic"
          uri="static:(tcp://america:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="america_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="america-queue"
          uri="static:(tcp://america:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="america_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="canada-topic"
          uri="static:(tcp://canada:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="canada_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="canada-queue"
          uri="static:(tcp://canada:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="canada_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="london-topic"
          uri="static:(tcp://london:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="london_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="london-queue"
          uri="static:(tcp://london:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="london_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="australia-topics"
          uri="static:(tcp://australia:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="australia_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="australia-queue"
          uri="static:(tcp://australia:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="australia_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="europe-topic"
          uri="static:(tcp://europe:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="europe_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="europe-queue"
          uri="static:(tcp://europe:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="europe_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="china-topic"
          uri="static:(tcp://china:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="china_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="china-queue"
          uri="static:(tcp://china:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="china_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="asia-topic"
          uri="static:(tcp://asia:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="asia_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="asia-queue"
          uri="static:(tcp://asia:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="asia_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="japan-topic"
          uri="static:(tcp://japan:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          dynamicOnly="true">
        <excludedDestinations>
          <queue physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <topic physicalName="ActiveMQ.>" />
          <topic physicalName="mcollective.>" />
          <topic physicalName="japan_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

      <networkConnector
          name="japan-queue"
          uri="static:(tcp://japan:61616)"
          duplex="true"
          decreaseNetworkConsumerPriority="true"
          prefetchSize="20000"
          dynamicOnly="true"
          conduitSubscriptions="false">
        <excludedDestinations>
          <topic physicalName=">" />
        </excludedDestinations>
        <dynamicallyIncludedDestinations>
          <queue physicalName="mcollective.>" />
          <queue physicalName="japan_mcollective.>" />
        </dynamicallyIncludedDestinations>
      </networkConnector>

    </networkConnectors>

    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage percentOfJvmHeap="70" />
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="1 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="1 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <transportConnectors>
      <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"
          updateClusterClients="true"/>
      <t

--

---
You received this message because you are subscribed to the Google Groups
"mcollective-users" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
For more options, visit https://groups.google.com/d/optout.
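As a sketch of the "1 x topic and 2 x queue" suggestion from earlier in the thread: a second queue networkConnector is a copy of the existing queue connector under a distinct name, so the broker opens a second TCP connection to the same upstream and load-balances queue traffic across the two. The connector name below is made up for illustration:

```xml
<!-- Hypothetical second queue connector for the america broker; only the
     name differs from the existing "america-queue" connector. Repeat per
     remote broker. -->
<networkConnector
    name="america-queue-2"
    uri="static:(tcp://america:61616)"
    duplex="true"
    decreaseNetworkConsumerPriority="true"
    prefetchSize="20000"
    dynamicOnly="true"
    conduitSubscriptions="false">
  <excludedDestinations>
    <topic physicalName=">" />
  </excludedDestinations>
  <dynamicallyIncludedDestinations>
    <queue physicalName="mcollective.>" />
    <queue physicalName="america_mcollective.>" />
  </dynamicallyIncludedDestinations>
</networkConnector>
```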
