I have some mcollective daemons, running since August 5th, which now appear
unable to use filters. They aren't using debug logging, and restarting them
fixes whatever ails them. Does anyone have suggestions for troubleshooting
this without a daemon restart? I'd like to
On Mon, Aug 29, 2016 at 07:12:19AM -0700, rakare2...@gmail.com wrote:
> In our infra we have 40,000 servers, all managed by mCollective and
> connected with an ActiveMQ broker. We have set up meta registration
Off topic, I'd be interested to see your activemq configs if you're able to
share them. Curious how NATS handles its first DDoS or inter-datacenter
connection downtime! Will follow up if I notice anything bad.
On Thu, Oct 06, 2016 at 08:02:37PM +0100, R.I.Pienaar wrote:
>
>
> - Original Message -
> > From: "Christopher Wood" <christopher_w...@pobox.com>
Not sure if there's a consensus, but if you don't know the upper bound to your
message size you could chunk your payloads or distribute files in some other
fashion and pass pointers via mcollective.
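The chunk-and-pointer idea can be sketched as follows. This is a minimal illustration; the chunk size, manifest layout, and function names are mine, not an mcollective API:

```python
import hashlib

# Illustrative chunk size: keep each piece well under the broker's frame limit.
CHUNK_SIZE = 64 * 1024

def chunk_payload(data, chunk_size=CHUNK_SIZE):
    """Split a large payload into chunks plus a small manifest of digests.

    The small manifest is what you would pass around via mcollective; the
    chunks themselves would live on shared storage (HTTP, NFS, ...) that
    the nodes can fetch from and verify against the digests.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    manifest = {
        "total_size": len(data),
        "sha256": [hashlib.sha256(c).hexdigest() for c in chunks],
    }
    return manifest, chunks

manifest, chunks = chunk_payload(b"x" * 200_000)
print(len(chunks))             # 4 pieces of at most 64 KiB
print(manifest["total_size"])  # 200000
```

The point is that the message on the wire stays small and fixed-size no matter how big the payload grows.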
People are better at describing these things here:
https://github.com/nats-io/go-nats/issues/63
In your place I would start with the following:
1) Gather both server.cfg files and diff them; see if there's anything obvious.
2) Turn up logging and see what it says.
Mcollective:
logfile = /var/tmp/test1.log
logger_type = file
loglevel = debug
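Step 1 can be as simple as a unified diff of the two configs. A toy sketch with hypothetical file contents (Python's difflib stands in for running plain `diff` on the collected files):

```python
import difflib

# Hypothetical server.cfg contents from a working node and a misbehaving one.
good = """logger_type = file
loglevel = debug
direct_addressing = 1
""".splitlines()

bad = """logger_type = file
loglevel = info
direct_addressing = 0
""".splitlines()

# unified_diff surfaces only the settings that differ between the two files.
for line in difflib.unified_diff(good, bad, "good/server.cfg", "bad/server.cfg",
                                 lineterm=""):
    print(line)
```

Anything that shows up as a `-`/`+` pair here is a candidate for "anything obvious".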
From the same email, removing direct addressing (from the client) helped get
some better responses.
direct_addressing = 0
direct_addressing_threshold = 0
(On NATS I have direct addressing enabled again.)
On Wed, May 17, 2017 at 11:14:36AM -0400, Christopher Wood wrote:
> (Top-posting for brain dump form.)
(Top-posting for brain dump form.)
I have been there, albeit with half the number of running mcollective daemons
and 2 leaf nodes plus a hub. (You can also read my past list posts here and on
activemq-user for a flavour of how I ended up here. I have to add a disclaimer
that my skill is at a
Karthi, I recommend increasing the ActiveMQ log level to debug, turning
everything back on, and seeing what happens when you try the same query.
I found that activemq clustering would just upchuck and die when I sent
mcollective queries that were too large. 800 messages doesn't seem very large
mco find
versus
mco find -S '!fqdn=web1.example.com'
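To make the difference concrete, here is a toy matcher for what that `-S '!fqdn=...'` filter expresses. This is not mcollective's filter parser, just an illustration of a negated fact match:

```python
def matches(facts, fact, value, negate=False):
    """Toy fact filter: does facts[fact] equal value, optionally negated?"""
    result = facts.get(fact) == value
    return not result if negate else result

# Two hypothetical nodes and their facts.
nodes = {
    "web1.example.com": {"fqdn": "web1.example.com"},
    "db1.example.com": {"fqdn": "db1.example.com"},
}

# A bare `mco find` would return every node; the negated filter
# `-S '!fqdn=web1.example.com'` keeps everything except web1.
selected = [n for n, f in nodes.items()
            if matches(f, "fqdn", "web1.example.com", negate=True)]
print(selected)  # ['db1.example.com']
```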
The filters page is definitely worth reading over closely, I didn't even
realize this myself for... an embarrassingly long time.
https://puppet.com/docs/mcollective/current/reference/ui/filters.html
On Fri, Nov 10, 2017 at 12:57:51AM
The -I parameter is the identity configured in server.cfg over here. Do you
have the correct identity configured for your mcollective daemon?
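A quick sanity check: pull the identity out of server.cfg and compare it with what you pass to -I. A simplified sketch (real server.cfg can also contain comments and plugin settings, which this ignores):

```python
def read_identity(cfg_text):
    """Return the identity setting from server.cfg-style 'key = value' text."""
    for line in cfg_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "identity":
            return value.strip()
    return None  # not set; mcollective then falls back to the node's hostname

cfg = "logger_type = file\nidentity = web1.example.com\n"
print(read_identity(cfg))  # web1.example.com
```

If the value here doesn't match the -I argument, the daemon will never answer that query.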
https://puppet.com/docs/mcollective/current/reference/ui/filters.html