Hi Brian,
If I understand correctly, the issue is that templates are
sent out very frequently, and hence the daemon keeps writing to disk
in 'real time'. You mention a configurable interval but, recalling the
original purpose of the feature (if restarting a collector, be able to
The culprit was actually the template file. Writing it appears to block,
and it's really slow. When I removed the configuration option, one process
could do what I could not accomplish with 80 processes, each using its own
template file.
Any consideration on a different implementation?
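For reference, the option under discussion is, if I recall the CONFIG-KEYS docs correctly, the `nfacctd_templates_file` directive. A minimal sketch of a collector config using it (paths and values are placeholders, not a tested setup):

```
! Hypothetical nfacctd config illustrating the template-file option.
! Directive name per pmacct CONFIG-KEYS; values are illustrative only.
daemonize: true
nfacctd_port: 2055
! Persists received NetFlow v9 / IPFIX templates across restarts;
! removing this line avoids the blocking disk writes described above.
nfacctd_templates_file: /var/lib/pmacct/templates.dat
plugins: kafka
kafka_topic: pmacct.flows
```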
Hi Brian,
No, it's not currently possible to send exporter system time / uptime to
Kafka (also because doing it per-flow would waste a lot of space).
Also, there are minimal protections, yes, for example for the case of
flows stopping before they start. But not for time format errors, i.e.
Hi Brian,
Thanks very much for the nginx config, definitely something to add to
the docs as a possible option. QN reads 'Queries Number' (inherited from
the SQL plugins, hence the 'queries' wording); the first number is how
many are sent to the backend, the second is how many should be sent as
part of
Thanks for the response Paolo. I am using nginx to stream load balance (see
config below).
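Since the quoted config was trimmed from the digest, here is a generic sketch of the kind of nginx stream/UDP load-balancing setup being described (addresses and ports are illustrative placeholders, not Brian's actual config):

```
# Sketch: nginx UDP stream load balancer in front of several nfacctd
# instances. Addresses/ports are placeholders.
stream {
    upstream nfacctd_pool {
        server 127.0.0.1:2056;
        server 127.0.0.1:2057;
        server 127.0.0.1:2058;
        # to pin each exporter to one collector, add here:
        # hash $remote_addr consistent;
    }

    server {
        listen 2055 udp;
        proxy_pass nfacctd_pool;
        # NetFlow export is one-way; don't wait for replies
        proxy_responses 0;
    }
}
```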
Another quick question on the Kafka plugin. What Does the QN portion of the
purging cache end line indicate/mean?
2019-02-25T03:05:04Z INFO ( kafka2/kafka ): *** Purging cache - END (PID:
387033, QN:
Hi Brian,
You are most probably looking for this:
https://github.com/pmacct/pmacct/blob/master/QUICKSTART#L2644-L2659
Should that not work, ie. too many input flows for the available
resources, you have a couple load-balancing strategies possible:
one is to configure a replicator (tee
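For the replicator route, a minimal sketch using pmacct's tee plugin (file names and addresses are placeholders; see the QUICKSTART for the authoritative example):

```
! Sketch: nfacctd acting as a replicator, fanning incoming NetFlow
! out to a set of downstream receivers (illustrative, not tested).
nfacctd_port: 2055
plugins: tee[a]
tee_receivers[a]: /etc/pmacct/tee_receivers.lst
tee_transparent: true
```

The receivers list then contains, if I remember the format correctly, one entry per line along the lines of `id=1 ip=10.0.0.1:2056`.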
Is there a way to adjust the UDP buffer receive size ?
Are there any other indications of nfacctd not keeping up?
cat /proc/net/udp |egrep drops\|0835
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
52366: :0835
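On the UDP buffer question: the per-socket receive buffer can typically be raised via pmacct's `nfacctd_pipe_size` directive, bounded by the kernel's `net.core.rmem_max`. A quick sketch for inspecting both, plus the per-socket drop counters like the ones grepped above (assumes a Linux /proc layout; the sysctl value shown in the comment is only an example):

```shell
# Current kernel cap on UDP receive buffers, in bytes
cat /proc/sys/net/core/rmem_max

# To raise it (requires root; pick a size matching your burst rate), e.g.:
#   sysctl -w net.core.rmem_max=67108864
# then set nfacctd_pipe_size accordingly in the nfacctd config.

# Sum the 'drops' counter (last column) across all UDP sockets;
# a steadily growing total suggests the collector is not keeping up.
awk 'NR > 1 { drops += $NF } END { print drops + 0 }' /proc/net/udp
```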