Sebastian Harl wrote:
However, policy does not talk about anything else (besides log files,
which should be removed as well) in that respect. *Imho*, purge is
meant to remove any trace of the package in question, which includes
generated data as well. Anyway, for now I'm downgrading the
LoadPlugin df
<Plugin df>
  MountPoint /
  MountPoint /boot
</Plugin>
LoadPlugin exec
<Plugin exec>
  NotificationExec root /etc/collectd/collectd-notify.sh
</Plugin>
LoadPlugin threshold
<Threshold>
  ...
</Threshold>
Well, you can't run Execs as root, IIRC. Here is what I have:
<Threshold>
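Right — the exec plugin refuses to spawn anything as root, so NotificationExec needs an unprivileged user. A minimal sketch (the user "nobody" is just a placeholder, not necessarily the right choice on your system):

```
LoadPlugin exec
<Plugin exec>
  # Must be a non-root user; collectd rejects root here.
  NotificationExec nobody /etc/collectd/collectd-notify.sh
</Plugin>
```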
I rolled out collectd 4.8, running in each of about 20 LXC jails. The
I/O load was crippling the server, so I reduced polling with Interval
60 in each jail.
However, I would prefer to
- poll every 10s (the default);
- batch write RRDs, such that any given RRD is only written once
every ten
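For what it's worth, the rrdtool plugin's cache options allow that kind of batching; something along these lines (the numbers are illustrative, not a tested tuning):

```
LoadPlugin rrdtool
<Plugin rrdtool>
  DataDir "/var/lib/collectd/rrd"
  # Accumulate values in memory and write each RRD at most once
  # per CacheTimeout seconds; CacheFlush is a safety net that
  # forces out any values older than this.
  CacheTimeout 600
  CacheFlush 900
</Plugin>
```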
While I was writing a custom email notifier, I set it up to run
logger(1) instead of /usr/sbin/sendmail, to avoid spamming myself
during testing. I noticed that these syslogs ended up in my mail
queue anyway, aggregated hourly by logcheck[0].
I think this is actually more convenient for me, so
I think I am running into the same problem as described here.
http://administratosphere.wordpress.com/2011/05/13/linux-kernel-sync-bug/
I see you already mentioned force-unsafe-io; here are my notes on the
issue (encountered with a dozen LXC VMs which, sharing a kernel, all
get synced together).
Trent W. Buck wrote:
I've just replaced collection 3 with collection 4.0.0+34, and on the
whole I am impressed.
In c4 the independent variable (the x axis) is time. Is that time in
UTC, local to the c4 server, the hub collectd (rrd writer plugin),
or the spoke collectd (foo reader plugin
If I have a collectd 5.0 hub receiving data from collectd 4.8
spokes, should that Just Work? I ask because while most plugins are
getting through OK (e.g. process, df, etc), there is no evidence of
the apache plugin sending any data to the hub, nor am I seeing any
errors in syslog from either
I'm using tcpconns to look at e.g. the number of open SSH and HTTP
connections.
It's really useful that it auto-detects which ports are in use, rather
than me having to list them. It means if a new service is added to a
machine (especially a customer machine which I monitor, but don't
directly
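The auto-detection described there corresponds to the tcpconns plugin's ListeningPorts option; a sketch of both styles (the explicit ports are only examples):

```
LoadPlugin tcpconns
<Plugin tcpconns>
  # Summarise every port something is listening on, automatically ...
  ListeningPorts true
  # ... or, instead, monitor an explicit list:
  LocalPort "22"
  LocalPort "80"
</Plugin>
```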
Sebastian Harl wrote:
Secondly, why is FastCGI being used? My life would be easier if
collection4 were an app server, i.e. a permanently-running daemon
that spoke HTTP to the real web server, with the latter being a
reverse proxy like varnish, nginx or apache mod_proxy.
I suppose FastCGI has been
Sebastian Harl wrote:
Sounds interesting … libevent should have a good enough userbase
to provide decent stability ;-)
PS: I remembered urxvt used it, but I couldn't see a dependency on
libevent in Debian. So I asked, and apparently the urxvt guy makes a
better drop-in replacement (libev).
Sebastian Harl wrote:
Hi,
On Wed, Jun 22, 2011 at 02:07:29PM +1000, Trent W. Buck wrote:
Suppose I have centralized monitoring (i.e. hub-and-spoke), similar to
centralized syslogging. On the central host (accepts UDP and writes
RRD), I have
Interval 60
On the nodes
Trent W. Buck wrote:
So I added these two. The first works fine, the second gives strange
results because each line should pick up $TypeInstance, which will be
error, loop, or an IP address I don't know in advance. I haven't
worked out how to add a dynamic set of DEFs, i.e. have
Currently when you write an exec notify handler, you get a stdin
stream that looks a bit like an RFC822 or Debian thing:
Plugin: foo
PluginInstance: bar
field: field value
...
message body
This means the handler needs to parse these out each time, e.g.
#!/bin/bash
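A handler might start by splitting those headers from the body, e.g. (a sketch only; the function name and variables are mine, and nothing beyond the fields shown above is assumed):

```shell
#!/bin/bash
# Read the RFC822-ish notification from stdin: "Key: value" header
# lines, then a blank line, then the free-form message body.
declare -A header   # e.g. header[Plugin]=foo, header[PluginInstance]=bar
body=""

parse_notification() {
    local key value
    while IFS=': ' read -r key value; do
        [ -z "$key" ] && break      # blank line ends the header section
        header["$key"]=$value
    done
    body=$(cat)                     # the rest of stdin is the body
}
```

collectd invokes the handler with the notification on stdin, so the script would just call parse_notification and then act on ${header[Plugin]}, ${header[PluginInstance]} and $body.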
Currently I do availability monitoring with a simple nagios2 setup. I
want to replace this with collectd, consolidating performance and
availability monitoring into a single piece of infrastructure.
If you want to make me really happy, you can say "why don't you just
do the obvious thing?" I darkly
Sebastian Harl wrote:
On Tue, Jun 21, 2011 at 12:13:06PM +1000, Trent W. Buck wrote:
I notice that /etc/collection.conf starts with CacheFile
/tmp/collection4.json, which immediately sets off my spidey
security senses[0]. Would it be better to use a different default,
such as /var/cache
Suppose I have centralized monitoring (i.e. hub-and-spoke), similar to
centralized syslogging. On the central host (accepts UDP and writes
RRD), I have
Interval 60
On the nodes (collect data and write to UDP), I have
Interval 10
Is a given RRD file updated every ten seconds, or every
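For context, a hub-and-spoke pair along those lines might look like this (the address is a placeholder; 25826 is collectd's default network port):

```
# Hub: listen for UDP and write RRDs.
Interval 60
LoadPlugin network
LoadPlugin rrdtool
<Plugin network>
  Listen "192.0.2.1" "25826"
</Plugin>

# Each spoke: collect locally and forward over UDP.
Interval 10
LoadPlugin network
<Plugin network>
  Server "192.0.2.1" "25826"
</Plugin>
```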