Sergey,
It's usually best to compile mod-sflow from source so that it matches the
particular version of Apache you are running. Since you'll be building from
source anyway, you also have the option of editing mod-sflow.c first and
changing the setting of SFWB_DEFAULT_CONFIGFILE (on line 211).
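For example, the define looks something like this (a sketch only; the exact line and the default path in your copy of mod-sflow.c may differ):

  /* mod-sflow.c: default location of the sFlow config file
     (illustrative; check your source for the actual default) */
  #define SFWB_DEFAULT_CONFIGFILE "/etc/hsflowd.auto"

Change that path before building if you want the module to read its settings from somewhere else.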
Simon,
I don't know if this is still an issue for you, but my understanding is
that the cluster name comes from the gmond instance that you send the sFlow
to. So if you have 1000 hosts running hsflowd and you want to divide
them into 10 clusters, then you would run 10 instances of gmond, one per cluster.
FYI, Ganglia already understands the output from this alternative JMX
monitoring solution:
https://code.google.com/p/jmx-sflow-agent/
I think it has similar properties to embedded-jmxtrans. It's much better to
have the JVM push the stats every 20 seconds or so than to have to poll for
them remotely.
Mark,
It does seem like the issue is with the sFlow from nginx-sflow-module. I
wrote that module so I can probably help:
(1) just one instance of nginx on that server, or two?
(2) what version of nginx?
(3) single-threaded or multi-threaded nginx?
(4) running on a Linux OS?
(5) please upgrade to
Ron,
You might try downloading the latest source code for hsflowd, and compiling
with LIBVIRT=yes VRTDSKPATH=yes
In other words:
svn checkout http://svn.code.sf.net/p/host-sflow/code/trunk host-sflow-code
cd host-sflow-code
make LIBVIRT=yes VRTDSKPATH=yes
This turns on a different way of
The sFlow CPU metrics are processed here:
https://github.com/ganglia/monitor-core/blob/master/gmond/sflow.c#L334
Let me know if you find a problem.
Regards,
Neil
On Aug 10, 2012, at 2:00 AM, crayon z wrote:
Hi, all:
I use ganglia to parse metrics from Host sFlow. The cpu metrics in
in gmond.c:process_tc_accept_channel() could those goto statements close the
socket and return without relinquishing the mutex?
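To illustrate the concern, the pattern would need to look something like this (a minimal sketch, not the actual gmond code; the mutex name and the label are assumptions):

  /* sketch only: every exit path, including the goto error paths that
     close the socket, must release the mutex taken at the top */
  #include <apr_thread_mutex.h>
  #include <apr_network_io.h>

  extern apr_thread_mutex_t *hosts_mutex;   /* assumed name */

  static void accept_channel_sketch(apr_socket_t *client)
  {
    apr_thread_mutex_lock(hosts_mutex);

    if (/* read or parse error */ 0)
      goto cleanup;                          /* jump to a common exit instead of returning */

    /* ... normal processing of the TCP accept channel ... */

  cleanup:
    apr_socket_close(client);
    apr_thread_mutex_unlock(hosts_mutex);    /* released on every path */
  }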
Neil
On Sep 19, 2012, at 8:45 AM, Nicholas Satterly wrote:
Hi Peter,
Thanks for the feedback.
I've added a thread mutex to the hosts hash table as you
You could try adding --disable-sflow as another configure option. (Or were
you planning to use sFlow agents such as hsflowd?).
Neil
On Jul 9, 2012, at 3:50 AM, Nigel LEACH wrote:
Ganglia 3.4.0
Windows 2008 R2 Enterprise
Cygwin 1.5.25
IBM iDataPlex dx360 with Tesla M2070
Confuse 2.7
Hello All,
There is now a Solaris port of hsflowd:
http://host-sflow.sourceforge.net
Binary packages for sparc and x86 can be downloaded, but sources are only in
the trunk:
mkdir host-sflow-trunk
svn co https://host-sflow.svn.sourceforge.net/svnroot/host-sflow/trunk host-sflow-trunk
I'm pretty sure this will not work. You need separate ports.
Neil Mckee
On Mar 28, 2012, at 2:45 PM, Ozzie Sabina o...@sabina.org wrote:
Can this be shared? A quick googling failed me here.
Can I configure a single one of these and accept messages from both gmetric
and sflow clients on that?
Thanks,
Vladimir
On Tue, 25 Oct 2011, Neil Mckee wrote:
Vladimir,
Just an FYI since it seems to be relevant to your talk:
I am preparing a patch for Ganglia that will add support for the sFlow-HTTP
feed, as exported by mod-sflow, nginx-sflow-module, tomcat-sflow-valve
Thanks for bringing this up. I checked a change into the hsflowd trunk that
looks for these interfaces and excludes them from the counting. It uses the
SIOCGIFVLAN ioctl call -- although it seems that your filter on the device name
might work just fine.
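Roughly, the check can be done like this (a sketch, assuming an AF_INET socket fd is available; this is not the exact hsflowd code):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/sockios.h>   /* SIOCGIFVLAN */
  #include <linux/if_vlan.h>   /* struct vlan_ioctl_args, GET_VLAN_REALDEV_NAME_CMD */

  /* returns non-zero if ifname is a VLAN sub-interface stacked on a real device */
  static int is_vlan_interface(int sock_fd, const char *ifname)
  {
    struct vlan_ioctl_args vlargs;
    memset(&vlargs, 0, sizeof(vlargs));
    vlargs.cmd = GET_VLAN_REALDEV_NAME_CMD;
    strncpy(vlargs.device1, ifname, sizeof(vlargs.device1) - 1);
    /* the ioctl succeeds only for VLAN devices, so success means "skip it" */
    return ioctl(sock_fd, SIOCGIFVLAN, &vlargs) == 0;
  }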
http://www.mediafire.com/?g4jac7dm3mmb662
2011/8/29 Neil Mckee neil.mckee...@gmail.com
Sorry, the failure of virStorageVolLookupByPath() was preventing
virDomainBlockStats() from being attempted.
I checked in a fix for this, and also code to try the newer
virDomainGetBlockInfo() call as a fallback should virStorageVolLookupByPath()
fail. This call only came in with libvirt version
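As a rough sketch of that fallback (the libvirt calls below are the public API, but the control flow is only an illustration, not the hsflowd source):

  #include <libvirt/libvirt.h>

  static void sample_disk(virConnectPtr conn, virDomainPtr dom, const char *path)
  {
    unsigned long long capacity = 0, allocation = 0;

    virStorageVolPtr vol = virStorageVolLookupByPath(conn, path);
    if (vol) {
      virStorageVolInfo volInfo;
      if (virStorageVolGetInfo(vol, &volInfo) == 0) {
        capacity = volInfo.capacity;
        allocation = volInfo.allocation;
      }
      virStorageVolFree(vol);
    }
    else {
      /* fallback for disk images that are not in any storage pool
         (virDomainGetBlockInfo needs a newer libvirt) */
      virDomainBlockInfo blkInfo;
      if (virDomainGetBlockInfo(dom, path, &blkInfo, 0) == 0) {
        capacity = blkInfo.capacity;
        allocation = blkInfo.allocation;
      }
    }

    /* the I/O counters are attempted either way */
    virDomainBlockStatsStruct stats;
    if (virDomainBlockStats(dom, path, &stats, sizeof(stats)) == 0) {
      /* stats.rd_req, stats.rd_bytes, stats.wr_req, stats.wr_bytes ... */
    }

    (void)capacity;
    (void)allocation;
  }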
On Aug 18, 2011, at 1:35 AM, Emanuele Verga wrote:
Ok, I tried linking one of the disk files to the default storage pool folder
and it actually detected the linked volume in libvirt:
After issuing a virsh pool-refresh default the disk was correctly detected
and reported as a volume by
as down, while the reporting for the physical host is working perfectly, as before.
In case it helps: our OpenStack deployment uses KVM to virtualize the hosts.
Could that be related?
Thanks again,
Emanuele
2011/8/16 Neil Mckee neil.mckee...@gmail.com
Hello,
On an OpenStack node you may be able to use libxenstore instead of libvirt.
You'll need to recompile hsflowd to try this. Looking at
trunk/src/Linux/Makefile it appears to look for libvirt first, but you can
override that by compiling hsflowd like this:
make clean
make LIBVIRT=no
500 nodes sending sFlow-HOST data is probably only about 25 packets/sec, so
the issue here is unlikely to be a performance bottleneck in terms of CPU,
network bandwidth, UDP buffers etc.
Right now the most likely explanation seems to be some race-condition over how
long before gmond
recorded. If that happened and these large deltas were enough
to trip a sanity-check somewhere further on (perhaps in gmetad), then that
could explain how the gaps appeared in the chart for the whole cluster.
Neil
On Jul 22, 2011, at 1:06 PM, Neil Mckee wrote:
500 nodes sending sFlow-HOST data
I checked the sFlow feed, and it looks like the sanity checks for 32-bit
rollover and impossible-counter-delta are already present in the hsflowd code
(host-sflow.sourceforge.net src/Linux/readNioCounters.c). At least for the
Linux and FreeBSD ports anyway. We should add those checks to the
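The checks amount to something like this (an illustration of the idea, not the code in readNioCounters.c; the threshold is an assumed value):

  #include <stdint.h>

  #define MAX_PLAUSIBLE_DELTA 0x7FFFFFFFu  /* assumed cutoff for an "impossible" jump */

  static uint32_t counter_delta_32(uint32_t previous, uint32_t current)
  {
    /* unsigned 32-bit arithmetic absorbs a single rollover:
       previous=0xFFFFFFF0, current=0x10 still gives a delta of 0x20 */
    uint32_t delta = current - previous;

    /* treat an absurdly large jump as a counter reset and report no change */
    if (delta > MAX_PLAUSIBLE_DELTA)
      return 0;

    return delta;
  }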
Hello all,
Exhibiting at the Supercomputing 2010 show in New Orleans? Setting up a
demo cluster?
We are running a monitoring server in the SCinet NOC which is configured to
receive sFlow from the show network. Selected pages will be shown on big
screens all around the show floor and linked