Here is my gcc -v output from a stock redhat 8.0 installation,

--
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs
Configured with:
../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info
 --enable-shared --enable-threads=posix --disable-checking
 --host=i386-redhat-linux --with-system-zlib --enable-__cxa_atexit
Thread model: posix
gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7)
--

I compiled gcc 3.2.1 just now using the above stock compiler, and now have
this for gcc -v:
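(For anyone wanting to reproduce this: the "Configured with: ../configure" line above means a plain out-of-tree build with no options, so the default prefix of /usr/local applies. A sketch of the standard GNU build steps, assuming the gcc-3.2.1 tarball from a GNU mirror; not necessarily character-for-character what I typed:)

```shell
# GCC wants to be built outside its source tree, so use a
# separate build directory. Default --prefix is /usr/local,
# which matches the specs path shown below.
tar xjf gcc-3.2.1.tar.bz2
mkdir objdir
cd objdir
../gcc-3.2.1/configure
make bootstrap
make install
```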

--
Reading specs from /usr/local/lib/gcc-lib/i686-pc-linux-gnu/3.2.1/specs
Configured with: ../configure
Thread model: posix
gcc version 3.2.1
--

Now when compiling ganglia-monitor-core-2.5.1 and running gmond I get a
proper executable, with good /proc/net/dev results, i.e., pkts_in,
bytes_in, pkts_out, bytes_out!

The ~85k rpm for gmond 2.5.1 at sourceforge also worked fine, by the way,
on stock redhat 8.0. But I couldn't figure out how to get a working
compile with the gcc provided; 3.2.1 works great, though.

That's my latest update; my previous messages about this problem were just
failed attempts at finding and fixing the source of the problem.

Apologies for all the spam to the list about all these theories, but one
last one here: is it possible, with a cisco catalyst switch that has
multiple vlans on it, that the default gmond mcast_channel will be sent
out across all the ports? I've had much better success setting the
individual clusters to a specific mcast_channel, on top of all the other
changes I've already described, so I'm wondering if, with the default
configuration, some gmonds will 'leak' output over to other vlans.

In particular, it became a nuisance to have to find and shut down the
gmonds that still had state for certain servers, then shut down gmetad
temporarily to wipe out the respective rrds directories, and then bring
things back up. In one case I had multiple gmond ip entries on a
gmetad.conf line, and the ones that were down or didn't have trusted
entries came up as broken. This was still during the time before I had
split off some of the cluster groups onto their own mcast_channel; again,
I have much better success whenever I do that.
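(To make the per-cluster split concrete, here's a rough sketch of the
kind of settings I mean. The cluster name, hosts, and channel/port values
are just examples, not the ones from my actual setup:)

```
# gmond.conf for one cluster -- give it its own multicast channel
# instead of leaving every cluster on the gmond default
name           "webservers"
mcast_channel  239.2.11.72
mcast_port     8649

# gmetad.conf -- list more than one gmond on the data_source line
# so gmetad can fail over if the first host is down
data_source "webservers" host1:8649 host2:8649
```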
