[Ganglia-general] libssl and libcrypto in SuSE openssl rpms

2002-08-22 Thread HPC Admin Mail Acct.
There have been questions about this before on the mailing list, but I
haven't seen any responses...

The SuSE rpms for openssl provide:
libssl.so.0
libssl.so.0.9.6
libcrypto.so.0
libcrypto.so.0.9.6

but, authd needs libssl.so.2 and libcrypto.so.2.  Do I need to install
openssl from the latest source, or are these libraries the same, just
named differently?

Thanks,

--
Michael Stone
Linux / High Performance Computing Administrator

The University of Texas at Austin
Mechanical Engineering Department
ETC 3.130 ph: 471.5951
agentsmith[at]mail.utexas.edu






[Ganglia-general] Ganglia Installation

2002-08-22 Thread Evaldo Bezerra da Costa
Hello Matt,

I am installing Ganglia on a cluster at my company for testing, and I have
some questions about the installation.

My configuration:

- OS: Red Hat Linux 7.2

- Master: (eth0) 172.16.10.103 - 255.255.255.0   (eth1) 10.4.8.1 -
255.255.255.0

- Node01: (eth1) 10.4.8.1 - 255.255.255.0

1 - When I configure Apache, should I use eth0 or eth1 on the master?

2 - I have only one node; is that a problem?

3 - When I open Ganglia in the browser, it doesn't show the pictures
(.gif) for the hosts. Why?

4 - Is my gmond.conf correct?

[EMAIL PROTECTED] etc]# more /etc/gmond.conf

# This is the configuration file for the Ganglia Monitor Daemon (gmond)
# Documentation can be found at http://ganglia.sourceforge.net/docs/
#
# To change a value from its default, simply uncomment the line
# and alter the value
#
#
# The name of the cluster this node is a part of
# default: unspecified
name  CIMCORP - Lab. Cluster
#
# The multicast channel for gmond to send/receive data on.
# NOTE: Must be in the multicast range from 224.0.0.0-239.255.255.255
# default: 239.2.11.71
mcast_channel 239.2.11.71
#
# The multicast port for gmond to send/receive data on
# default: 8649
mcast_port8649
#
# The multicast interface for gmond to send/receive data on
# default: the kernel decides based on routing configuration
mcast_if  eth1
#
# The multicast Time-To-Live (TTL) for outgoing messages
# default: 1
mcast_ttl  1
#
# The number of threads listening to multicast traffic
# default: 2
mcast_threads 2
#
# Which port should gmond listen for XML requests on
# default: 8649
xml_port 8649
#
# The number of threads answering XML requests
# default: 2
xml_threads   2
#
# Hosts ASIDE from 127.0.0.1/localhost and those multicasting
# on the same multicast channel which you will share your XML
# data with.  Multiple hosts are allowed on multiple lines.
# default: none
trusted_hosts nome
#
# The number of nodes in your cluster.  This value is used in the
# creation of the cluster hash.
# default: 1024
num_nodes  1
#
# The number of custom metrics this gmond will be storing.  This
# value is used in the creation of the host custom_metrics hash.
# default: 16
num_custom_metrics 16
#
# Run gmond in mute mode.  Gmond will only listen to the multicast
# channel but will not send any data on the channel.
# default: off
mute off
#
# Run gmond in deaf mode.  Gmond will only send data on the multicast
# channel but will not listen/store any data from the channel.
# default: off
deaf off
#
# Run gmond in debug mode.  Gmond will not background.  Debug messages
# are sent to stdout.  Value from 0-100.  The higher the number the more
# detailed debugging information will be sent.
# default: 0
debug_level 0
#
# If you don't want gmond to setuid, set this to on
# default: off
no_setuid  off
#
# Which user should gmond run as?
# default: nobody
setuid nobody
#
# If you do not want this host to appear in the gexec host list, set
# this value to on
# default: off
no_gexec   off
#
# If you want any host which connects to the gmond XML to receive
# data, then set this value to on
# default: off
all_trusted off



Thank you very much;

--
Evaldo Bezerra da Costa
Support Analyst - Unix Admin.
CIMCORP Com. Int. e Info. S.A.
Rua Lauro Muller, 116 / Sala 906
Botafogo - 22290-160 - RJ - Brasil
Tel./Fax: +55 21 2543-1206
Celular: +55 21 7893-4310
URL: www.cimcorp.com.br




Re: [Ganglia-general] high load with gmetad

2002-08-22 Thread matt massie
mark-

i've seen this behavior on the machine running the ganglia demo page but 
it's just a p2 with 128 MB of memory (soon to be upgraded).

i'm rewriting gmetad in C right now and will be incorporating it into the 
monitoring-core distribution soon.  the biggest bottleneck right now with 
gmetad is disk I/O.  keep in mind that load on linux counts not only 
running processes but also processes in I/O wait.  gmetad 
is writing to about 25 files per host every 15 seconds or so.  the next 
generation of gmetad will not be nearly as i/o intensive.

as a trick to make gmetad work better for you.. create a ramdisk to write 
the round-robin databases to.

here are the steps (i'm assuming you installed gmetad in the default 
location)

1. find out how much space your round-robin databases are taking right now
   by doing the following
  a. # cd /usr/local/gmetad/rrds
  b. # du -sk .
   80384   .
   it's important to note that the size of the round-robin databases 
   remains constant over time; they don't grow.  of course, if you increase
   the number of databases (by monitoring more hosts or metrics) then this 
   number will increase.  in this example (taken from the ganglia demo 
   machine), we are monitoring 117 hosts for a total of over 3000 rrd 
   files in only 78 MB of disk space.

2. create a ramdisk image file at least as big as the space you need (i'd
   double it... 80384*2= 160768)

   dd if=/dev/zero of=/root/rrd-ramdisk.img bs=1k count=160768

3. mke2fs -F -vm0 /root/rrd-ramdisk.img

4. /etc/rc.d/init.d/gmetad stop

5. mv /usr/local/gmetad/rrds /usr/local/gmetad/rrds.orig

6. mkdir /usr/local/gmetad/rrds

7. mount -o loop /root/rrd-ramdisk.img /usr/local/gmetad/rrds

8. copy your round-robin databases to the new RAM disk...
   (cd /usr/local/gmetad/rrds.orig; tar -cf - .)  | \
   (cd /usr/local/gmetad/rrds; tar -xvf -)

9. /etc/rc.d/init.d/gmetad start

if you want to see a site which uses this RAM disk trick (and invented
this trick too) take a look at http://meta.rocksclusters.org/.  they are 
monitoring over 450 hosts using this method quite comfortably.

one important note... since the data is being written to RAM and not the 
disk, it will of course be lost on reboot.  if you want to keep the 
round-robin databases long-term, you will need to set up a cron job which 
saves the data from the RAM disk to the physical disk, and a boot script 
which copies it back after reboot.
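a sketch of that cron job might look like this; the rrds.backup path and the
script name are assumptions, not something from the original setup:

```shell
#!/bin/sh
# Sketch: copy the RAM-disk databases to physical disk.  Run it from cron
# every few minutes; run the reverse copy at boot, before starting gmetad.
# The rrds.backup path is an assumption; adjust to taste.
SRC=/usr/local/gmetad/rrds
DST=/usr/local/gmetad/rrds.backup
mkdir -p "$DST"
cp -a "$SRC/." "$DST/"
```

driven from a crontab line such as `*/10 * * * * root /usr/local/sbin/save-rrds.sh`
(the script path is likewise an assumption).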

i hope this helps.  i'm going to focus much more attention on gmetad in 
the next few days and i'm sure you'll find the C version of gmetad much 
easier to install and much more efficient to run.

good luck!
-matt

Yesterday, markp wrote forth saying...

 Is anyone experiencing a high load with gmetad?  I've run this daemon on
 a high end intel 933mhz dual proc machine with 1gb of memory and RH
 7.2.   Loads get and stay as high as 3.  I get worse results on single
 processor machines, loads as high as 6.7.  Kill the daemon and it drops
 back to normal.  Is it supposed to be such a resource hog?  I ran the
 old web-frontend and didn't have any problems.
 
 
 
 
 ---
 This sf.net email is sponsored by: OSDN - Tired of that same old
 cell phone?  Get a new here for FREE!
 https://www.inphonic.com/r.asp?r=sourceforge1refcode1=vs3390
 ___
 Ganglia-general mailing list
 Ganglia-general@lists.sourceforge.net
 https://lists.sourceforge.net/lists/listinfo/ganglia-general
 




Re: [Ganglia-general] high load with gmetad

2002-08-22 Thread Steven Wagner
Remember that RRD files are of a fixed size.  In other words, they should 
never grow beyond their original size when created.  That's why they call 
'em round-robin databases. :)


So the only reason new RRDs would be created is if new metrics were added 
for existing hosts or if new hosts were added to the cluster.


Theoretically you could write a startup script that would precisely 
allocate the size of the ramdisk based on the size of the gmetad/rrds 
directory (size plus maybe 256k for temp files?).  That'd be nice and fun...
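that startup-script idea might be sketched as follows, using the paths from
matt's example (the 256K headroom figure is just the guess above):

```shell
#!/bin/sh
# Sketch: size the ramdisk image from the current rrds directory plus
# 256K of headroom, then create and format it.  Paths match the example
# earlier in the thread; adjust for your install.
KB=$(du -sk /usr/local/gmetad/rrds | awk '{print $1 + 256}')
dd if=/dev/zero of=/root/rrd-ramdisk.img bs=1k count="$KB"
mke2fs -F -vm0 /root/rrd-ramdisk.img
```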


matt massie wrote:

[quoted message trimmed]













Re: [Ganglia-general] libssl and libcrypto in SuSE openssl rpms

2002-08-22 Thread matt massie
i'm sorry but i don't have access to any SuSE boxes to work this problem 
out on.

# rpm -qa openssl
openssl-0.9.6b-8
# rpm -ql openssl
/lib/libcrypto.so.0.9.6b
/lib/libssl.so.0.9.6b

here's an idea to try.  we've built the gexec tarball so that you can 
easily make an RPM from it.  just type

# rpm -ta gexec-0.3.5.tar.gz

and it will create new RPMs for you.  you can also make RPMs from the 
SRPMs, of course.

are you having problems compiling gexec from source on SuSE?
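one more thing worth checking: see which sonames authd's loader actually
wants, and either rebuild against the installed openssl or, at your own
risk, satisfy them with compatibility symlinks.  this is only a sketch; the
authd path and library directory are assumptions, and the symlinks only
work if the two openssl versions are ABI-compatible:

```shell
# See which libraries authd can't resolve (path to authd is an assumption)
ldd /usr/sbin/authd | grep 'not found'

# At your own risk: point the missing sonames at the SuSE libraries,
# then refresh the loader cache.
ln -s /usr/lib/libssl.so.0.9.6    /usr/lib/libssl.so.2
ln -s /usr/lib/libcrypto.so.0.9.6 /usr/lib/libcrypto.so.2
ldconfig
```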

-matt

Today, HPC Admin Mail Acct. wrote forth saying...

 [quoted message trimmed]