Forwarding back to the ganglia-general mailing-list.

Regards,

Bernard

On Sun, Dec 26, 2010 at 8:24 PM, bhargav90 <[email protected]> wrote:
>
> Hi Bernard,
> The following are the gmond and gmetad configurations:
> GMOND
> /* This configuration is as close to 2.5.x default behavior as possible
>    The values closely match ./gmond/metric.h definitions in 2.5.x */
> globals {
>   daemonize = yes
>   setuid = yes
>   user = nobody
>   debug_level = 0
>   max_udp_msg_len = 1472
>   mute = no
>   deaf = no
>   allow_extra_data = yes
>   host_dmax = 0 /*secs */
>   cleanup_threshold = 300 /*secs */
>   gexec = yes
>   send_metadata_interval = 60 /*secs */
> }
> /* If a cluster attribute is specified, then all gmond hosts are wrapped inside
>  * of a <CLUSTER> tag.  If you do not specify a cluster tag, then all <HOSTS> will
>  * NOT be wrapped inside of a <CLUSTER> tag. */
> cluster {
>   name = "aix01"
>   owner = "unspecified"
>   latlong = "unspecified"
>   url = "http://aixmaster01/ganglia"
> }
> /* The host section describes attributes of the host, like the location */
> host {
>   location = "mumbai"
> }
> /* Feel free to specify as many udp_send_channels as you like.  Gmond
>    used to only support having a single channel */
> udp_send_channel {
>   bind_hostname = yes # Highly recommended, soon to be default.
>                        # This option tells gmond to use a source address
>                        # that resolves to the machine's hostname.  Without
>                        # this, the metrics may appear to come from any
>                        # interface and the DNS names associated with
>                        # those IPs will be used to create the RRDs.
>   host = aixmaster01
>   port = 8649
>   ttl = 1
> }
>
> /* You can specify as many udp_recv_channels as you like as well. */
> udp_recv_channel {
>   bind = aix01
>   port = 8649
> }
> /* You can specify as many tcp_accept_channels as you like to share
>    an xml description of the state of the cluster */
> tcp_accept_channel {
>   port = 8649
> }
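> /* For reference: a quick way to confirm this channel is answering is to
>    connect to the port and read the XML dump it returns, e.g.
>        telnet localhost 8649
>    (port as configured above; gmond prints the cluster state as XML and
>    then closes the connection). */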
> /* Each metrics module that is referenced by gmond must be specified and
>    loaded. If the module has been statically linked with gmond, it does not
>    require a load path. However all dynamically loadable modules must include
>    a load path. */
> modules {
>   module {
>     name = "core_metrics"
>   }
>   module {
>     name = "cpu_module"
>     path = "modcpu.so"
>   }
>   module {
>     name = "disk_module"
>     path = "moddisk.so"
>   }
>   module {
>     name = "load_module"
>     path = "modload.so"
>   }
>   module {
>     name = "mem_module"
>     path = "modmem.so"
>   }
>   module {
>     name = "net_module"
>     path = "modnet.so"
>   }
>   module {
>     name = "proc_module"
>     path = "modproc.so"
>   }
> ........
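> (A note on the relative module paths above: gmond has to be able to find the
> .so files in its module search directory. A minimal sketch, assuming the
> modules were installed under /usr/lib/ganglia; the path is only an example
> and must match the actual install location on this AIX host:
>
> globals {
>   module_dir = "/usr/lib/ganglia"  /* assumed path to modcpu.so, moddisk.so, etc. */
> }
>
> Alternatively, each module's path can be given as an absolute path.)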
> GMETAD
> # This is an example of a Ganglia Meta Daemon configuration file
> #                http://ganglia.sourceforge.net/
> #
> # $Id: gmetad.conf.in 2014 2009-08-10 10:44:09Z d_pocock $
> #
> #-------------------------------------------------------------------------------
> # Setting the debug_level to 1 will keep daemon in the foreground and
> # show only error messages. Setting this value higher than 1 will make
> # gmetad output debugging information and stay in the foreground.
> # default: 0
> # debug_level 10
> #
> #-------------------------------------------------------------------------------
> # What to monitor. The most important section of this file.
> #
> # The data_source tag specifies either a cluster or a grid to
> # monitor. If we detect the source is a cluster, we will maintain a complete
> # set of RRD databases for it, which can be used to create historical
> # graphs of the metrics. If the source is a grid (it comes from another gmetad),
> # we will only maintain summary RRDs for it.
> #
> # Format:
> # data_source "my cluster" [polling interval] address1:port address2:port ...
> #
> # The keyword 'data_source' must immediately be followed by a unique
> # string which identifies the source, then an optional polling interval in
> # seconds. The source will be polled at this interval on average.
> # If the polling interval is omitted, 15sec is assumed.
> #
> # A list of machines which service the data source follows, in the
> # format ip:port, or name:port. If a port is not specified then 8649
> # (the default gmond port) is assumed.
> # default: There is no default value
> #
> # data_source "my cluster" 10 localhost  my.machine.edu:8649  1.2.3.5:8655
> # data_source "my grid" 50 1.3.4.7:8655 grid.org:8651 grid-backup.org:8651
> # data_source "another source" 1.3.4.7:8655  1.3.4.8
> data_source "MUMBAI Grid" localhost
> #
> # Round-Robin Archives
> # You can specify custom Round-Robin archives here (defaults are listed below)
> #
> # RRAs "RRA:AVERAGE:0.5:1:244" "RRA:AVERAGE:0.5:24:244" "RRA:AVERAGE:0.5:168:244" "RRA:AVERAGE:0.5:672:244" \
> #      "RRA:AVERAGE:0.5:5760:374"
> #
> #
> #-------------------------------------------------------------------------------
> # Scalability mode. If on, we summarize over downstream grids, and respect
> # authority tags. If off, we take on 2.5.0-era behavior: we do not wrap our output
> # in <GRID></GRID> tags, we ignore all <GRID> tags we see, and always assume
> # we are the "authority" on data source feeds. This approach does not scale to
> # large groups of clusters, but is provided for backwards compatibility.
> # default: on
> # scalable off
> #
> #-------------------------------------------------------------------------------
> # The name of this Grid. All the data sources above will be wrapped in a GRID
> # tag with this name.
> # default: unspecified
> gridname "MUMBAI"
> #
> #-------------------------------------------------------------------------------
> # The authority URL for this grid. Used by other gmetads to locate graphs
> # for our data sources. Generally points to a ganglia/
> # website on this machine.
> # default: "http://hostname/ganglia/",
> #   where hostname is the name of this machine, as defined by gethostname().
> authority "http://aixmaster01/ganglia/"
> #
> #-------------------------------------------------------------------------------
> # List of machines this gmetad will share XML with. Localhost
> # is always trusted.
> # default: There is no default value
> trusted_hosts 127.0.0.1 mumbaiadm01.pfdc.net
> #
> #-------------------------------------------------------------------------------
> # If you want any host which connects to the gmetad XML to receive
> # data, then set this value to "on"
> # default: off
> all_trusted on
> #
> #-------------------------------------------------------------------------------
> # If you don't want gmetad to setuid then set this to off
> # default: on
> # setuid off
> #
> #----------------------------------------------------------------------------------
> # User gmetad will setuid to (defaults to "nobody")
> # default: "nobody"
> # setuid_username "nobody"
> #
> #-------------------------------------------------------------------------------
> # The port gmetad will answer requests for XML
> # default: 8651
> # xml_port 8651
> #
> #-------------------------------------------------------------------------------
> # The port gmetad will answer queries for XML. This facility allows
> # simple subtree and summation views of the XML tree.
> # default: 8652
> # interactive_port 8652
> #
> #-------------------------------------------------------------------------------
> # The number of threads answering XML requests
> # default: 4
> # server_threads 10
> #
> #-------------------------------------------------------------------------------
> # Where gmetad stores its round-robin databases
> # default: "/var/lib/ganglia/rrds"
> # rrd_rootdir "/some/other/place"
> After starting gmond at debug level 10, I'm getting the following errors:
>
> Unable to find the metric information for 'os_release'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_user'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_system'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_idle'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_nice'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_aidle'. Possible that the module has not been loaded.
> Unable to find the metric information for 'cpu_wio'. Possible that the module has not been loaded.
> Unable to find the metric information for 'load_one'. Possible that the module has not been loaded.
> Unable to find the metric information for 'load_five'. Possible that the module has not been loaded.
> Unable to find the metric information for 'load_fifteen'. Possible that the module has not been loaded.
> Unable to find the metric information for 'proc_run'. Possible that the module has not been loaded.
> Unable to find the metric information for 'proc_total'. Possible that the module has not been loaded.
> Unable to find the metric information for 'mem_free'. Possible that the module has not been loaded.
> Unable to find the metric information for 'mem_shared'. Possible that the module has not been loaded.
> Unable to find the metric information for 'mem_buffers'. Possible that the module has not been loaded.
> Unable to find the metric information for 'mem_cached'. Possible that the module has not been loaded.
> Unable to find the metric information for 'swap_free'. Possible that the module has not been loaded.
> Unable to find the metric information for 'bytes_out'. Possible that the module has not been loaded.
> Unable to find the metric information for 'bytes_in'. Possible that the module has not been loaded.
> Unable to find the metric information for 'pkts_in'. Possible that the module has not been loaded.
> Unable to find the metric information for 'pkts_out'. Possible that the module has not been loaded.
> Unable to find the metric information for 'disk_total'. Possible that the module has not been loaded.
> Unable to find the metric information for 'disk_free'. Possible that the module has not been loaded.
>         sending metadata for metric: heartbeat
>         sent message 'heartbeat' of length 48 with 0 errors
>         sending metadata for metric: location
>         sent message 'location' of length 48 with 0 errors
>         sending metadata for metric: gexec
>         sent message 'gexec' of length 48 with 0 errors
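> From the messages above, only the built-in heartbeat/location/gexec metadata
> is being sent, so the metric modules apparently never registered. A few checks
> that may help narrow it down (the paths below are assumptions for this host;
> the options are as documented for gmond 3.1.x):
>
>   ls -l /usr/lib/ganglia/mod*.so            # are the .so files where gmond looks for them?
>   gmond -m                                  # list the metrics the loaded modules provide
>   gmond -d 10 -c /etc/ganglia/gmond.conf    # watch the startup output for module load errors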
>
> I have used the same configuration on all servers in the environment, and they
> all work fine except this one.
> In the web front-end it says the server is up and running, but it does not show
> a graph; instead of a graph I get the following string:
> http://aixmaster01/ganglia/graph.php?g=load_report&z=large&c=MUMBAI%20Standalone%20Systems&h=aix01.pfdc.net&m=load_one&r=hour&s=descending&hc=4&mc=2&st=1293422441
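> On the gmetad side, one more thing worth checking for the missing graph is
> whether any RRD files are actually being written for this host under
> rrd_rootdir (default /var/lib/ganglia/rrds, per the config above), e.g.:
>
>   ls -R /var/lib/ganglia/rrds/ | grep -i aix01
>
> If nothing shows up there, graph.php has no data to draw from.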
> Regards,
>               Kiran Mantri
