Hello Renier!

> Can you post your configuration file? and where is it located?

The configuration file is located in /home/sperl/nat (under my home
directory) and looks like this:

--- cut here --->

### OpenHPI configuration example file ###

#######
## FIRST section: declaration of global parameters like the following.

#OPENHPI_LOG_ON_SEV = "MINOR"
#OPENHPI_ON_EP = "{SYSTEM_CHASSIS,1}"
#OPENHPI_EVT_QUEUE_LIMIT = 10000
#OPENHPI_DEL_SIZE_LIMIT = 10000
#OPENHPI_DEL_SAVE = "NO"
#OPENHPI_DAT_SIZE_LIMIT = 0
#OPENHPI_DAT_USER_LIMIT = 0
#OPENHPI_DAT_SAVE = "NO"
#OPENHPI_PATH = "/usr/local/lib/openhpi:/usr/lib/openhpi"
#OPENHPI_VARPATH = "/usr/local/var/lib/openhpi"

## The default values for each have been selected in the example above (except
## for OPENHPI_PATH and OPENHPI_CONF. See below).
## No need to specify any one of them because the default will be used
## automatically. The library will also look for these as environment
## variables. Environment variables found that match a global parameter will
## override the corresponding parameter set in this configuration file.
##
## OPENHPI_LOG_ON_SEV sets the lowest severity level an event must meet to be
## logged in the domain event log. Possible values are (highest to lowest):
## "CRITICAL", "MAJOR", "MINOR", "INFORMATIONAL", "OK", and "DEBUG".
## OPENHPI_ON_EP sets the entity path on which the application is running.
## This entity path will be returned when saHpiResourceIdGet() is called.
## OPENHPI_EVT_QUEUE_LIMIT sets the maximum number of events that are allowed
## in the session's event queue. Default is 10000 events. Setting it to 0
## means unlimited.
## OPENHPI_DEL_SIZE_LIMIT sets the maximum size (in number of event log
## entries) for the domain event log. Default is 10000 log entries. Setting
## it to 0 means unlimited.
## OPENHPI_DEL_SAVE sets whether the domain event log will be persisted to
## disk or not. The event log is written to the OPENHPI_VARPATH directory.
## OPENHPI_DAT_SIZE_LIMIT sets the maximum size (in number of alarm entries)
## for the alarm table. The default 0 means unlimited.
## OPENHPI_DAT_USER_LIMIT sets the maximum number of user type alarm entries
## allowed in the alarm table. The default 0 means unlimited.
## OPENHPI_DAT_SAVE sets whether the domain alarm table will be persisted to
## disk or not. The alarm table is written to the OPENHPI_VARPATH directory.
## OPENHPI_PATH is a colon (:) delimited list of directories specifying
## the location of openhpi plugin libraries. The default is defined when the
## library is configured.
## OPENHPI_VARPATH is a directory to which certain openhpi data will be
## saved. The DEL (Domain Event Log), DAT (Domain Alarm Table), and UID
## (Unique IDs used for resources) mappings are saved to this directory.
## The default is set at compile time through the ./configure options.
#######

#######
## SECOND section: handler (instance) declaration with arguments understood
## by the plugin.

#############################################################################
##**CAUTION** System administrators have to make sure that entity paths are
## unique in a domain. To avoid entity paths conflicting among handlers, make
## sure the "entity_root" is unique for each handler definition, unless the
## plugin documentation says otherwise.
#############################################################################

## Strings are enclosed by "", numbers are not.

## Section for the simulator plugin
## You can load multiple copies of the simulator plugin but each
## copy must have a unique name.
#handler libsimulator {
#        entity_root = "{SYSTEM_CHASSIS,1}"
#        name = "simulator"
#}

## Section for ipmi plugin using SMI -- local interface
#handler libipmi        {
#       entity_root = "{SYSTEM_CHASSIS,2}"
#       name = "smi"
#       addr = 0
#}

## Section for ipmi plugin based on OpenIPMI:
#handler libipmi {
#       entity_root = "{SYSTEM_CHASSIS,3}"
#       name = "lan"
#       addr = "x.x.x.x"        #ipaddress
#       port = 999
#       auth_type = "straight"
#       auth_level= "user"
#       username = "joe"
#       password = "blow"
#}

## Section for BladeCenter snmp_bc plugin:
#handler libsnmp_bc {
#        entity_root = "{SYSTEM_CHASSIS,4}" # Required. BladeCenter chassis Entity Path.
#        host = "192.168.70.125"            # Required. BladeCenter Management Module IP address.
#        version = "1"                      # Required. SNMP protocol version (1|3).
#        community = "public"               # SNMP V1: Required. SNMP V1 community name.
#        security_name = "snmpv3_user"      # SNMP V3: Required. SNMP V3 user Login ID.
#        context_name  = ""                 # SNMP V3: Optional. Must match MM's "Context name" field, if defined.
#        security_level = "noAuthNoPriv"    # SNMP V3: Required. Security level (noAuthNoPriv|authNoPriv|authPriv).
#        passphrase = ""                    # SNMP V3: Authentication password. Required if security_level
#                                           # is authNoPriv or authPriv.
#        auth_type = ""                     # SNMP V3: Authentication password encoding (MD5|SHA). Required if
#                                           # security_level is authNoPriv or authPriv.
#        privacy_passwd = ""                # SNMP V3: Privacy password. Required if security_level is authPriv.
#        privacy_protocol = ""              # SNMP V3: Privacy password encoding (DES).
#                                           # Required if security_level is authPriv.
#                                           # If security_level is authPriv, DES encoding is assumed since there
#                                           # currently is no other privacy password encoding choice.
#        count_per_getbulk = "32"           # SNMP V3: Optional. SNMP_MSG_GETBULK commands can be used to increase
#                                           # performance. This variable sets the maximum OIDs allowable per
#                                           # MSG_GETBULK command.
#                                           # Positive values less than 16 default to "16", since values less than
#                                           # 16 don't necessarily increase performance. Too high of a value (> 40)
#                                           # can cause SNMP commands to BladeCenter to timeout. The default is "32".
#                                           # A value of "0" disables the use of the SNMP_MSG_GETBULK SNMP command.
#}

## Section for static simulator plugin:
## If openhpi is configured with
## ./configure --enable-simulator=static
## the dummy plugin is compiled in.
## It is possible to use simulator and libsimulator
## at the same time.
handler simulator {
         entity_root = "{SYSTEMk_CHASSIS,5}"
         name = "test"
}

## Section for ipmidirect plugin using SMI -- local interface
#handler libipmidirect {
#       entity_root = "{SYSTEM_CHASSIS,6}"
#       name = "smi"
#       addr = 0
#}

## Section for ipmidirect plugin using RMCP:
handler libipmidirect {
         entity_root = "{SYSTEM_CHASSIS,7}"
         name = "lan"         # RMCP
#        addr = "132.147.160.135"   # Host name or IP address: NMCH
        addr = "192.168.0.135"
        port = "623"         # RMCP port
         auth_type = "none"   # none, md2, md5 or straight
         auth_level = "admin" # operator or admin
         username = "arthur"
         password = "pieman"
         MultipleDomains = "no"  # "yes" creates a domain for this handler
         DomainTag = "Chassis 1" # Used if MultipleDomains="yes"
#        logflags = ""      # logging off
         logflags = "file"      # logging to file
         logfile = "/home/sperl/nat/openhpi"
         # if logfile_max is reached, replace the oldest file
         logfile_max = "0"
}

#######
## THIRD Section: Domains (Optional).
##
## Events are processed in domains that match the entity path of said event
## (see SaHpiEntityPathT in SaHpi.h). This is done by associating an entity
## path pattern (below) to a domain declaration and using that to match
## against the event's entity path.
## All events are still processed in the default domain, regardless of
## entity path.
##
## The '.' in the entity path patterns can be placed anywhere you would
## put an entity type or an entity location number in the entity path
## (an entity path in canonical form looks like this:
## "{GROUP,4}{RACK,1}{SYSTEM_CHASSIS,5}{SBC_BLADE,6}").
## Depending on where you put the '.', it means any entity type or any entity
## location number. You can also place a '*' where you would put a tuple (a
## tuple is an entity type/entity location pair: {RACK,8}).
## It means 0 or more tuples with any entity type and any location number.
## NOTE: entity types come from the SaHpiEntityTypeT enum defined in SaHpi.h
## without the SAHPI_ENT prefix.
##
## Example entity path patterns:
##
## {.,.} will match {RACK,4}, but not {RACK,4}{GROUP,2}
##
## {RACK,4}* will match {RACK,4}{GROUP,2}
##
## {RACK,.}* will match {RACK,4}{GROUP,2} also.
##
## {RACK,.}{GROUP,.} will match {RACK,4}{GROUP,2} also.
##
## *{SBC_BLADE,1} will not match {SYSTEM_CHASSIS,4}{SBC_BLADE,4}
##
## * will match anything.
##
## Example domain declarations:
#domain 1 {
#       # Required if peer_of is not used.
#       entity_pattern = "{SYSTEM_CHASSIS,1}*{SBC_BLADE,.}"
#       tag = "Blades on first chassis" # Optional
#       ai_timeout = 0 # Optional. -1 means BLOCK. 0 is the default.
#       ai_readonly = 1 # Optional. 1 means yes (default), 0 means no.
#}

#domain 2 {
#       entity_pattern = "*{SBC_BLADE,.}{FAN,.}"
#       tag = "All Blade Fans" # Optional
#       child_of = 1 # Optional. 0 means the default domain.
#}

#domain 3 {
#       tag = "All Blade Fans (Peer)"
#       peer_of = 2 # Optional. 0 means the default domain.
#}

<--- cut there ---
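
A side note on the (commented-out) OPENHPI_ON_EP parameter documented in the
first section above: as I read the comments, the entity path set there is
what an application gets back from saHpiResourceIdGet(). A minimal, untested
sketch of such a call, using only the standard SaHpi.h functions, would be:

/* Untested sketch: fetch the resource id that corresponds to the entity
 * path configured through OPENHPI_ON_EP (the entity the caller runs on). */
#include <stdio.h>
#include <SaHpi.h>

int main(void)
{
        SaHpiSessionIdT  sid;
        SaHpiResourceIdT rid;
        SaErrorT         rv;

        rv = saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &sid, NULL);
        if (rv != SA_OK) {
                fprintf(stderr, "saHpiSessionOpen failed: %d\n", rv);
                return 1;
        }

        rv = saHpiResourceIdGet(sid, &rid);
        if (rv == SA_OK)
                printf("Resource id for the OPENHPI_ON_EP entity: %u\n",
                       (unsigned)rid);
        else
                fprintf(stderr, "saHpiResourceIdGet failed: %d\n", rv);

        saHpiSessionClose(sid);
        return 0;
}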

I found out something interesting which I do not understand yet:

* When using openhpi 2.7.3 (the version I started from) and starting
   the daemon like this

   # openhpid -c /home/sperl/nat/openhpi.conf

   I get the "no handler defined in configuration" error. The daemon
   keeps running, but due to the lack of a valid handler it cannot
   transmit any messages to the MCH.

* When I start the daemon like this instead:

   # /home/sperl/nat/openhpi-2.7.3/openhpid/openhpid -c \
     /home/sperl/nat/openhpi.conf

   all defined plugin handlers are found and everything seems to
   work fine (the client sketch after this list is one way to check
   which resources show up). With some modifications I was even
   able to control the cooling units in our MTCA shelf.

* I cannot say anything yet about this behaviour with
   openhpi 2.9.2.
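
For reference, a quick way to check whether any handlers were really loaded
is to open an HPI session and walk the RPT. Something along these lines
should print one line per discovered resource (again an untested sketch
using only the standard SaHpi.h calls; an empty listing would mean that no
handler contributed any resources):

/* Untested sketch: list every resource the daemon discovered.
 * Link against the OpenHPI client library. */
#include <stdio.h>
#include <SaHpi.h>

int main(void)
{
        SaHpiSessionIdT sid;
        SaHpiEntryIdT   entry = SAHPI_FIRST_ENTRY;
        SaHpiEntryIdT   next;
        SaHpiRptEntryT  rpt;
        SaErrorT        rv;

        rv = saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &sid, NULL);
        if (rv != SA_OK) {
                fprintf(stderr, "saHpiSessionOpen failed: %d\n", rv);
                return 1;
        }

        saHpiDiscover(sid);   /* make sure discovery has run */

        while (entry != SAHPI_LAST_ENTRY &&
               saHpiRptEntryGet(sid, entry, &next, &rpt) == SA_OK) {
                printf("ResourceId %u  Tag '%.*s'\n",
                       (unsigned)rpt.ResourceId,
                       (int)rpt.ResourceTag.DataLength,
                       (const char *)rpt.ResourceTag.Data);
                entry = next;
        }

        saHpiSessionClose(sid);
        return 0;
}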

Have you seen something similar?

Greetings,
Stefan

> Saludos,
> 
>        --Renier
> 
> [EMAIL PROTECTED] wrote on 08/15/2007 09:26:32 AM:
> 
>  > Hello!
>  >
>  > All of a sudden I get the following error when starting the
>  > openhpi daemon:
>  >
>  >     openhpid: DEBUG: (init.c, 141, *Warning*: No handler definitions
>  >     found in config file. Check configuration file openhpi.conf and
>  >     previous messages)
>  >
>  > I even tried with a configuration of a colleague (where openhpi is
>  > working fine). Has anyone of you had this and knows how to solve
>  > this?
>  >
>  > Any help is appreciated!!! I have no more ideas what to look for.
>  > Thank you very much in advance.
>  >
>  > Stefan Sperling
>  >

