Well, that question is better addressed to the ipmidirect maintainer.

However, I just tried two ipmidirect handlers and it worked.

        Anton Pak


On Thu, 20 Jan 2011 14:18:58 +0300, Mansuri, Wasim (NSN - IN/Bangalore)  
<[email protected]> wrote:

> Hello Anton,
>
> We are using 2.14 version of openhpi.
>
> We have reconfigured without the domain stanzas. With the new
> configuration we are able to get events from only one shelf manager
> (only from the first libipmidirect handler).
>
> Using the other method (running multiple instances of openhpid) works.
> However, we prefer a single instance, as we are expected to manage 3
> shelves.
>
> I have pasted my current configuration below.
>
> handler libipmidirect  {
>         entity_root = "{SYSTEM_CHASSIS,1}"
>         name = "lan"         # RMCP
>         addr = "130.130.1.100"   # Host name or IP address
>         port = "623"         # RMCP port
>         auth_type = "md5"   # none, md2, md5 or straight
>         auth_level = "admin" # operator or admin
>         username = "admin"
>         password = "admin"
>         IpmiConnectionTimeout = "5000"
>         AtcaConnectionTimeout = "1000"
>         MaxOutstanding = "3"
>         MCC8= "initial_discover poll_alive poll_dead"
>         MCCA= "initial_discover poll_alive poll_dead"
>         #MultipleDomains = "yes"
>         DomainTag = "1" # Used if MultipleDomains="yes"
>         #logflags = ""      # logging off
>         logflags = "file"
>         # info goes to the logfile and stdout
>         # the logfiles are log00.log, log01.log, ...
>         logfile = "/tmp/hpi1.log"
>         # if logfile_max is reached, replace the oldest one
>         logfile_max = "1"
> }
>
>
> handler libipmidirect {
>         entity_root = "{SYSTEM_CHASSIS,2}"
>         name = "lan"         # RMCP
>         addr = "130.129.1.100"   # Host name or IP address
>         port = "623"         # RMCP port
>         auth_type = "md5"   # none, md2, md5 or straight
>         auth_level = "admin" # operator or admin
>         username = "admin"
>         password = "admin"
>         IpmiConnectionTimeout = "5000"
>         AtcaConnectionTimeout = "1000"
>         MaxOutstanding = "3"
>         MCC8= "initial_discover poll_alive poll_dead"
>         MCCA= "initial_discover poll_alive poll_dead"
>         #MultipleDomains = "yes"
>         DomainTag = "2" # Used if MultipleDomains="yes"
>         #logflags = ""      # logging off
>         logflags = "file"
>         # info goes to the logfile and stdout
>         # the logfiles are log00.log, log01.log, ...
>         logfile = "/tmp/hpi2.log"
>         # if logfile_max is reached, replace the oldest one
>         logfile_max = "1"
> }
>
> With the above configuration we observed that openhpid tries to connect
> via the second handler only if the first IP address is unavailable or
> the user credentials are invalid.
>
>
> Thanks, Wasim
>
>
> -----Original Message-----
> From: ext Anton Pak [mailto:[email protected]]
> Sent: Thursday, January 20, 2011 2:47 PM
> To: [email protected]; Mansuri, Wasim (NSN -  
> IN/Bangalore)
> Subject: Re: [Openhpi-devel] About ShelfManager Events
>
> What OpenHPI version do you use?
> Since 2.10 the domain stanza is not supported in openhpi.conf.
> And there must be no number after the libipmidirect handler name.
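>
> For clarity, a minimal sketch of the corrected form (the addresses are
> taken from your stanzas; all other options omitted): no number after the
> handler name, and no domain stanzas anywhere in the file:
> -----------------------------------------------------------
> handler libipmidirect {
>         entity_root = "{SYSTEM_CHASSIS,1}"
>         name = "lan"         # RMCP
>         addr = "130.130.1.100"
>         port = "623"
> }
> handler libipmidirect {
>         entity_root = "{SYSTEM_CHASSIS,2}"
>         name = "lan"         # RMCP
>         addr = "130.129.1.100"
>         port = "623"
> }
> -----------------------------------------------------------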
>
>       Anton Pak
>
> On Thu, 20 Jan 2011 12:12:25 +0300, Mansuri, Wasim (NSN - IN/Bangalore)
> <[email protected]> wrote:
>
>> Hello Anton,
>>
>> When we use the configuration below in the /etc/openhpi/openhpi.conf
>> file for accessing multiple shelf managers, openhpid connects to only
>> one shelf manager. It does not connect to the second shelf manager.
>>
>> *********************/etc/openhpi/openhpi.conf********************************
>> domain 1{
>>       tag = "1"
>> }
>>
>> domain 2 {
>>        tag = "2"
>> }
>>
>>
>> handler libipmidirect 1{
>>         entity_root = "{SYSTEM_CHASSIS,1}"
>>         name = "lan"         # RMCP
>>         addr = "130.130.1.100"   # Host name or IP address
>>         port = "623"         # RMCP port
>>         auth_type = "md5"   # none, md2, md5 or straight
>>         auth_level = "admin" # operator or admin
>>         username = "admin"
>>         password = "admin"
>>         IpmiConnectionTimeout = "5000"
>>         AtcaConnectionTimeout = "1000"
>>         MaxOutstanding = "3"
>>         MCC8= "initial_discover poll_alive poll_dead"
>>         MCCA= "initial_discover poll_alive poll_dead"
>>         MultipleDomains = "yes"
>>         DomainTag = "1" # Used if MultipleDomains="yes"
>>         #logflags = ""      # logging off
>>         logflags = "file"
>>         # info goes to the logfile and stdout
>>         # the logfiles are log00.log, log01.log, ...
>>         logfile = "/tmp/hpi1.log"
>>         # if logfile_max is reached, replace the oldest one
>>         logfile_max = "1"
>> }
>>
>> handler libipmidirect 2{
>>         entity_root = "{SYSTEM_CHASSIS,2}"
>>         name = "lan"         # RMCP
>>         addr = "130.129.1.100"   # Host name or IP address
>>         port = "623"         # RMCP port
>>         auth_type = "md5"   # none, md2, md5 or straight
>>         auth_level = "admin" # operator or admin
>>         username = "admin"
>>         password = "admin"
>>         IpmiConnectionTimeout = "5000"
>>         AtcaConnectionTimeout = "1000"
>>         MaxOutstanding = "3"
>>         MCC8= "initial_discover poll_alive poll_dead"
>>         MCCA= "initial_discover poll_alive poll_dead"
>>         MultipleDomains = "yes"
>>         DomainTag = "2" # Used if MultipleDomains="yes"
>>         #logflags = ""      # logging off
>>         logflags = "file"
>>         # info goes to the logfile and stdout
>>         # the logfiles are log00.log, log01.log, ...
>>         logfile = "/tmp/hpi2.log"
>>         # if logfile_max is reached, replace the oldest one
>>         logfile_max = "1"
>> }
>> *********************end of /etc/openhpi/openhpi.conf********************************
>>
>> Do we need to make any configuration changes in the
>> /etc/openhpi/openhpiclient.conf file as well?
>>
>> Thanks, Wasim
>>
>> -----Original Message-----
>> From: ext Anton Pak [mailto:[email protected]]
>> Sent: Thursday, December 30, 2010 11:09 PM
>> To: [email protected]; Mansuri, Wasim (NSN -
>> IN/Bangalore)
>> Subject: Re: [Openhpi-devel] About ShelfManager Events
>>
>> Hello Wasim.
>>
>> The HPI-xTCA Mapping Spec utilizes only:
>> - Hot Swap Event - from the xTCA Hot Swap Sensor
>> - Sensor Event - from the rest of the IPMI Sensors
>> - Resource Event (Failed, Restored) - for Mx<->M7 xTCA Hot Swap Events
>>
>> Other event types are not supposed to come from the shelf manager.
>> However, some events of other types can come from the OpenHPI level.
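>>
>> For illustration, a minimal C sketch of a client event loop that handles
>> exactly these event types (names are from SaHpi.h; session setup and
>> error handling are simplified):
>> -----------------------------------------------------------
>> #include <stdio.h>
>> #include <SaHpi.h>
>>
>> int main(void)
>> {
>>     SaHpiSessionIdT sid;
>>     SaHpiEventT event;
>>     SaHpiRdrT rdr;
>>     SaHpiRptEntryT rpt;
>>
>>     /* open a session on the default domain, discover, subscribe */
>>     saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &sid, NULL);
>>     saHpiDiscover(sid);
>>     saHpiSubscribe(sid);
>>
>>     for (;;) {
>>         /* block until the next event arrives */
>>         SaErrorT rv = saHpiEventGet(sid, SAHPI_TIMEOUT_BLOCK,
>>                                     &event, &rdr, &rpt, NULL);
>>         if (rv != SA_OK)
>>             break;
>>         switch (event.EventType) {
>>         case SAHPI_ET_HOTSWAP:    /* from the xTCA Hot Swap Sensor */
>>             printf("hot swap event\n");
>>             break;
>>         case SAHPI_ET_SENSOR:     /* from the rest of the IPMI sensors */
>>             printf("sensor event\n");
>>             break;
>>         case SAHPI_ET_RESOURCE:   /* resource failed/restored */
>>             printf("resource event\n");
>>             break;
>>         default:                  /* generated at the OpenHPI level */
>>             printf("other event type %d\n", (int)event.EventType);
>>         }
>>     }
>>
>>     saHpiUnsubscribe(sid);
>>     saHpiSessionClose(sid);
>>     return 0;
>> }
>> -----------------------------------------------------------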
>>
>>      Anton Pak
>>
>> On Thu, 30 Dec 2010 13:23:55 +0300, Mansuri, Wasim (NSN - IN/Bangalore)
>> <[email protected]> wrote:
>>
>>>
>>> Background:
>>> We are implementing a module to receive and display events from the
>>> shelf manager using the OpenHPI interfaces.
>>>
>>>
>>> 1. Is any detailed event catalog available for the events received
>>> from the shelf manager?
>>>
>>> 2. Possible event types: will all event types listed in OpenHPI's
>>> SaHpi.h (SaHpiEventTypeT) appear in practice?
>>>
>>>
>>> Thanks & Regards,
>>> Wasim Mansuri
>>>
>>>
>>> -----Original Message-----
>>> From: Mansuri, Wasim (NSN - IN/Bangalore)
>>> Sent: Thursday, December 16, 2010 12:46 PM
>>> To: 'ext Anton Pak'; [email protected]
>>> Subject: RE: [Openhpi-devel] Configuring openhpid with more than one
>>> shelf manager
>>>
>>> Hello
>>>
>>> In our active-standby shelf manager configuration, the active shelf
>>> manager will have two active-active physical interfaces.
>>>
>>> 1. What should the openhpi configuration be for handling one shelf?
>>> Should both IP addresses be configured?
>>> 2. In case of a link failure (one of the IP addresses going down),
>>> does the openhpi daemon handle the failover transparently, or does the
>>> client application have to open a new session?
>>>
>>> Note: the client application uses the HPI library to manage the shelf
>>> manager.
>>>
>>>
>>> Thanks & Regards,
>>> Wasim Mansuri
>>>
>>>
>>> -----Original Message-----
>>> From: ext Anton Pak [mailto:[email protected]]
>>> Sent: Wednesday, December 15, 2010 11:09 PM
>>> To: [email protected]; Mansuri, Wasim (NSN -
>>> IN/Bangalore)
>>> Subject: Re: [Openhpi-devel] Configuring openhpid with more than one
>>> shelfmanager
>>>
>>> Yes, both ways are possible.
>>>
>>> 1) Several daemons on the same system:
>>> - assign them to different ports (env var OPENHPI_DAEMON_PORT or the
>>>   -p command-line argument)
>>> - assign different pid files for them (the -f command-line argument)
>>> - assign them to different uid_map files
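>>>
>>> For example, a minimal shell sketch (config paths, pid file locations,
>>> and the second port are illustrative; 4743 is the usual default daemon
>>> port, and the uid_map location is assumed to be settable via the
>>> OPENHPI_UID_MAP environment variable):
>>> -----------------------------------------------------------
>>> # one config file per shelf, each with its own handler stanza
>>> OPENHPI_UID_MAP=/var/lib/openhpi/uid_map.1 \
>>>     openhpid -c /etc/openhpi/shelf1.conf -p 4743 -f /var/run/openhpid1.pid
>>> OPENHPI_UID_MAP=/var/lib/openhpi/uid_map.2 \
>>>     openhpid -c /etc/openhpi/shelf2.conf -p 4744 -f /var/run/openhpid2.pid
>>> -----------------------------------------------------------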
>>>
>>> 2) Single daemon:
>>> Just configure several handler stanzas in openhpi.conf
>>> For example:
>>> -----------------------------------------------------------
>>> ...
>>> handler libipmidirect {
>>>          entity_root = "{SYSTEM_CHASSIS,7}"
>>>          name = "lan"         # RMCP
>>>          addr = "192.168.1.7"
>>>          port = "623"
>>> }
>>> handler libipmidirect {
>>>          entity_root = "{SYSTEM_CHASSIS,8}"
>>>          name = "lan"         # RMCP
>>>          addr = "192.168.1.8"
>>>          port = "623"
>>> }
>>> ...
>>> -----------------------------------------------------------
>>>
>>>     Anton Pak
>>>
>>>
>>> On Wed, 15 Dec 2010 11:23:55 +0300, Mansuri, Wasim (NSN - IN/Bangalore)
>>> <[email protected]> wrote:
>>>
>>>>
>>>> Hello All,
>>>>
>>>> I am new to this forum and recently started using openhpid to manage
>>>> one shelf. My requirement is to manage more than one shelf from the
>>>> rack mount.
>>>>
>>>> Is it possible to start more than one instance of the openhpid daemon
>>>> to manage all the shelves, or, the other way round, can we have one
>>>> openhpid instance configured to manage more than one shelf?
>>>>
>>>> Thanks & Regards,
>>>> Wasim Mansuri
>>>>