The following HPI data should be actively synchronized to the peer HPI
daemon:

Sessions. Sessions opened on the active HPI daemon should persist across
failovers; the HPI user should be able to keep using the same SessionId.
An application that opened a session should see no noticeable difference
in HPI daemon operation (at most a small delay in command processing).

Event subscriptions. Subscriptions should remain in effect after a
failover, and session event queues should be synchronized.

Domain tag, resource tag, and resource severity.

Events and event logs (system-generated and user-added).

Alarms (alarm acknowledgements and user alarm deletions).

These HPI operations should also be atomic (they modify the HPI
database):
saHpiDiscover()
saHpiEventLogClear()
saHpiParmControl()

Regards,

/jonathan

On Wed, 2007-09-12 at 16:51 -0400, Audet, Jean-Michel wrote:

> Thanks, this confirms and clarifies a lot of points.
> 
>  
> 
> I do have more questions :o)
> 
>  
> 
>  
> 
> Let's say we have:
> 
>  
> 
> -         A shelf manager with IPMI service (in fact, there are 2
> shelf managers, but the second one is in standby mode)
> 
> -         2 blades, each running an openhpi daemon and each connected
> to the shelf manager to retrieve IPMI data
> 
> -         An SMS connected to OpenHPI with libopenhpi.so
> 
>  
> 
>  
> 
> Probably using a master/slave model, I think that both daemons should
> have their data in sync in order to be able to provide HPI service in
> case one of the HPI daemons disappears.  Right?  How hard do you think
> it would be to keep the necessary data in sync?  We would probably add
> a data pipe between the 2 daemons to transfer synchronization data and
> heartbeats.  Should we also sync some files (Domain Event Log, etc.)?
> 
>  
> 
>  What do you think about this?  I am trying to evaluate how much
> effort will be required on our side to add these features (so that I
> can get the go-ahead).
> 
>  
> 
> Thanks!
> 
> Jean-Michel Audet
> 
> Kontron Canada
> 
>  
> 
>  
> 
>                                    
> ______________________________________________________________________
> 
> From: [EMAIL PROTECTED] [mailto:openhpi-devel-
> [EMAIL PROTECTED] On behalf of Renier Morales
> Sent: Wednesday, September 12, 2007 4:29 PM
> To: [email protected]
> Subject: Re: [Openhpi-devel] Daemon Sync
> 
> 
>  
> 
> 
> [EMAIL PROTECTED] wrote on 09/12/2007
> 02:25:00 PM:
> 
> > I had seen the answer but maybe I am not sure to correctly 
> > understand the concept. 
> >   
> > Could you explain a little bit more the “domain connection
> manager”? 
> 
> What we have now as standard is libopenhpi.so and openhpid. Running
> openhpid creates a complete (monolithic?) HPI instance. All domains
> live within that space. By linking to libopenhpi.so, you get access to
> that instance over TCP.
> The change I was talking about is, instead of having openhpid house
> all of the domains, to have multiple openhpids, each representing one
> domain. So we break the openhpid HPI instance into pieces at the
> domain level.
> 
> Now we need to add logic to libopenhpi.so to know how to connect to
> multiple daemons (which are now single domains). That involves making
> libopenhpi.so map domain ids to ip/port numbers in order to
> communicate with the domains (each now a daemon). 
> 
> That logic is what I call the "domain connection manager", which would
> be transparent to the API user, except for maybe a configuration file
> which would have the domain id to ip/port map information, unless the
> mapping can be done differently. 
> 
> With that, I think that the domain relationships (e.g. peer) described
> in the spec become more useful and meaningful. 
> 
> >   
> > My understanding is that there will be 2 instances of OpenHPI running
> > on 2 different blades.  One of the instances will be connected to a 
> > domain and the second instance will have a peer connection to the 
> > same domain (same hardware).  Right? 
> 
> The idea is that the application linking to libopenhpi.so can be
> connected to 2 domains (i.e. daemons) at the same time. Each one of
> these daemons can be running anywhere on the network. Domains
> themselves can be peers of one another. Peer domains give you access
> to the same set of hardware resources. How these domains access
> hardware depends on the OpenHPI plugin they use. Peer domains don't
> necessarily have to be using the same plugin or same plugin
> configuration. One could be using a local-based plugin, another a
> network-based plugin to get to the same hardware. 
> 
> > Where will the “RAM” data storing all the information be?  I am not
> > sure I fully understand how this will work, but I am really
> > interested in more discussions.
> 
> The domain instances (daemon processes) hold the domain data. Some of
> this can be persisted to disk (e.g. domain alarm table, domain event
> log) in order to survive a domain restart. 
> 
> >   
> > We might be interested in contributing to OpenHPI to add this
> > functionality.  Is it something OpenHPI might be interested in?  Do
> > you think that OpenHPI developers might help doing and supporting
> > it?
> 
> Yes, we would be interested in a contribution like this. I'd be able
> to provide help on this. 
> 
> >   
> > By the way, do you have any other documentation than the HPI
> > manual?
> 
> I'm afraid not. The HPI specification and the OpenHPI manual are what
> we have for now...
> 
>         --Renier 
>  
> 
> 
> 
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Microsoft
> Defy all challenges. Microsoft(R) Visual Studio 2005.
> http://clk.atdmt.com/MRT/go/vse0120000070mrt/direct/01/
> _______________________________________________ Openhpi-devel mailing list 
> [email protected] 
> https://lists.sourceforge.net/lists/listinfo/openhpi-devel
