Hi,

I tried OPENHPI_DAT_SAVE="YES", but now each time I restart the
daemon, previously asserted alarms show up as new alarms in the DAT:
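
For reference, this is the setting as I have it in openhpi.conf (the
exact path to the file depends on the installation):

```
# openhpi.conf -- persist the Domain Alarm Table across daemon restarts
OPENHPI_DAT_SAVE = "YES"
```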

OpenHPI> dat
(1) 2007-08-06 11:32:59 MAJOR - SENSOR 27/4097 0x10
(2) 2007-08-06 11:32:59 MAJOR - SENSOR 27/4097 0x1
No alarms in DAT.

After a restart this became:

OpenHPI> dat
(1) 2007-08-06 11:32:59 MAJOR - SENSOR 27/4097 0x10
(2) 2007-08-06 11:32:59 CRITICAL - SENSOR 0/0 0x0
(3) 2007-08-06 11:44:23 MAJOR - SENSOR 27/4097 0x10
(4) 2007-08-06 11:44:23 MAJOR - SENSOR 27/4097 0x1
No alarms in DAT.

What is alarm 2? And why is alarm 3 now a "duplicate" of alarm 1?
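
My guess at the duplication, as a toy Python sketch (this is purely my
own model of the mechanism, not actual OpenHPI code, and the
persistence format is assumed): if the restored entries are never
matched against the events re-asserted at startup, each condition ends
up in the table twice.

```python
# Toy model of a Domain Alarm Table (DAT) with persistence.
# Not OpenHPI code -- just an illustration of the suspected mechanism.

class Dat:
    def __init__(self):
        self.next_id = 1
        self.alarms = []          # list of (id, severity, description)

    def add(self, severity, desc):
        # New events always create new entries; nothing checks whether
        # an equivalent alarm is already present.
        self.alarms.append((self.next_id, severity, desc))
        self.next_id += 1

    def save(self):
        # What OPENHPI_DAT_SAVE="YES" would persist, conceptually.
        return list(self.alarms)

    @classmethod
    def restore(cls, saved):
        dat = cls()
        dat.alarms = list(saved)
        dat.next_id = max((i for i, _, _ in saved), default=0) + 1
        return dat

# First run: two sensor events assert alarms.
dat = Dat()
dat.add("MAJOR", "SENSOR 27/4097 0x10")
dat.add("MAJOR", "SENSOR 27/4097 0x1")
saved = dat.save()

# Restart: the saved alarms come back, then the plugin re-reads the
# same sensor states and re-asserts them as fresh events.
dat = Dat.restore(saved)
dat.add("MAJOR", "SENSOR 27/4097 0x10")
dat.add("MAJOR", "SENSOR 27/4097 0x1")

print(len(dat.alarms))  # 4 entries for 2 real conditions
```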

Regards,

/jonathan

On Fri, 2007-08-03 at 18:13 -0400, Renier Morales wrote:

> [EMAIL PROTECTED] wrote on 08/03/2007
> 02:37:15 PM:
> 
> > I have restarted the Blade 10 HPI daemon, and now the 74/75 alarms 
> > are gone. So did Blade 10 miss asserting the event? How did it miss 
> > it? That doesn't seem like a very reliable process.
> 
> Like I said, the events that indicated an alarm condition for 74/75
> must have happened during normal event pull, not during initial
> resource discovery. That means that those events came and went. The
> HPI instance saw them at the time and logged alarms appropriately.
> Restarting the daemon does not necessarily mean you will get the same
> alarms: the daemon must see again the events which prompted the alarm
> entries in the first place, and those may or may not be generated
> immediately after restart, depending on the plugin. Maybe the
> plugin only picks up sensor events when there is a change, and in
> that case you are seeing what you are seeing.
> The DAT is kept entirely in software, so the daemon must see a
> sequence of events in order to enter and remove alarms from the DAT.
> You can't really expect the DAT to be kept intact between restarts
> because the event sequence is broken, unless...
> 
> One thing you can do is turn on DAT persistence in the openhpi.conf
> file (OPENHPI_DAT_SAVE="YES"). This way the daemon can recall past
> alarm entries between restarts.
> 
> > 3.6 Synchronization
> > ...
> > There may be multiple HPI implementations present in a system, such 
> > as those offered by different vendors. HPI Users should not assume 
> > any synchronisation between different HPI implementations.
> > ...
> > 3.6.2 Multiple HPI implementations
> > ...
> > Any software layer using concurrent access via multiple HPI 
> > implementations should take appropriate care; for
> > example, by updating both RDR tables, reading most current sensor 
> > values, etc., if it is possible that anything
> > may have had an effect on the other HPI implementation.
> > 
> > By "HPI implementation" the spec means instance of the 
> > daemon/library or like in 3.6 different vendors?
> 
> 3.6 doesn't restrict what it means by different implementations to
> different vendors. It uses different vendors as an example of
> different implementations. In this text I read "HPI implementation"
> as a running HPI instance. Running multiple instances of one code
> base is the same as running multiple HPI implementations. Thus you
> are subject to the warning in 3.6.2.
> 
> > 
> > Because, from what I understand, "different HPI implementations"
> > means e.g. one from OpenHPI and one from Motorola, I assume that 
> > both processes won't talk to each other to sync (even if most 
> > commercial HPI implementations are based on OpenHPI :)). But to me, 
> > two instances of the OpenHPI daemon share the same HPI implementation.
> 
> No, because even though they run the same code, they are different
> running instances. Each manages its own set of domains, separate from
> the other. All clients connected to the same daemon are synchronized;
> the daemon is the single thing that ensures a consistent view to all
> HPI users (i.e. all connected sessions). Multiple instances do not
> share data, so you can't expect an aggregated, consistent view
> between them. You can't really extend the synchronization scope
> described in the spec to running one implementation multiple times,
> same vendor or not.
> 
> > 
> > 3.6.1 Synchronization Responsibilities 
> > It is the responsibility of an HPI implementation to ensure that a 
> > single, consistent view of the system and its
> > domains and resources is presented to all HPI Users. In the face of 
> > multiple concurrent changes, the HPI
> > implementation should attempt to make updates visible system-wide in
> > a timely manner; however, no specific
> > timing is specified.
> > 
> > An HPI implementation shall guarantee that each HPI operation on any
> > resource is atomic; that is, if two writes
> > are attempted to a resource (e.g. from different sessions), one 
> > write shall succeed entirely, and then the other
> > write shall succeed entirely. The order in which the writes occur 
> > may be undefined, depending on timing and
> > the locations of the sources of the writes.
> > 
> > Any one HPI implementation is required to report all events for all 
> > resources to all sessions which have subscribed to receive events 
> > and which have visibility of those resources.
> > 
> > Synchronization across sessions, for me, is just an example, one 
> > case of where sync should occur. But sessions from different 
> > daemons to the same ShMM should have a consistent view of the system.
> > 
> 
> Same as above.
> 
> --Renier
> 
> 
> 
> -------------------------------------------------------------------------
> This SF.net email is sponsored by: Splunk Inc.
> Still grepping through log files to find problems?  Stop.
> Now Search log events and configuration files using AJAX and a browser.
> Download your FREE copy of Splunk now >>  http://get.splunk.com/
> _______________________________________________ Openhpi-devel mailing list 
> [email protected] 
> https://lists.sourceforge.net/lists/listinfo/openhpi-devel