Michael,

I am inclined to Setup #3:
- there is no need for the main HPI domain to proxy all HPI traffic (proxying may reduce performance);
- fewer code changes are needed at the daemon/plug-in level;
- the code changes are more generic.

With Setup #4 there will be code duplication for each plug-in. We may also lose the Domain Alarm Table and Domain Event Log of the subordinate OpenHPI, though I don't think that is a big problem.

As for the resources and instruments of the subordinate OpenHPI, I don't think there will be duplicates. My current thinking is that the subordinate OpenHPI will provide payload-specific resources and instruments that are not visible at the IPMI level today. For coordination, however, the resources and instruments of the subordinate OpenHPI shall have an HPI entity path that contains the board's HPI entity path, as in the sketch below.
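To make the containment concrete, here is a minimal sketch of the path composition I have in mind. It is illustrative only: append_board_path() is a made-up name, not an existing API (if I remember correctly, the oh_concat_ep() helper in OpenHPI's utils does roughly the same job). The idea is that the board's entity path is appended, toward the root, to each payload-local path reported by the subordinate OpenHPI:

    #include <SaHpi.h>

    /* Illustrative sketch: append the board's entity path after the
     * payload-local entries, so that resources and instruments of the
     * subordinate OpenHPI appear under the board entity. */
    static SaErrorT
    append_board_path(SaHpiEntityPathT *local, const SaHpiEntityPathT *board)
    {
        unsigned int i = 0, j;

        /* Find the end of the payload-local path (its SAHPI_ENT_ROOT entry). */
        while (i < SAHPI_MAX_ENTITY_PATH &&
               local->Entry[i].EntityType != SAHPI_ENT_ROOT) {
            ++i;
        }

        /* Copy the board path, including its terminating SAHPI_ENT_ROOT. */
        for (j = 0; i < SAHPI_MAX_ENTITY_PATH; ++i, ++j) {
            local->Entry[i] = board->Entry[j];
            if (board->Entry[j].EntityType == SAHPI_ENT_ROOT) {
                return SA_OK;
            }
        }
        return SA_ERR_HPI_OUT_OF_SPACE; /* combined path does not fit */
    }

So a payload NIC reported locally as {SYSTEM_BOARD,0}{ROOT} would show up in the coordinated view under the board's own path, and the system manager sees one consistent entity tree.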
Anton Pak

> I think that Setup #4 with a coordinated view of the entities would be the
> easiest to manage.
>
> When a board/module with a Subordinate OpenHPI goes into the M4 state,
> there may be duplicate paths to the same resources or instruments. The
> Main OpenHPI on the ShM should filter the duplicate resources and
> instruments to avoid confusion.
>
> Duplicate resources and instruments on the Subordinate OpenHPI may have a
> higher-performance connection than those connected directly to the Main
> OpenHPI on the ShM. It would be desirable if the higher-performance
> connection were used when the board/module is in the M4 state.
>
> Your diagram on page 1 shows 3 possible connections to a Payload FUMI on a
> board/module: through the ShM's IPMB, through the Ethernet connection to
> the IPMC/MMC, and through the payload's Ethernet connection. The Main
> OpenHPI on the ShM should be able to determine that there are multiple
> connections to a FUMI and filter all but the highest-performance
> connection. If the board/module changed from the M4 state to the M1 state,
> the Main OpenHPI on the ShM should use the next-highest-performance
> connection to the FUMI. I am not sure the detailed information about the
> specific connection to the FUMI would be of value to a System Manager.
>
> -----Original Message-----
> From: Anton Pak [mailto:[email protected]]
> Sent: Tuesday, August 10, 2010 1:29 PM
> To: OpenHPI-devel
> Subject: [Openhpi-devel] More on multi-domain configurations in real setups
>
> Hello!
>
> I was thinking about the multi-domain stuff and how it maps to real
> setups. It is quite possible that the system will have several OpenHPI
> and other HPI service instances running at the same time. They all shall
> provide a coordinated view of the system. Static configuration is an
> option, but a painful and complex one.
>
> It seems there shall be a generic solution that allows different vendors
> to provide vendor-specific management aspects in HPI without exposing
> source code. If we lay out a well-known path, the integration process
> will be much easier.
>
> I prepared a small presentation. An xTCA system was used as the example,
> but I think it is applicable to a wider set of systems.
>
> Please review and share your thoughts.
>
> I hope Bryan will not kill me for the big attachment.
> :)
>
> Anton Pak
