I doubt this.
There may be a System Interface (IPMI-based) between the payload system and
the IPMC/MMC.
And the IPMC/MMC responds as address 0x20 over this interface.
A vendor-specific OEM mechanism may provide information about the
IPMB-0/IPMB-L hardware address.
And even with knowledge of the hardware address it is hard to determine the
site type/site number for the slot.
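
For illustration, here is a minimal sketch of what such an OEM query could
look like from the payload side, using the standard Linux IPMI driver
(/dev/ipmi0). Only the driver interface is standard here; the OEM
NetFn/command pair (0x2e/0x01) and the response layout are hypothetical
placeholders for whatever a vendor might define:

    /* Query a hypothetical vendor OEM command for the IPMB-0 hardware
     * address from the payload, via the Linux IPMI driver. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/select.h>
    #include <unistd.h>
    #include <linux/ipmi.h>

    int main(void)
    {
        int fd = open("/dev/ipmi0", O_RDWR);
        if (fd < 0) { perror("open /dev/ipmi0"); return 1; }

        /* Send to the local IPMC/MMC over the System Interface,
         * where it responds as 0x20. */
        struct ipmi_system_interface_addr si;
        memset(&si, 0, sizeof(si));
        si.addr_type = IPMI_SYSTEM_INTERFACE_ADDR_TYPE;
        si.channel = IPMI_BMC_CHANNEL;

        struct ipmi_req req;
        memset(&req, 0, sizeof(req));
        req.addr = (unsigned char *)&si;
        req.addr_len = sizeof(si);
        req.msgid = 1;
        req.msg.netfn = 0x2e; /* OEM NetFn -- vendor-specific assumption */
        req.msg.cmd = 0x01;   /* hypothetical "get hw address" command  */
        if (ioctl(fd, IPMICTL_SEND_COMMAND, &req) < 0) {
            perror("send"); return 1;
        }

        /* Wait for the response message from the driver. */
        fd_set rset;
        FD_ZERO(&rset);
        FD_SET(fd, &rset);
        if (select(fd + 1, &rset, NULL, NULL, NULL) < 0) {
            perror("select"); return 1;
        }

        unsigned char data[256];
        struct ipmi_addr raddr;
        struct ipmi_recv recv;
        memset(&recv, 0, sizeof(recv));
        recv.addr = (unsigned char *)&raddr;
        recv.addr_len = sizeof(raddr);
        recv.msg.data = data;
        recv.msg.data_len = sizeof(data);
        if (ioctl(fd, IPMICTL_RECEIVE_MSG_TRUNC, &recv) < 0) {
            perror("receive"); return 1;
        }

        /* Assumed response layout: data[0] = completion code,
         * data[1] = IPMB-0 hardware address. */
        if (recv.msg.data_len >= 2 && data[0] == 0)
            printf("IPMB-0 hardware address: 0x%02x\n", data[1]);
        close(fd);
        return 0;
    }

And even then, the site type/site number problem above remains.
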
Anton Pak
On Wed, 11 Aug 2010 15:57:42 +0400, Thompson, Michael
<[email protected]> wrote:
> Can I assume that OpenHPI running on the Board/AMC can determine its
> Hardware Address from the IPMC/MMC and therefore know what slot it is in?
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]]
> Sent: Wednesday, August 11, 2010 4:39 AM
> To: [email protected]
> Subject: Re: [Openhpi-devel] More on multi-domain configurations in real
> setups
>
> Uli,
>
> Thank you for feedback.
>
> My comments:
>
> 1)
>
> With Setup #4 all resources will be visible via the domain on the Main
> OpenHPI.
> An HPI application will not connect to subordinate OpenHPI instances
> directly.
> I guess that is one of the drawbacks of Setup #4.
>
> For Setup #4 there will be the following resources:
>
> OpenHPI on Board 1 Payload:
> - Payload CPU - {CPU,1}
> - Payload Disk - {DISK,1}
>
> OpenHPI on AMC:
> - Payload CPU - {CPU,1}
> - Payload Memory - {MEMORY,1}
>
> Main OpenHPI:
> - Shelf - {CHASSIS,1}
> - Board 1 - {CHASSIS,1}{PHYS_SLOT,1}{BOARD,1}
> - Board 2 - {CHASSIS,1}{PHYS_SLOT,2}{BOARD,2}
> - AMC on Board 2 - {CHASSIS,1}{PHYS_SLOT,2}{BOARD,2}{AMC_SLOT,1}{AMC,1}
> - resources collected from subordinate OpenHPIs
> -- Payload CPU - {CHASSIS,1}{PHYS_SLOT,1}{BOARD,1}{CPU,1}
> -- Payload Disk - {CHASSIS,1}{PHYS_SLOT,1}{BOARD,1}{DISK,1}
> -- Payload CPU -
> {CHASSIS,1}{PHYS_SLOT,2}{BOARD,2}{AMC_SLOT,1}{AMC,1}{CPU,1}
> -- Payload Memory -
> {CHASSIS,1}{PHYS_SLOT,2}{BOARD,2}{AMC_SLOT,1}{AMC,1}{MEMORY,1}
>
> I guess it is a parent-child domain association.
> As for peer domains, the HPI spec says:
>
> ---------------
> Two domains, domain-A and domain-B, are "peers" if they are intended to
> provide an HPI User with access to the same set of resources
> ---------------
> Because peer domains are intended to provide access to a common set of
> resources, the same set of resources should be listed in the RPTs of both
> domains. The RPTs in each of the peer domains are separate, and, in
> particular, the RPT entries may be ordered differently in each peer
> domain. However, the ResourceId, ResourceInfo, ResourceEntity,
> ResourceCapabilities, and HotSwapCapabilities fields in the corresponding
> RPT entries for a common resource must be identical in each peer domain.
> Of these, the ResourceId field is guaranteed to be unique in each RPT
> entry in a single domain. Therefore, RPT Entries corresponding to common
> resources can be identified in peer domains by comparing the ResourceId
> fields of RPT entries in the peer domains
> --------------
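>
> In code, the matching rule from that passage would look roughly like the
> sketch below (assuming sid_a and sid_b are sessions opened on the two peer
> domains with saHpiSessionOpen() and discovery has run; error handling
> omitted):
>
>     #include <stdio.h>
>     #include <SaHpi.h>
>
>     /* List resources common to two peer domains by comparing
>      * ResourceIds, as the quoted spec text prescribes. */
>     static void list_common_resources(SaHpiSessionIdT sid_a,
>                                       SaHpiSessionIdT sid_b)
>     {
>         SaHpiRptEntryT a, b;
>         SaHpiEntryIdT id = SAHPI_FIRST_ENTRY, next;
>         while (id != SAHPI_LAST_ENTRY &&
>                saHpiRptEntryGet(sid_a, id, &next, &a) == SA_OK) {
>             if (saHpiRptEntryGetByResourceId(sid_b, a.ResourceId,
>                                              &b) == SA_OK)
>                 printf("common resource: ResourceId %u\n",
>                        (unsigned)a.ResourceId);
>             id = next;
>         }
>     }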
>
> I don't think we can use peer domains here.
> What do you say?
>
> 2) Very good point!
> And I suspect that a two-level hierarchy may not be enough for OpenHPI
> domains.
> It seems the final solution shall allow a tree setup of arbitrary depth.
>
>
> Anton Pak
>
>
>> Hi,
>> great discussion! I am very happy this topic is proceeding this way.
>>
>> Before we can evaluate the setups, I would like to throw in
>> two more inputs:
>>
>> 1. The idea of subsidiary OpenHPI instances in Setup #4 is very
>> similar to what HPI defines for peer domains. I haven't dug into
>> peer domains deeply enough yet, but as I understand them so far,
>> a domain needs to show all resources of a peer and forward any
>> commands related to those resources to the peer domain.
>>
>> 2. In addition to the setups in the slide set, we need to consider
>> configurations with multiple shelves (in xTCA, but similar in other
>> architectures). Multiple shelves can be handled with a single daemon
>> or with a daemon per shelf; see the configuration sketch below.
>> Dynamic reasons may lead to multiple daemons.
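>>
>> For the single-daemon case, a sketch of what this could look like in
>> openhpi.conf, with one handler instance per shelf and distinct entity
>> roots (the plugin choice and addresses are made-up placeholders, and
>> the exact parameter set differs per plugin):
>>
>>     handler libipmidirect {
>>         entity_root = "{SYSTEM_CHASSIS,1}"
>>         addr = "192.168.1.10"    # ShM of shelf 1, placeholder
>>     }
>>     handler libipmidirect {
>>         entity_root = "{SYSTEM_CHASSIS,2}"
>>         addr = "192.168.1.11"    # ShM of shelf 2, placeholder
>>     }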
>>
>> Cheers,
>> Uli
>>
>>
>>
>>
>>> -----Original Message-----
>>> From: ext Thompson, Michael [mailto:[email protected]]
>>> Sent: Tuesday, 10 August 2010 22:38
>>> To: [email protected]
>>> Subject: Re: [Openhpi-devel] More on multi-domain
>>> configurations in real setups
>>>
>>> One blade that I have worked with supports upgrading through
>>> the IPMB via the shelf manager. You can also load an IPMI
>>> driver that connects to the KCS interface to the IPMC and
>>> upgrade from the payload processor. It is very unlikely that
>>> a user would try to do both at the same time.
>>>
>>> The new work in the HPM.x group will provide a mechanism for
>>> the shelf manager to determine whether an IPMC or MMC is
>>> capable of supporting an Ethernet connection for firmware upgrades.
>>>
>>> Maybe OpenHPI FUMIs could be enhanced to take advantage of
>>> this new capability. Then an Upgrade Agent could make an
>>> informed decision about which connection path would be best.
>>>
>>> -----Original Message-----
>>> From: [email protected] [mailto:[email protected]]
>>> Sent: Tuesday, August 10, 2010 4:25 PM
>>> To: [email protected]
>>> Subject: Re: [Openhpi-devel] More on multi-domain
>>> configurations in real setups
>>>
>>> Yes, the idea is that the base library adds an entity root to all
>>> HPI resources and instruments from the domain.
>>> So there is no need for the subordinate OpenHPI to know its full
>>> entity path.
>>> Knowledge of the Board/AMC-relative entity path is enough.
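>>>
>>> A minimal sketch of that prepending step (illustrative only; I
>>> believe OpenHPI's epath utilities already provide a helper for this):
>>>
>>>     #include <string.h>
>>>     #include <SaHpi.h>
>>>
>>>     /* Append `root` (the board's path as seen by the main domain)
>>>      * to `rel` (the Board/AMC-relative path reported by the
>>>      * subordinate).  Entry[0] is the leaf; a path is terminated
>>>      * by SAHPI_ENT_ROOT.  Truncation handling omitted. */
>>>     static void concat_ep(SaHpiEntityPathT *full,
>>>                           const SaHpiEntityPathT *rel,
>>>                           const SaHpiEntityPathT *root)
>>>     {
>>>         int i = 0, j;
>>>         memset(full, 0, sizeof(*full));
>>>         for (j = 0; j < SAHPI_MAX_ENTITY_PATH &&
>>>              rel->Entry[j].EntityType != SAHPI_ENT_ROOT; ++j)
>>>             full->Entry[i++] = rel->Entry[j];
>>>         for (j = 0; i < SAHPI_MAX_ENTITY_PATH; ++j) {
>>>             full->Entry[i] = root->Entry[j];
>>>             if (root->Entry[j].EntityType == SAHPI_ENT_ROOT)
>>>                 break;
>>>             ++i;
>>>         }
>>>     }
>>>
>>> E.g. rel = {CPU,1} plus root = {BOARD,1}{PHYS_SLOT,1}{CHASSIS,1}
>>> gives the full path {CHASSIS,1}{PHYS_SLOT,1}{BOARD,1}{CPU,1} from
>>> the Setup #4 listing.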
>>>
>>> As for the FUMIs, my understanding of coordination is:
>>> - the entity paths for the two FUMIs should be the same
>>> - if an upgrade is performed via FUMI #1, then FUMI #2
>>> should refuse upgrade actions or should indicate that an
>>> upgrade is currently undesirable.
>>> I don't see a good generic way to resolve this problem.
>>> By the way, how does the board vendor cope with this problem
>>> (providing more than one upgrade path)?
>>> It seems it is not an HPI problem by nature.
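>>>
>>> To make the coordination idea concrete, a rough sketch (all names
>>> hypothetical; this is not an existing OpenHPI interface):
>>>
>>>     #include <stdbool.h>
>>>
>>>     /* Hypothetical upgrade lock shared by all FUMIs that point at
>>>      * the same entity path.  A start request first takes the lock;
>>>      * if the peer FUMI holds it, the request is refused. */
>>>     typedef struct {
>>>         bool busy;        /* upgrade in progress via some path  */
>>>         int  owner_fumi;  /* which FUMI instance holds the lock */
>>>     } upgrade_lock_t;
>>>
>>>     static int fumi_start_upgrade(upgrade_lock_t *lock, int fumi_num)
>>>     {
>>>         if (lock->busy && lock->owner_fumi != fumi_num)
>>>             return -1;    /* refuse: peer path is upgrading */
>>>         lock->busy = true;
>>>         lock->owner_fumi = fumi_num;
>>>         /* ... perform the upgrade via this path ... */
>>>         return 0;
>>>     }
>>>
>>> The hard part is exactly where such a lock would live when the two
>>> paths go through different agents (IPMC vs. payload), which is why
>>> I don't see a generic solution.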
>>>
>>> Anton Pak
>>>
>>>
>>> > Setup #3 would work nicely if the OpenHPI library provided a
>>> > coordinated view of the multiple domains by matching the
>>> > HPI entity paths.
>>> >
>>> > How does the subordinate OpenHPI on a blade or module know
>>> > its complete
>>> > entity path?
>>> >
>>> > I think that you could have duplicate payload FUMIs.
>>> > Something like a
>>> > payload BIOS could be upgraded through the IPMC and through
>>> > the payload
>>> > processor. I have also seen boards where the IPMC firmware could be
>>> > upgraded through the IPMB and from the payload processor.
>>> >
>>> > How would the OpenHPI library provide a coordinated view of
>>> > a single FUMI
>>> > that has more than one upgrade path?
>>> >
>>> > -----Original Message-----
>>> > From: [email protected] [mailto:[email protected]]
>>> > Sent: Tuesday, August 10, 2010 3:28 PM
>>> > To: [email protected]
>>> > Subject: Re: [Openhpi-devel] More on multi-domain
>>> > configurations in real
>>> > setups
>>> >
>>> > Michael,
>>> >
>>> > I am inclined to Setup #3:
>>> > - there is no need for the main HPI domain to proxy all HPI traffic
>>> > (proxying may reduce performance)
>>> > - fewer code changes are needed on the daemon/plug-in level
>>> > - the code changes are more generic
>>> >
>>> > For Setup #4 there will be code duplication for each plug-in.
>>> > Also we may miss the Domain Alarm Table and Domain Event Log for
>>> > the subordinate OpenHPI. However, I don't think that is a big problem.
>>> >
>>> > As for the resources and instruments of the subordinate OpenHPI:
>>> > I don't think there will be duplicates.
>>> > My current thinking is that the subordinate OpenHPI will provide
>>> > payload-specific resources and instruments.
>>> > They are not visible on the IPMI level now.
>>> > But for coordination, resources and instruments of the
>>> > subordinate OpenHPI
>>> > shall have an HPI entity path containing the board's HPI entity path.
>>> >
>>> >
>>> > Anton Pak
>>> >
>>> >> I think that Setup #4 with a coordinated view of the
>>> >> entities would be
>>> >> the easiest to manage.
>>> >>
>>> >> When a board/module with a Subordinate OpenHPI goes into
>>> >> the M4 state,
>>> >> there may be duplicate paths to the same resources or
>>> >> instruments. The
>>> >> Main OpenHPI on the ShM should filter the duplicate resources and
>>> >> instruments to avoid confusion.
>>> >>
>>> >> Duplicate resources and instruments on the Subordinate
>>> >> OpenHPI may have a
>>> >> higher performance connection than those connected
>>> >> directly to the Main
>>> >> OpenHPI on the ShM. It would be desirable if the higher performance
>>> >> connection were used when the board/module is in the M4 state.
>>> >>
>>> >> Your diagram on page 1 shows 3 possible connections to a
>>> >> Payload FUMI on a
>>> >> board/module: through the ShM's IPMB, through the Ethernet
>>> >> connection to
>>> >> the IPMC/MMC, and through the payload's Ethernet
>>> >> connection. The Main
>>> >> OpenHPI on the ShM should be able to determine that there
>>> >> are multiple
>>> >> connections to a FUMI and filter all but the highest performance
>>> >> connection. If the board/module changed from the M4 state to the M1
>>> >> state,
>>> >> the Main OpenHPI on the ShM should use the next highest performance
>>> >> connection to the FUMI. I am not sure if the detailed
>>> >> information about
>>> >> the specific connection to the FUMI would be of value to a System
>>> >> Manager.
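>>> >>
>>> >> As a rough sketch of that filtering rule (everything here is
>>> >> hypothetical, not an existing OpenHPI structure):
>>> >>
>>> >>     #include <stdbool.h>
>>> >>     #include <stddef.h>
>>> >>
>>> >>     /* Hypothetical descriptor for one upgrade path to a FUMI. */
>>> >>     typedef struct {
>>> >>         const char *name;  /* "payload-eth", "ipmc-eth", "ipmb" */
>>> >>         int  performance;  /* higher = faster, assumed ranking  */
>>> >>         bool needs_m4;     /* usable only while board is in M4  */
>>> >>     } fumi_conn_t;
>>> >>
>>> >>     /* Keep only the best connection usable in the current
>>> >>      * hot-swap state; re-evaluate on M-state changes. */
>>> >>     static const fumi_conn_t *
>>> >>     pick_conn(const fumi_conn_t *c, int n, bool in_m4)
>>> >>     {
>>> >>         const fumi_conn_t *best = NULL;
>>> >>         for (int i = 0; i < n; ++i) {
>>> >>             if (c[i].needs_m4 && !in_m4)
>>> >>                 continue;
>>> >>             if (!best || c[i].performance > best->performance)
>>> >>                 best = &c[i];
>>> >>         }
>>> >>         return best;
>>> >>     }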
>>> >>
>>> >>
>>> >> -----Original Message-----
>>> >> From: Anton Pak [mailto:[email protected]]
>>> >> Sent: Tuesday, August 10, 2010 1:29 PM
>>> >> To: OpenHPI-devel
>>> >> Subject: [Openhpi-devel] More on multi-domain
>>> configurations in real
>>> >> setups
>>> >>
>>> >> Hello!
>>> >>
>>> >> I was thinking about the multi-domain stuff and how it maps
>>> >> to real
>>> >> setups.
>>> >> It is quite possible that a system will have different
>>> >> OpenHPI and
>>> >> other HPI
>>> >> service instances running at the same time. They all shall provide a
>>> >> coordinated view of the system.
>>> >> Static configuration is an option, but a painful and complex one.
>>> >>
>>> >> It seems there shall be a generic solution that allows different
>>> >> vendors to
>>> >> provide vendor-specific management aspects in HPI without exposing
>>> >> source code.
>>> >> If we lay out a well-known way, the integration process will
>>> >> be much easier.
>>> >>
>>> >> I prepared a small presentation.
>>> >> An xTCA system was used as an example, but I think it is applicable
>>> >> to a wider set of systems.
>>> >>
>>> >> Please review and share your thoughts.
>>> >>
>>> >> I hope Bryan will not kill me for the big attachment. :)
>>> >>
>>> >> Anton Pak
>>> >>
>>> >
>>>
>>
>