Andy -

I don't think there would be an objection from HP to deprecating the IPMI 
plug-in.  I would suggest that anyone on this mailing list who cares about the 
IPMI plugin should make their position known.

I would also suggest that if we decide to deprecate, we continue to release 
the IPMI plugin for a while, but mark the code, documentation, release notes, 
etc., to indicate that this plugin is deprecated.

--michael



> -----Original Message-----
> From: Andy Cress [mailto:[email protected]]
> Sent: Thursday, March 04, 2010 7:57 AM
> To: [email protected]; Bishop, Michael (ISB
> Linux/Telco)
> Subject: Re: [Openhpi-devel] openHPI plugins
>
>
> Good question.  That isn't my decision.
>
> Michael,
> Should the ipmi plugin be deprecated?  Are there any users of it that
> would object?
>
> Andy
>
> -----Original Message-----
> From: Kleber, Ulrich (NSN - DE/Munich) [mailto:[email protected]]
> Sent: Thursday, March 04, 2010 8:48 AM
> To: [email protected]; [email protected]
> Subject: Re: [Openhpi-devel] openHPI plugins
>
> Hi,
> thanks a lot, this explains it.
> Do you think the ipmi plugin could be deprecated?
> Cheers,
> Uli
>
> > -----Original Message-----
> > From: ext Andy Cress [mailto:[email protected]]
> > Sent: Thursday, March 04, 2010 2:43 PM
> > To: [email protected]; [email protected]
> > Subject: Re: [Openhpi-devel] openHPI plugins
> >
> > RE: difference between the ipmi and ipmidirect plugins
> > In the beginning, the two plugins were targeted at different segments
> > of IPMI servers (conventional and bladed), but now they are simply two
> > different approaches to the same goal.
> >
> > The ipmi plugin uses/requires OpenIPMI libraries in addition to openhpi
> > in order to talk to the OpenIPMI driver.  It has become rather stale
> > and didn't work at all for Intel IPMI servers the last time I tested
> > with it.  (bug 1565999 from 2006 is still Open)
> >
> > The ipmidirect plugin talks directly to the OpenIPMI driver, and works
> > fine for both conventional and bladed IPMI servers.  This is the choice
> > that we use.
> >
> > Andy
> >
> > -----Original Message-----
> > From: Kleber, Ulrich (NSN - DE/Munich) [mailto:[email protected]]
> > Sent: Thursday, March 04, 2010 3:33 AM
> > To: [email protected]; [email protected]
> > Subject: Re: [Openhpi-devel] openHPI
> >
> > Hi Lars,
> > I didn't see any reply to your email on the reflector, but I was
> > interested in the topic. I am not really an expert (yet) on the
> > plugin, but maybe together we can make progress on your topics.
> > See inline.
> > Cheers,
> > Uli
> >
> > > -----Original Message-----
> > > From: ext Lars Wetzel [mailto:[email protected]]
> > > Sent: Tuesday, February 09, 2010 11:24 AM
> > > To: [email protected]; [email protected]
> > > Subject: Re: [Openhpi-devel] openHPI
> > >
> > > Hi Ric,
> > >
> > > Yes, we debugged the ipmi events last week as you describe below.
> > > We could see that the Mx->M0->Mx events are missing, so I think it
> > > isn't a problem with the openhpi/ipmidirect plugin.
> > >
> > > But I want to take the opportunity to ask some short questions about
> > > the ipmidirect plugin before you leave the project. Maybe you can
> > > help me get a better understanding of the ipmidirect plugin's
> > > background. I hope I'm not too late.
> > >
> > > I know the openhpid/ipmidirect combination only from the code; I have
> > > never run it on a system.
> > > - I think openhpid/ipmidirect isn't designed to replace a Shelf
> > > Manager or ChassisManager in an xTCA system. Is this correct?
> >
> > I think this is true.
> > As far as I know, the ipmidirect plugin talks to the ChassisManager.
> > At least it worked when I configured the daemon that way.
> > However, I am still a bit confused about the difference between the
> > ipmi plugin and the ipmidirect plugin.
> >
> >
> > >
> > > - I also miss some things from the xTCA Mapping Specification
> > > (e.g. a SLOT resource, like SYSTEM_CHASSIS - XYZ_SLOT - XYZ_RESOURCE).
> > > Should the plugin be compliant with the SAF mapping specification?
> >
> > I think the plugin should be compliant, but to which mapping spec?
> > It looks like you are referring to the xTCA mapping spec, which has
> > not been published yet. As soon as the new mapping spec is published,
> > we should start working on a plugin compliant with it.
> >
> > Hope that helps,
> > Cheers,
> > Uli
> >
> >
> >
> > >
> > > Thanks in advance, and best wishes for the new job!
> > > Lars
> > >
> > > On Tuesday, 9. February 2010 02:47, Ric White wrote:
> > > > Hello Ayman,
> > > >
> > > > We tried to make the IPMI plug-ins as general purpose as possible,
> > > > but sometimes a bit of modification is required to make them play
> > > > well with specific hardware.
> > > >
> > > > To check if the daemon is receiving and processing the IPMI events
> > > > for hot swap as Lars suggested, you can add the following parameters
> > > > to the libipmidirect handler stanza in your openhpi.conf file:
> > > >
> > > >     logflags = "file"       # "" means logging off; also use "file stdout"
> > > >     logfile = "ipmidirect"  # log file name prefix; ${logfile}.log
> > > >     logfile_max = "10000"   # maximum log file size in kilobytes
> > > >
> > > > This will create an ipmidirect.log file that can be used to see
> > > > what is really going on.
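> > > >
> > > > For reference, a complete libipmidirect handler stanza might look
> > > > something like the sketch below.  The entity root, address, and
> > > > credentials are only placeholders for a LAN-attached shelf or
> > > > carrier manager; adjust them (or use name = "smi" for a local
> > > > interface) to match your own setup:
> > > >
> > > >     handler libipmidirect {
> > > >         entity_root = "{SYSTEM_CHASSIS,1}" # placeholder entity path
> > > >         name = "lan"            # RMCP/LAN connection to the manager
> > > >         addr = "192.168.1.100"  # placeholder manager IP address
> > > >         port = "623"
> > > >         auth_type = "md5"
> > > >         auth_level = "user"
> > > >         username = "admin"      # placeholder credentials
> > > >         password = "admin"
> > > >         logflags = "file"       # enable logging to a file
> > > >         logfile = "ipmidirect"  # log file name prefix
> > > >         logfile_max = "10000"   # maximum log file size in kilobytes
> > > >     }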
> > > >
> > > > On Tue, 2010-01-26 at 14:35 +0000, Ayman Daoud wrote:
> > > > > Dear openHPI representative,
> > > > >
> > > > > I have been working on a project to monitor the uTCA hardware
> > > > > using openHPI. I am using openhpi-2.14.1 with the ipmidirect
> > > > > plugin. During my work (using hpi_shell) I experienced the
> > > > > following questionable behaviours which might be bugs:
> > > > >
> > > > > 1. If a FRU is added to the chassis after the daemon has started,
> > > > > openHPI does not detect that FRU; no RPT entry is added to the RPT
> > > > > table for the newly added FRU, nor is an event generated to
> > > > > indicate the addition of the FRU. (This is different from
> > > > > extracting a FRU and reinstalling it, which is fine except for
> > > > > what is stated in #2 and #3.)
> > > > >
> > > > > 2. The SAHPI_HS_STATE_NOT_PRESENT event is not generated when the
> > > > > FRU is removed from the chassis.
> > > > >
> > > > > 3. When a FRU is removed from the chassis, the corresponding RPT
> > > > > entry is not deleted from the RPT table.
> > > > >
> > > > > 4. If the daemon starts with a FRU plugged into the chassis but
> > > > > the latch not pushed in, we see an RPT entry for the resource
> > > > > modelling the FRU, but when the latch is pushed in, no event is
> > > > > generated to indicate the transition from the INACTIVE (or
> > > > > INSERTION PENDING) state to the ACTIVE state.
> > > > >
> > > > > 5. saHpiHotSwapStateGet() returns an error when it is called for
> > > > > resources that have the FRU capability but not the HS capability.
> > > > > The HPI spec states that this function should be enabled for
> > > > > resources with the FRU capability.
> > > >
> > > > This (your #5) appears to be a defect in the daemon. It is checking
> > > > the resource's ResourceCapabilities flag, and if
> > > > SAHPI_CAPABILITY_MANAGED_HOTSWAP is not set, it will always return
> > > > SA_ERR_HPI_CAPABILITY. According to the B.03.01 Specification, it
> > > > should instead be checking that SAHPI_CAPABILITY_FRU is set. This
> > > > looks like a change in behavior between the B.02.01 and B.03.01 HPI
> > > > Specifications.
> > > >
> > > > I have submitted bug #2948127 for this.
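> > > >
> > > > To illustrate, the corrected test would be along these lines (a
> > > > rough sketch, not the daemon's actual code; the helper name
> > > > check_fru_capability is only illustrative):
> > > >
> > > >     #include <SaHpi.h>
> > > >
> > > >     /* Per B.03.01, saHpiHotSwapStateGet() should only require the
> > > >      * FRU capability, not SAHPI_CAPABILITY_MANAGED_HOTSWAP. */
> > > >     static SaErrorT check_fru_capability(const SaHpiRptEntryT *rpt)
> > > >     {
> > > >         if (!(rpt->ResourceCapabilities & SAHPI_CAPABILITY_FRU))
> > > >             return SA_ERR_HPI_CAPABILITY;
> > > >         return SA_OK;
> > > >     }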
> > > >
> > > > Best Regards,
> > > > Ric White
> > > >
> > > > > Any help with these issues will be greatly appreciated.
> > > > >
> > > > > Best Regards,
> > > > >
> > > > > Ayman Daoud
> > > > > Software Engineer
> > > > >
> > > > > Tecore Networks
> > > > >
> > > > > Phone: +1 410.872.6286
> > > > > Fax: +1 410.872.6010
> > > > > e-mail: [email protected]
> > > > >
> > > > >
> > > >
> > > >
> > >
> > > --
> > > -------------------------------
> > > Dipl. Wi.ing.
> > > Lars Wetzel
> > > Uttinger Str. 13
> > > 86938 Schondorf a. Ammersee
> > >
> > > Tel.: 0179-2096845
> > > Mail: [email protected]
> > >
> > > USt-IdNr.: DE181396006
> > >