Thanks, Pierre.  I'll send the log file on (it's a little over 2 MB).
I've added some comments, tagged with 'JCB:'.

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pierre Sangouard
Sent: Friday, January 04, 2008 3:41 AM
To: [email protected]
Subject: Re: [Openhpi-devel] openhpi v 2.10.1 seg faults with libipmi
plugin

Could you send me the ipmidirect log file of the failing session directly?
FYI, I have the plugin running with an ATCA chassis and MPCBL0040 boards.

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:openhpi-devel-[EMAIL PROTECTED] On Behalf Of Jason Barrett
> Sent: Thursday, January 03, 2008 19:42
> To: [email protected]
> Subject: Re: [Openhpi-devel] openhpi v 2.10.1 seg faults with libipmi
> plugin
> 
> I got the IPMI plugin to work in an ATCA chassis... almost.  See my
> email from yesterday for one issue.  Also, the plugin discovered the
> blades OK, but had a hard time with the AMC cards installed in the blades.
> This particular chassis had two Intel MPCBL0040 blades with one AMC card
> each; the plugin found *one* AMC card, which the 'hpitree' and 'hpitop'
> commands placed directly off the system chassis.  The entity path was
> something like
> 
> {SYSTEM_CHASSIS, 3}{193, 6}
> 
> ...where the entity ID and instance are (I believe) OEM-specific.
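> 
> (As an aside, here is a minimal sketch, mine rather than anything from the
> plugin, of how a path like that maps onto the HPI SaHpiEntityPathT
> structure, assuming the standard SaHpi.h header; the entity type 193 is
> just the raw OEM value reported above:
> 
> #include <string.h>
> #include <stdio.h>
> #include <SaHpi.h>
> 
> int main(void)
> {
>     /* HPI stores entity paths leaf-first: Entry[0] is the most specific
>      * entity, and the path is terminated by SAHPI_ENT_ROOT. */
>     SaHpiEntityPathT ep;
>     memset(&ep, 0, sizeof(ep));
> 
>     ep.Entry[0].EntityType     = (SaHpiEntityTypeT)193;    /* OEM-specific, as reported */
>     ep.Entry[0].EntityLocation = 6;
>     ep.Entry[1].EntityType     = SAHPI_ENT_SYSTEM_CHASSIS;
>     ep.Entry[1].EntityLocation = 3;
>     ep.Entry[2].EntityType     = SAHPI_ENT_ROOT;           /* terminator */
>     ep.Entry[2].EntityLocation = 0;
> 
>     printf("leaf entity: type %d, location %u\n",
>            ep.Entry[0].EntityType, (unsigned)ep.Entry[0].EntityLocation);
>     return 0;
> }
> 
> One would normally expect an AMC to show up below its carrier blade rather
> than directly off the chassis, which is the part that looks odd to me.)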
> 
> I'm currently having an issue with the 2.10.1 version of 'ipmidirect'
> where, after discovering the satellite MCs for all the blades and reading
> their device SDR repositories, it sends another 'Reserve Device SDR
> Repository' command to the BMC for the blade's slave address (0x92 for
> example), but to LUN 1 instead of LUN 0.  I'm not sure why it does that,
> but it fails with completion code 0xcc, which the IPMI spec defines as
> "Invalid data field in request".  That is odd, because the command does
> not take any request data, and I don't think (from the log) that openhpid
> is trying to send any.
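> 
> (For reference, a small sketch of how that request is framed, per the IPMI
> spec rather than taken from the plugin source: Reserve Device SDR
> Repository is NetFn 0x04 (Sensor/Event), command 0x22, with no request
> data, so the only per-request variables are the target slave address and
> the rsLUN packed into the netFn/LUN byte:
> 
> #include <stdio.h>
> #include <stdint.h>
> 
> int main(void)
> {
>     const uint8_t rs_sa = 0x92;  /* blade satellite MC slave address */
>     const uint8_t netfn = 0x04;  /* Sensor/Event NetFn */
>     const uint8_t cmd   = 0x22;  /* Reserve Device SDR Repository */
> 
>     for (uint8_t lun = 0; lun <= 1; lun++) {
>         /* IPMB packs the network function and responder LUN into one byte. */
>         uint8_t netfn_lun = (uint8_t)((netfn << 2) | lun);
>         printf("rsSA 0x%02x  netFn/rsLUN 0x%02x  cmd 0x%02x  (no data bytes)\n",
>                rs_sa, netfn_lun, cmd);
>     }
>     return 0;
> }
> 
> Since the request carries no data bytes at all, the 0xcc completion code
> may really be the target MC objecting to the unexpected LUN 1 rather than
> to any data field, but that is only a guess.)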
> 
> Good to know that most people use 'ipmidirect', though.  I'd been
> planning to use whichever one I could get working soonest and best.  How
> widespread is the use of OpenHPI in general, and this plugin specifically,
> in the telecom industry?  Does anyone know?  Just curious.
> 
> Jason
> 
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Bryan Sutula
> Sent: Thursday, January 03, 2008 1:31 PM
> To: [email protected]
> Subject: Re: [Openhpi-devel] openhpi v 2.10.1 seg faults with libipmi
> plugin
> 
> I haven't gotten the IPMI plugin to work with ATCA equipment here either.
> Most people are using the ipmidirect plugin for ATCA.  It's on my
> longer-term list to figure out why it doesn't work and maybe fix it.  If
> you have time and interest to see what's wrong, I'd appreciate any patches
> in this area.
> 
> Thanks,
> Bryan
> 
> On Thu, 2008-01-03 at 10:02 +0530, Anand S Katti wrote:
> > Hi Bryan,
> >
> > The problem is resolved: I had multiple copies of the HPI library and
> > was linking against the older version.
> >
> > But it still doesn't discover the ATCA blades.
> >
> > I'm running RHEL 4.0 with a 2.6 kernel, using our own Continuous
> > Computing 5U ATCA chassis.
> >
> > Thanks for the response,
> > Anand
> >
> > -----Original Message-----
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of Bryan Sutula
> > Sent: Thursday, December 27, 2007 12:52 AM
> > To: [email protected]
> > Subject: Re: [Openhpi-devel] openhpi v 2.10.1 seg faults with libipmi
> > plugin
> >
> > It would be helpful if you could provide a few more specifics:
> >
> > - What OS and/or distribution, and version of OS?
> > - What sort of hardware are you trying to access with OpenHPI?
> >
> > I will be mostly unavailable for the next week as well, but will try to
> > look at this when I'm back in the office next week.
> >
> > Thanks
> > Bryan Sutula
> >
> >
> > On Sat, 2007-12-22 at 20:58 +0530, Anand S Katti wrote:
> > > Hi
> > >
> > >
> > >
> > > I tried the libipmi plugin with openhpid, and I am seeing the following
> > > error with it.
> > >
> > > It creates 5 openhpid processes.  With hpi_shell I ran lsres and it
> > > doesn't list any resources either; the list is empty.
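> > >
> > > (As a cross-check, here is a minimal C client, written as an illustration
> > > rather than taken from OpenHPI, that opens a session and walks the RPT
> > > using the standard HPI calls; if the plugin discovered anything at all,
> > > the loop should print at least one entry:
> > >
> > > #include <stdio.h>
> > > #include <SaHpi.h>
> > >
> > > int main(void)
> > > {
> > >     SaHpiSessionIdT sid;
> > >     SaHpiEntryIdT   entry = SAHPI_FIRST_ENTRY, next;
> > >     SaHpiRptEntryT  rpt;
> > >
> > >     /* Open a session against the default domain and run discovery. */
> > >     if (saHpiSessionOpen(SAHPI_UNSPECIFIED_DOMAIN_ID, &sid, NULL) != SA_OK)
> > >         return 1;
> > >     saHpiDiscover(sid);
> > >
> > >     /* Walk the resource presence table and print each resource tag. */
> > >     while (entry != SAHPI_LAST_ENTRY &&
> > >            saHpiRptEntryGet(sid, entry, &next, &rpt) == SA_OK) {
> > >         printf("ResourceId %u: %.*s\n", (unsigned)rpt.ResourceId,
> > >                (int)rpt.ResourceTag.DataLength,
> > >                (const char *)rpt.ResourceTag.Data);
> > >         entry = next;
> > >     }
> > >
> > >     saHpiSessionClose(sid);
> > >     return 0;
> > > }
> > >
> > > An empty run here would point at the plugin/daemon side rather than at
> > > hpi_shell itself.)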
> > >
> > >
> > >
> > > The behavior is the same with the libOpenIPMI 1.4.20 and 2.0.13
> > > libraries as well.
> > >
> > >
> > >
> > > It would be great if anyone could tell me where I'm going wrong.
> > >
> > >
> > >
> > > PS: openhpid works flawlessly with the libipmidirect plugin.
> > >
> > >
> > >
> > > Thanks
> > >
> > > Anand
> > >
> > >
> > >
> > > GDB shows the process taking a segfault while I was stepping through accept():
> > >
> > > ----------------------------------------------------------------------
> > >
> > > 290                     if (stop_server) {
> > > (gdb)
> > > 294                     if (servinst->Accept()) {
> > > (gdb) s
> > > sstrmsock::Accept (this=0x8beef08) at strmsock.cpp:316
> > > 316             socklen_t sz = sizeof(addr);
> > > (gdb) n
> > > 318             if (!fOpenS) {
> > > (gdb) p fOpenS
> > > $1 = true
> > > (gdb) n
> > > 322             sz = sizeof (struct sockaddr);
> > > (gdb)
> > > 323             s = accept(ss, (struct sockaddr *) &addr, &sz);
> > > (gdb)
> > >
> > > Program received signal SIGSEGV, Segmentation fault.
> > > [Switching to Thread -1218585680 (LWP 5561)]
> > > 0x00000000 in ?? ()
> > > (gdb) bt
> > > #0  0x00000000 in ?? ()
> > > #1  0x00281ae3 in ipmi_lock (lock=0x8be0148) at locks.c:108
> > > #2  0x00cdefee in _ipmi_fru_lock (fru=0x8be0148) at fru.c:201
> > > #3  0x00cdfedd in fetch_got_timestamp (fru=0x8bef250, domain=0x8bea090,
> > >     err=0, timestamp=167772160) at fru.c:639
> > > #4  0x00d1472d in atca_fru_254_get_timestamp_done (domain=0x8bea090,
> > >     rspi=0x8be0148) at oem_atca.c:3228
> > > #5  0x00cca41b in deliver_rsp (domain=0x8be0148, rsp_handler=0x8bef3c0,
> > >     rspi=0x8bed508) at domain.c:428
> > > #6  0x00ccc778 in ll_rsp_handler (ipmi=0x8be6550, orspi=0x8bef058)
> > >     at domain.c:1869
> > > #7  0x00cca314 in ipmi_handle_rsp_item (ipmi=0x8be6550, rspi=0x8bef058,
> > >     rsp_handler=0x8be0148) at ipmi.c:1765
> > > #8  0x00d19cb9 in handle_payload (ipmi=0x8be6550, lan=0x8be6608, addr_num=0,
> > >     payload_type=146731104, tmsg=0xb75dd03e "\201???\037", payload_len=14)
> > >     at ipmi_lan.c:2994
> > > #9  0x00d1b2f8 in data_handler (fd=7, cb_data=0x8be9e60, id=0x8be9f18)
> > >     at ipmi_lan.c:3308
> > > #10 0x0027a09f in fd_handler (fd=7, data=0x8be0148) at posix_os_hnd.c:87
> > > #11 0x0027bacc in process_fds (sel=0x8be01f8, send_sig=0x8bef3c0,
> > >     thread_id=0, cb_data=0x0, timeout=0xb75dd358) at selector.c:638
> > > #12 0x0027c069 in sel_select (sel=0x8be01f8, send_sig=0, thread_id=0,
> > >     cb_data=0x0, timeout=0x0) at selector.c:740
> > > #13 0x0054761a in ipmi_get_event (hnd=0x8bdc6e8) at ipmi.c:538
> > > #14 0x0805beb6 in harvest_events_for_handler (h=0x8bdc720) at event.c:111
> > > #15 0x0805c0a5 in oh_harvest_events () at event.c:141
> > > #16 0x0805d663 in oh_evtget_thread_loop (data=0x0) at threaded.c:130
> > > #17 0x00218812 in g_static_private_free () from /usr/lib/libglib-2.0.so.0
> > > #18 0x005fc3ae in start_thread () from /lib/tls/libpthread.so.0
> > > #19 0x0047caee in clone () from /lib/tls/libc.so.6
> > > (gdb)
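> > >
> > > (One reading of frame #0: the jump to address 0x00000000 from ipmi_lock()
> > > looks like a NULL callback pointer being invoked, e.g. an OS-handler lock
> > > callback that was never registered.  The sketch below uses mock types of
> > > my own, not the real OpenIPMI structures, just to show the pattern and
> > > the kind of guard that avoids the crash:
> > >
> > > #include <stdio.h>
> > > #include <stddef.h>
> > >
> > > /* Mock callback-table lock, loosely modeled on the idea of an OS-handler
> > >  * lock; these are NOT the actual OpenIPMI definitions. */
> > > struct mock_lock {
> > >     void (*lock_cb)(struct mock_lock *l);   /* NULL if never registered */
> > > };
> > >
> > > static void take_lock(struct mock_lock *l)
> > > {
> > >     if (l->lock_cb == NULL) {
> > >         /* Without this guard, calling l->lock_cb(l) jumps to address 0,
> > >          * which is what a "#0  0x00000000 in ?? ()" frame looks like. */
> > >         fprintf(stderr, "lock callback not set\n");
> > >         return;
> > >     }
> > >     l->lock_cb(l);
> > > }
> > >
> > > int main(void)
> > > {
> > >     struct mock_lock l = { NULL };
> > >     take_lock(&l);   /* prints the warning instead of segfaulting */
> > >     return 0;
> > > }
> > >
> > > It may be worth checking which os_handler the plugin sets up and whether
> > > its locking callbacks are populated.)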
> > >
> > > Here is an extract of openhpi.conf for libipmi:
> > >
> > > OPENHPI_LOG_ON_SEV = "MINOR"
> > > OPENHPI_ON_EP = "{SYSTEM_CHASSIS,3}"
> > > OPENHPI_EVT_QUEUE_LIMIT = 10000
> > > OPENHPI_DEL_SIZE_LIMIT = 10000
> > > OPENHPI_DEL_SAVE = "NO"
> > > OPENHPI_DAT_SIZE_LIMIT = 0
> > > OPENHPI_DAT_USER_LIMIT = 0
> > > OPENHPI_DAT_SAVE = "NO"
> > > OPENHPI_PATH = "/usr/local/lib/openhpi:/usr/lib/openhpi"
> > > OPENHPI_VARPATH = "/usr/local/var/lib/openhpi"
> > >
> > > ## Section for ipmi plugin based on OpenIPMI:
> > >
> > > handler libipmi {
> > >        entity_root = "{SYSTEM_CHASSIS,3}"
> > >        name = "lan"
> > >        addr = "172.25.10.150"        #ipaddress
> > >        port = 623
> > >        auth_type = "none"
> > >        auth_level = "admin"
> > >        username = ""
> > >        password = ""
> > > }
>
> --
> Bryan Sutula <[EMAIL PROTECTED]>