[ https://issues.apache.org/jira/browse/PROTON-2325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17268014#comment-17268014 ]

Koen commented on PROTON-2325:
------------------------------

Hi Jiri,

Thanks for your answer. Unfortunately I don't have a minimal example that 
reproduces this issue. It's also very uncommon and doesn't happen often, which 
makes it hard for me to actually track down the bug. I'm currently upgrading to 
a newer Qpid version, which has some issues of its own. I still have to fix 
those, so I'll come back to you if the problem still exists in the newer version.

 

> Crash when closing application using Qpid Proton Cpp 0.25.0
> -----------------------------------------------------------
>
>                 Key: PROTON-2325
>                 URL: https://issues.apache.org/jira/browse/PROTON-2325
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: cpp-binding
>    Affects Versions: proton-c-0.25.0
>            Reporter: Koen
>            Priority: Major
>
> Hello,
> I'm having a problem with Qpid-proton-cpp version 0.25.0 and was wondering if 
> it has already been solved in a newer version.
> When I shut down the application, it sometimes crashes (about 1 in 40 
> shutdowns) and generates the following stacktrace:
>  
>  
> {code:java}
> #0 0x00000000 in ?? ()
> #1 0xf325203e in pn_class_decref (clazz=0xee100fb0, object=0xee100738) at 
> /opt/jenkins_home/workspace/Qpid_release_build/qpid-proton-src/c/src/core/object/object.c:91
> #2 0xf32522ce in pn_decref (object=0xee100738) at 
> /opt/jenkins_home/workspace/Qpid_release_build/qpid-proton-src/c/src/core/object/object.c:253
> #3 0xf3ee0b50 in proton::internal::pn_ptr_base::decref (p=0xee100738) at 
> /opt/jenkins_home/workspace/Qpid_release_build/qpid-proton-src/cpp/src/object.cpp:31
> #4 0x08eca58e in proton::internal::pn_ptr<pn_connection_t>::~pn_ptr 
> (this=0xaf5eeac, __in_chrg=<optimized out>) at 
> /opt/qpid-itr/include/proton/internal/object.hpp:55
> #5 0x08ec7004 in proton::internal::object<pn_connection_t>::~object 
> (this=0xaf5eeac, __in_chrg=<optimized out>) at 
> /opt/qpid-itr/include/proton/internal/object.hpp:86
> #6 0x08ede3be in proton::connection::~connection (this=0xaf5eea8, 
> __in_chrg=<optimized out>) at /opt/qpid-itr/include/proton/connection.hpp:45
> #7 REST OF THE STACKTRACE
> {code}
>  
> I looked through my own code and didn't find anything that would cause this 
> issue, so I was wondering if this stacktrace is known and fixed in a newer 
> version of Qpid.
> I ran my application through valgrind, but valgrind didn't find the memory 
> problem. That could be because the crash doesn't happen every time.
> It looks like a double free (pni_head_t) happening inside Qpid, although I'm 
> not entirely sure.
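> For what it's worth, the kind of lifetime pattern that goes through exactly 
> the destructor path in frames #4-#6 above looks like the sketch below. This 
> is a hypothetical reduction (the handler, the address and the whole scenario 
> are made up, it is not code from my application), only to show where the 
> ~connection() in the trace comes from; I don't know whether it reproduces 
> the crash:
> {code:cpp}
> #include <proton/connection.hpp>
> #include <proton/container.hpp>
> #include <proton/messaging_handler.hpp>
>
> class handler : public proton::messaging_handler {
>   public:
>     // Copying the connection bumps the refcount on the underlying
>     // pn_connection_t; the matching pn_decref happens in ~connection(),
>     // i.e. frames #4-#6 of the stacktrace above.
>     proton::connection conn_;
>
>     void on_container_start(proton::container &c) override {
>         c.connect("amqp://localhost:5672");  // placeholder address
>     }
>     void on_connection_open(proton::connection &c) override {
>         conn_ = c;   // the handle now outlives the event callback
>         c.close();
>     }
> };
>
> int main() {
>     handler h;
>     proton::container(h).run();
>     // 'h' (and with it conn_) is destroyed here, after run() has returned
>     // and the container has shut down. If the pn_connection_t were already
>     // freed at that point, the pn_decref in ~connection() would read a stale
>     // class pointer, which could explain the jump to address 0 in frame #0.
>     return 0;
> }
> {code}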
> I was also curious whether the number of items in this list should go down 
> every time pni_free_children frees a child (a small standalone check for 
> this is sketched after the output below):
> {code:java}
> Function:
>  static void pni_free_children(pn_list_t *children, pn_list_t *freed)
>  while (pn_list_size(children) > 0) {
>    // added a printf here that prints the children pointer and its list size:
>    //   pointer   = children
>    //   list size = pn_list_size(children)
> Output:
>  children = 0xd039d4d8, list size = 1
>  children = 0xedc00a50, list size = 2568
>  children = 0xedc0c938, list size = 1
>  children = 0xedc00a50, list size = 2567
>  children = 0xedc0f890, list size = 18
>  children = 0xedc0f890, list size = 17
>  children = 0xedc0f890, list size = 17 <== this should be 16, right?
> Received signal: Segmentation fault (11)
>  ____________________ STACKTRACE:
>  #1 ip=0x0000000008a5f37f sp=0x00000000ef5f0d70 backtrace::SignalHandler(int) 
> + 0x7f
>  #2 ip=0x00000000f76e2410 sp=0x00000000ef5f0d80 + 0x7f
>  #3 ip=0x00000000edc01858 sp=0x00000000ef5f135c + 0x7f
>  #4 ip=0x000000006e657264 sp=0x00000000f34512bc + 0x7f
> {code}
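> To double-check my expectation that the size has to shrink on every 
> iteration, something like the standalone sketch below should do. It assumes 
> the core object header (proton/object.h) is reachable, e.g. when building 
> inside the proton source tree, and it is not the real pni_free_children, 
> just the same drain-loop shape with the printf I added:
> {code:cpp}
> #include <proton/object.h>
> #include <cstdio>
>
> int main() {
>     // PN_OBJECT makes the list take its own reference on every child.
>     pn_list_t *children = pn_list(PN_OBJECT, 0);
>     for (int i = 0; i < 3; ++i) {
>         pn_string_t *child = pn_string("child");
>         pn_list_add(children, child);  // the list now holds a reference
>         pn_decref(child);              // drop the creation reference
>     }
>
>     // Same shape as the loop in pni_free_children: every pass removes one
>     // child, so the size printed here has to go 3, 2, 1. A size that
>     // repeats (like the 17, 17 above) would mean a child was not actually
>     // taken off the list.
>     while (pn_list_size(children) > 0) {
>         void *child = pn_list_get(children, pn_list_size(children) - 1);
>         std::printf("children = %p, list size = %zu\n",
>                     (void *)children, pn_list_size(children));
>         pn_list_remove(children, child);  // drops the list's reference
>     }
>     pn_free(children);
>     return 0;
> }
> {code}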
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
