[
https://issues.apache.org/jira/browse/DISPATCH-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15712141#comment-15712141
]
Paolo Patierno commented on DISPATCH-583:
-----------------------------------------
I have just hit a segmentation fault with the latest compiled bits (built a few
minutes ago):
{noformat}
[0x7f42b402fdc0]:0 <- @detach(22) [handle=1, closed=true]
[0x7f42bc0247c0]:0 -> @detach(22) [handle=0, closed=true]
[0x7f42b402fdc0]:0 <- @detach(22) [handle=2, closed=true]
[0x7f42b402fdc0]:0 -> @detach(22) [handle=1, closed=true]
[0x7f42b402fdc0]:0 <- @detach(22) [handle=0, closed=true]
[0x7f42b402fdc0]:0 -> @detach(22) [handle=0, closed=true]
[0x7f42b402fdc0]:0 <- @attach(18) [name="auto-2", handle=0, role=false,
snd-settle-mode=0, rcv-settle-mode=0, source=@source(40) [durable=0,
expiry-policy=:"session-end", timeout=0, dynamic=false,
outcomes=@PN_SYMBOL[:"amqp:accepted:list", :"amqp:rejected:list",
:"amqp:released:list", :"amqp:modified:list"]], target=@target(41)
[address="$mqtt.12345.pubrel"], incomplete-unsettled=false,
initial-delivery-count=0]
[0x7f42b402fdc0]:0 <- @detach(22) [handle=0, closed=true]
[0x7f42bc0247c0]:0 <- @close(24) []
[0x7f42bc0247c0]: <- EOS
[0x7f42bc0247c0]:0 -> @close(24) []
[0x7f42bc0247c0]: -> EOS
Thu Dec 1 14:41:52 2016 ROUTER_CORE (info) Link Route Deactivated
'linkRoute/0' on container will-service
Thu Dec 1 14:41:52 2016 ROUTER_CORE (info) Link Route Deactivated
'linkRoute/1' on container will-service
[0x7f42b402fdc0]:0 -> @attach(18) [name="auto-2", handle=0, role=true,
snd-settle-mode=2, rcv-settle-mode=0, source=@source(40) [durable=0, timeout=0,
dynamic=false, outcomes=@PN_SYMBOL[:"amqp:accepted:list",
:"amqp:rejected:list", :"amqp:released:list", :"amqp:modified:list"]],
target=@target(41) [address="$mqtt.12345.pubrel", durable=0, timeout=0,
dynamic=false], initial-delivery-count=0]
[0x7f42b402fdc0]:0 -> @detach(22) [handle=0, closed=true]
[0x7f42b402fdc0]:0 -> @detach(22) [handle=2, closed=false, error=@error(29)
[condition=:"qd:routed-link-lost", description="Connectivity to the peer
container was lost"]]
[0x7f42b80332a0]:0 -> @detach(22) [handle=2, closed=false, error=@error(29)
[condition=:"qd:routed-link-lost", description="Connectivity to the peer
container was lost"]]
[0x7f42b402fdc0]:0 <- @close(24) []
[0x7f42b402fdc0]: <- EOS
[0x7f42b402fdc0]:0 -> @close(24) []
[0x7f42b402fdc0]: -> EOS
Segmentation fault (core dumped)
{noformat}
This is the backtrace:
{noformat}
#0 0x00007f42bc02fc40 in ?? ()
#1 0x00007f42d0df1e60 in pn_class_free (clazz=0x7f42bc02fc00,
object=0x7f42bc047260) at /qpid-proton-0.15.0/proton-c/src/object/object.c:117
#2 0x00007f42d0df204f in pn_free (object=<optimized out>) at
/qpid-proton-0.15.0/proton-c/src/object/object.c:263
#3 0x00007f42d0dfd255 in pn_link_finalize (object=0x7f42bc053160) at
/qpid-proton-0.15.0/proton-c/src/engine/engine.c:1119
#4 0x00007f42d0df1e73 in pn_class_free (clazz=0x7f42d1028480 <clazz>,
object=0x7f42bc053160) at /qpid-proton-0.15.0/proton-c/src/object/object.c:124
#5 0x00007f42d0df204f in pn_free (object=<optimized out>) at
/qpid-proton-0.15.0/proton-c/src/object/object.c:263
#6 0x00007f42d0dfc902 in pni_free_children (children=0x7f42bc04dd00,
freed=0x7f42bc04db40) at /qpid-proton-0.15.0/proton-c/src/engine/engine.c:456
#7 0x00007f42d0dfd179 in pn_session_finalize (object=0x7f42bc016cd0) at
/qpid-proton-0.15.0/proton-c/src/engine/engine.c:930
#8 0x00007f42d0df1e19 in pn_class_decref (clazz=0x7f42d1028500 <clazz>,
object=0x7f42bc016cd0) at /qpid-proton-0.15.0/proton-c/src/object/object.c:95
#9 0x00007f42d0dfc8d2 in pni_free_children (children=0x7f42b4025700,
freed=0x7f42b4025640) at /qpid-proton-0.15.0/proton-c/src/engine/engine.c:450
#10 0x00007f42d0dfd0bb in pn_connection_finalize (object=0x7f42b40430c0) at
/qpid-proton-0.15.0/proton-c/src/engine/engine.c:478
#11 0x00007f42d0df1e19 in pn_class_decref (clazz=0x7f42d1028580 <clazz>,
object=0x7f42b40430c0) at /qpid-proton-0.15.0/proton-c/src/object/object.c:95
#12 0x00007f42d0dffdb0 in pn_event_finalize (event=0x7f42bc054030) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:212
#13 pn_event_finalize_cast (object=0x7f42bc054030) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:257
#14 0x00007f42d0df1e19 in pn_class_decref (clazz=0x7f42d1028600 <clazz>,
clazz@entry=0x7f42d1027f00 <PN_OBJECT>, object=0x7f42bc054030) at
/qpid-proton-0.15.0/proton-c/src/object/object.c:95
#15 0x00007f42d0df202f in pn_decref (object=<optimized out>) at
/qpid-proton-0.15.0/proton-c/src/object/object.c:253
#16 0x00007f42d0e000a2 in pn_collector_pop
(collector=collector@entry=0x7f42b4026240) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:189
#17 0x00007f42d0e000f8 in pn_collector_drain (collector=0x7f42b4026240) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:56
#18 pn_collector_release (collector=0x7f42b4026240) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:118
#19 0x00007f42d0e00119 in pn_collector_free (collector=0x7f42b4026240) at
/qpid-proton-0.15.0/proton-c/src/events/event.c:109
#20 0x00007f42d106937c in free_qd_connection (ctx=ctx@entry=0x7f42b403d050) at
/qpid-dispatch/src/server.c:100
#21 0x00007f42d106b86e in thread_run (arg=<optimized out>) at
/qpid-dispatch/src/server.c:1102
#22 0x00007f42d0bc95ba in start_thread () from /lib64/libpthread.so.0
#23 0x00007f42d01187cd in clone () from /lib64/libc.so.6
{noformat}
> Random segmentation fault
> -------------------------
>
> Key: DISPATCH-583
> URL: https://issues.apache.org/jira/browse/DISPATCH-583
> Project: Qpid Dispatch
> Issue Type: Bug
> Affects Versions: 0.8.0
> Reporter: Paolo Patierno
> Assignee: Ganesh Murthy
> Fix For: 0.8.0
>
> Attachments: qdrouterd.conf
>
>
> Hello,
> while working with the following project:
> https://github.com/EnMasseProject/mqtt-frontend
> using the router, I get random "segmentation fault" errors when the
> SubscribeTest is executed. It happens after running the test 3-4 times, or
> 20 times ... it is totally random.
> You can reproduce it by cloning the above repo and running:
> mvn -Dtest=SubscribeTest test
> I used the router built from the repo source code (about 2 weeks ago), but
> with the patch for message annotations applied on top.
> I got the backtrace of one of the crashes:
> #0 0x0000000f00747369 in ?? ()
> #1 0x00007f0eceddcf69 in pn_class_equals (clazz=<optimized out>,
> a=<optimized out>, b=b@entry=0x7f0eb800bd90) at
> /qpid-proton-0.15.0/proton-c/src/object/object.c:169
> #2 0x00007f0eceddd466 in pn_list_index (list=list@entry=0x7f0eb802c0c0,
> value=value@entry=0x7f0eb800bd90) at
> /qpid-proton-0.15.0/proton-c/src/object/list.c:88
> #3 0x00007f0eceddd559 in pn_list_remove (list=0x7f0eb802c0c0,
> value=value@entry=0x7f0eb800bd90) at
> /qpid-proton-0.15.0/proton-c/src/object/list.c:99
> #4 0x00007f0ecede82ea in pn_link_finalize (object=0x7f0eb800bd90) at
> /qpid-proton-0.15.0/proton-c/src/engine/engine.c:1129
> #5 0x00007f0eceddce19 in pn_class_decref (clazz=0x7f0ecf013480 <clazz>,
> object=0x7f0eb800bd90) at /qpid-proton-0.15.0/proton-c/src/object/object.c:95
> #6 0x00007f0ecedeadb0 in pn_event_finalize (event=0x7f0eb802a580) at
> /qpid-proton-0.15.0/proton-c/src/events/event.c:212
> #7 pn_event_finalize_cast (object=0x7f0eb802a580) at
> /qpid-proton-0.15.0/proton-c/src/events/event.c:257
> #8 0x00007f0eceddce19 in pn_class_decref (clazz=0x7f0ecf013600 <clazz>,
> clazz@entry=0x7f0ecf012f00 <PN_OBJECT>, object=0x7f0eb802a580) at
> /qpid-proton-0.15.0/proton-c/src/object/object.c:95
> #9 0x00007f0eceddd02f in pn_decref (object=<optimized out>) at
> /qpid-proton-0.15.0/proton-c/src/object/object.c:253
> #10 0x00007f0ecedeb0a2 in pn_collector_pop
> (collector=collector@entry=0x7f0eb8036450) at
> /qpid-proton-0.15.0/proton-c/src/events/event.c:189
> #11 0x00007f0ecf055d43 in process_connector (cxtr=0x7f0eb8002f90,
> qd_server=0x2517e70) at /qpid-dispatch/src/server.c:851
> #12 thread_run (arg=<optimized out>) at /qpid-dispatch/src/server.c:1075
> #13 0x00007f0ecebb45ba in start_thread () from /lib64/libpthread.so.0
> #14 0x00007f0ece1037cd in clone () from /lib64/libc.so.6
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)