Hi,
Ok, I see your point: two rounds of pollin mean two different iobrefs, so the
first pollin will not affect the second. But from the stack, the second pollin
is stuck in the LOCK operation inside iobref_unref.
Do you think it is possible that iobref_destroy does not clean up this
iobref->lock?
On Mon, Apr 15, 2019 at 12:52 PM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi,
>
> The reason why I move event_handled to the end of socket_event_poll_in is
> because if event_handled is called before rpc_transport_pollin_destroy, it
> allowed another round of
Ok, I got your point, thanks for responding!
cynthia
From: Raghavendra Gowdappa
Sent: Monday, April 15, 2019 4:36 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou)
Cc: gluster-devel@gluster.org
Subject: Re: glusterd stuck for glusterfs with version 3.12.15
Hi,
The reason why I moved event_handled to the end of socket_event_poll_in is
that if event_handled is called before rpc_transport_pollin_destroy, another
round of event_dispatch_epoll_handler can happen before
rpc_transport_pollin_destroy; that way, the later pollin can start while the
first one is still being torn down.
Ok, thanks for your comment!
cynthia
From: Raghavendra Gowdappa
Sent: Monday, April 15, 2019 11:52 AM
To: Zhou, Cynthia (NSB - CN/Hangzhou)
Cc: gluster-devel@gluster.org
Subject: Re: glusterd stuck for glusterfs with version 3.12.15
Cynthia,
On Mon, Apr 15, 2019 at 8:10 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
Hi,
I made a patch, and according to my test the glusterd stuck issue disappears
with it. The only change needed is to move event_handled to the end of the
socket_event_poll_in function.
--- a/rpc/rpc-transport/socket/src/socket.c
+++ b/rpc/rpc-transport/socket/src/socket.c
@@ -2305,9 +2305,9 @@
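The hunk body did not make it into the mail. For reference, here is a sketch of what the described change would look like; this is hand-written from the description above, not the actual patch, and the surrounding code (socket_proto_state_machine, the exact guard conditions, this->ctx) is paraphrased from memory of the 3.12 socket.c and may differ in detail:

```c
/* Sketch only -- not the actual patch. */
static int
socket_event_poll_in(rpc_transport_t *this, gf_boolean_t notify_handled)
{
        int                      ret    = -1;
        rpc_transport_pollin_t  *pollin = NULL;
        socket_private_t        *priv   = this->private;

        ret = socket_proto_state_machine(this, &pollin);

        /* Before the patch, the fd was re-armed here, i.e. before the
         * pollin below was destroyed:
         *
         *     if (notify_handled && (ret != -1))
         *             event_handled(this->ctx->event_pool, priv->sock,
         *                           priv->idx, priv->gen);
         */

        if (pollin != NULL) {
                ret = rpc_transport_notify(this, RPC_TRANSPORT_MSG_RECEIVED,
                                           pollin);
                rpc_transport_pollin_destroy(pollin);
        }

        /* After the patch: re-arm only once the pollin is gone. */
        if (notify_handled && (ret != -1))
                event_handled(this->ctx->event_pool, priv->sock, priv->idx,
                              priv->gen);

        return ret;
}
```

The trade-off is latency: with this ordering the socket stays un-armed for the whole notify/destroy path, so no new events are picked up for that fd until the current one is fully processed.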
Can you figure out a possible reason why the iobref is corrupted? Is it
possible that thread 8 has called pollin and the iobref has been released, but
the lock within it was never properly destroyed (I cannot find any
lock-destroy operation in iobref_destroy), and then thread 9 called
iobref_unref and got stuck on that stale lock?
On Mon, Apr 8, 2019 at 7:42 AM Zhou, Cynthia (NSB - CN/Hangzhou) <
cynthia.z...@nokia-sbell.com> wrote:
> Hi glusterfs experts,
>
> Good day!
>
> In my test env, the glusterd stuck issue sometimes happens: glusterd stops
> responding to any gluster commands. When I checked the issue, I found the
> main thread stuck joining the event threads (gdb backtrace below):
>
>
> #0  0x00007f9ee9fcfa3d in __pthread_timedjoin_ex () from
> /lib64/libpthread.so.0
>
> #1  0x00007f9eeb282b09 in event_dispatch_epoll (event_pool=0x17feb00) at
> event-epoll.c:746
>
> #2  0x00007f9eeb246786 in event_dispatch (event_pool=0x17feb00) at
> event.c:124
>
> #3  0x000000000040ab95 in main ()
>
> (gdb) quit
From: Sanju Rakonde
Sent: Monday, April 08, 2019 4:58 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou)
Cc: Raghavendra Gowdappa; gluster-devel@gluster.org
Subject: Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15
Can you please capture the output of "pstack $(pidof glusterd)" and send it to
us? We need to capture this information while glusterd is stuck.
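For completeness, one way to script that capture on the affected node (a sketch: pstack availability varies by distro, and attaching gdb needs ptrace permission, so treat the tool names as assumptions about the environment):

```shell
#!/bin/sh
# dump_stacks PID: print userspace stacks of every thread in PID.
# Prefers pstack; falls back to a batch-mode gdb backtrace.
dump_stacks() {
    pid="$1"
    if command -v pstack >/dev/null 2>&1; then
        pstack "$pid"
    elif command -v gdb >/dev/null 2>&1; then
        gdb -batch -p "$pid" -ex 'thread apply all bt' 2>/dev/null
    else
        echo "dump_stacks: need pstack or gdb" >&2
        return 1
    fi
}

# While glusterd is stuck:
#   dump_stacks "$(pidof glusterd)" > /tmp/glusterd-stacks.txt
```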