Can you provide the core dump, as well as details of the system where it crashed (distribution, version, list of packages, ...), so that I can analyze it?
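For anyone gathering this kind of data: on a systemd-based distribution, the client core dump and package list can usually be collected along these lines. This is a sketch, not a definitive procedure; the availability of `coredumpctl` and the exact package query depend on the distribution:

```shell
# Ensure core dumps are not truncated (on systemd systems the dump is
# normally captured by systemd-coredump regardless of this setting).
ulimit -c unlimited

# List recent glusterfs crashes and export the core dump to a file:
coredumpctl list glusterfs
coredumpctl dump glusterfs -o /tmp/glusterfs.core

# Record the exact package versions (rpm-based example; use "dpkg -l" on
# Debian/Ubuntu):
rpm -qa | grep -i gluster > /tmp/gluster-packages.txt
glusterfs --version >> /tmp/gluster-packages.txt
```

The core file together with the package list lets whoever analyzes the crash load matching debuginfo symbols.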
Thanks,
Xavi

On Tue, Mar 19, 2019 at 3:56 PM Artem Russakovskii <[email protected]> wrote:
> I upgraded the node that was crashing to 5.5 yesterday. Today, it got
> another crash. This is a 1x4 replicate cluster; you can find the config
> mentioned in my previous reports, and Amar should have it as well.
> Here's the log:
>
> ==> mnt-<SNIP>_data1.log <==
> The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk]
> 0-<SNIP>_data1-replicate-0: selecting local read_child
> <SNIP>_data1-client-3" repeated 4 times between [2019-03-19
> 14:40:50.741147] and [2019-03-19 14:40:56.874832]
> pending frames:
> frame : type(1) op(LOOKUP)
> frame : type(1) op(LOOKUP)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 6
> time of crash:
> 2019-03-19 14:40:57
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 5.5
> /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7ff841f8364c]
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7ff841f8dd26]
> /lib64/libc.so.6(+0x36160)[0x7ff84114a160]
> /lib64/libc.so.6(gsignal+0x110)[0x7ff84114a0e0]
> /lib64/libc.so.6(abort+0x151)[0x7ff84114b6c1]
> /lib64/libc.so.6(+0x2e6fa)[0x7ff8411426fa]
> /lib64/libc.so.6(+0x2e772)[0x7ff841142772]
> /lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7ff8414d80b8]
> /usr/lib64/glusterfs/5.5/xlator/cluster/replicate.so(+0x5de3d)[0x7ff839fbae3d]
> /usr/lib64/glusterfs/5.5/xlator/cluster/replicate.so(+0x70d51)[0x7ff839fcdd51]
> /usr/lib64/glusterfs/5.5/xlator/protocol/client.so(+0x58e1f)[0x7ff83a252e1f]
> /usr/lib64/libgfrpc.so.0(+0xe820)[0x7ff841d4e820]
> /usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7ff841d4eb6f]
> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7ff841d4b063]
> /usr/lib64/glusterfs/5.5/rpc-transport/socket.so(+0xa0ce)[0x7ff83b9690ce]
> /usr/lib64/libglusterfs.so.0(+0x85519)[0x7ff841fe1519]
> /lib64/libpthread.so.0(+0x7559)[0x7ff8414d5559]
> /lib64/libc.so.6(clone+0x3f)[0x7ff84120c81f]
> ---------
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii <https://plus.google.com/+ArtemRussakovskii>
> | @ArtemR <http://twitter.com/ArtemR>
>
> On Mon, Mar 18, 2019 at 9:46 PM Amar Tumballi Suryanarayan
> <[email protected]> wrote:
>
>> Due to this issue, along with a few other logging issues, we made a
>> glusterfs-5.5 release, which has the fix for this particular crash.
>>
>> Regards,
>> Amar
>>
>> On Tue, 19 Mar, 2019, 1:04 AM, <[email protected]> wrote:
>>
>>> Hello Ville-Pekka and list,
>>>
>>> I believe we are experiencing similar gluster fuse client crashes on 5.3
>>> to those mentioned here. This morning I made a post about it:
>>>
>>> https://lists.gluster.org/pipermail/gluster-users/2019-March/036036.html
>>>
>>> Has the "performance.write-behind: off" setting continued to be all you
>>> needed to work around the issue?
>>>
>>> Thanks,
>>>
>>> Brandon
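For reference, the "performance.write-behind: off" workaround discussed in the quoted messages is applied per volume with the gluster CLI. A minimal sketch, where "myvol" is a placeholder for the affected volume name:

```shell
# Disable the write-behind performance translator on the affected volume
# ("myvol" is a placeholder; substitute your volume name):
gluster volume set myvol performance.write-behind off

# Confirm the option is now set:
gluster volume get myvol performance.write-behind
```

Note that FUSE clients pick up the changed volume graph automatically, but any in-flight operations continue on the old graph until they complete.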
_______________________________________________
Gluster-users mailing list
[email protected]
https://lists.gluster.org/mailman/listinfo/gluster-users
