On 02/27/2017 07:19 PM, Kevin Lemonnier wrote:
Hi,

I have a simple glusterFS configured on a VM, with a single volume on a single
brick. It's set up that way to replicate the production conditions as closely as
possible, but with no replica since it's just for dev.

Every few hours, the NFS server in glusterfs crashes. Here are the logs from
nfs.log:

... works fine ...
[2017-02-27 12:00:23.029163] W [socket.c:596:__socket_rwv] 0-NLM-client: readv on 172.16.0.13:54367 failed (No data available)
pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2017-02-27 12:00:33
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.15
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x7e)[0x7f7d1a768fbe]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x31d)[0x7f7d1a78b70d]
/lib/x86_64-linux-gnu/libc.so.6(+0x350e0)[0x7f7d192720e0]
/lib/x86_64-linux-gnu/libc.so.6(+0x80216)[0x7f7d192bd216]
/usr/lib/x86_64-linux-gnu/glusterfs/3.7.15/xlator/nfs/server.so(nlm_set_rpc_clnt+0x62)[0x7f7d1431ac32]
/usr/lib/x86_64-linux-gnu/glusterfs/3.7.15/xlator/nfs/server.so(nlm_rpcclnt_notify+0x35)[0x7f7d1431d395]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x2aa)[0x7f7d1a53647a]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f7d1a532733]
/usr/lib/x86_64-linux-gnu/glusterfs/3.7.15/rpc-transport/socket.so(+0x4a73)[0x7f7d15ac8a73]
/usr/lib/x86_64-linux-gnu/glusterfs/3.7.15/rpc-transport/socket.so(+0x8e1f)[0x7f7d15acce1f]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8e722)[0x7f7d1a7d1722]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064)[0x7f7d199ec064]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7d1932562d]


There are some issues with the Gluster-NFS/NLM implementation. Similar concerns were raised earlier [1], and Niels has been working on a fix. Meanwhile, as a workaround, if your application does not need locking you could try mounting with the 'nolock' option (see 'man nfs'), or consider switching to NFS-Ganesha to export gluster volumes over NFS.
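
For example, an NFSv3 mount with locking disabled would look something like this (just a sketch; the host name, volume name and mount point below are placeholders, adjust them to your setup):

  # hypothetical server/volume/mount point - replace with your own
  mount -t nfs -o vers=3,nolock gluster-dev:/dev-volume /mnt/dev-volume

With 'nolock' the client handles lock requests locally instead of talking NLM to the server, which avoids the code path that is crashing here.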


Thanks,
Soumya

[1] http://lists.gluster.org/pipermail/gluster-users/2017-January/029632.html

--------------------
... I reboot the server here ...

As you can see, it's glusterfs 3.7.15 on Debian.
Any idea of what is happening and what I can do to fix it? The fuse client works
fine, but performance is terrible (it's a web application, and since it's for dev
they keep clearing caches and stuff like that all the time, so it's a whole lot of
small-file load).

Thanks



_______________________________________________
Gluster-users mailing list
[email protected]
http://lists.gluster.org/mailman/listinfo/gluster-users
