Dear All,


I have a GlusterFS pool of 3 servers running CentOS 7.3 and GlusterFS 3.8.5; the 
network is InfiniBand.

Pacemaker/Corosync and Ganesha-NFS are installed and everything seems to be OK; 
no errors are logged.

I created a replica 3 volume with transport rdma (without tcp!).
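For reference, the volume was created and mounted roughly like this (a sketch: the volume name "vmstor1" is taken from the log lines below, while the host names, brick paths, and mount point are placeholders):

```shell
# Create a replica 3 volume using only the RDMA transport (no TCP);
# brick paths and host names are illustrative placeholders.
gluster volume create vmstor1 replica 3 transport rdma \
    server1:/bricks/vmstor1 server2:/bricks/vmstor1 server3:/bricks/vmstor1
gluster volume start vmstor1

# Native FUSE mount over RDMA -- this is the path that works fine:
mount -t glusterfs -o transport=rdma server1:/vmstor1 /mnt/vmstor1
```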

When I mount this volume via glusterfs and do some I/O, no errors are logged and 
everything seems to go pretty well.



When I mount the volume via NFS and do some I/O, NFS freezes immediately and the 
following logs are written to

ganesha-gfapi.log:

[2017-01-05 23:23:53.536526] W [MSGID: 103004] 
[rdma.c:452:gf_rdma_register_arena] 0-rdma: allocation of mr failed

[2017-01-05 23:23:53.541519] W [MSGID: 103004] 
[rdma.c:1463:__gf_rdma_create_read_chunks_from_vector] 0-rpc-transport/rdma: 
memory registration failed (peer:10.40.1.1:49152) [Permission denied]

[2017-01-05 23:23:53.541547] W [MSGID: 103029] 
[rdma.c:1558:__gf_rdma_create_read_chunks] 0-rpc-transport/rdma: cannot create 
read chunks from vector entry->prog_payload

[2017-01-05 23:23:53.541553] W [MSGID: 103033] 
[rdma.c:2063:__gf_rdma_ioq_churn_request] 0-rpc-transport/rdma: creation of 
read chunks failed

[2017-01-05 23:23:53.541557] W [MSGID: 103040] 
[rdma.c:2775:__gf_rdma_ioq_churn_entry] 0-rpc-transport/rdma: failed to process 
request ioq entry to peer(10.40.1.1:49152)

[2017-01-05 23:23:53.541562] W [MSGID: 103040] [rdma.c:2859:gf_rdma_writev] 
0-vmstor1-client-0: processing ioq entry destined to (10.40.1.1:49152) failed

[2017-01-05 23:23:53.541569] W [MSGID: 103037] 
[rdma.c:3016:gf_rdma_submit_request] 0-rpc-transport/rdma: sending request to 
peer (10.40.1.1:49152) failed

[…]



Some additional info:

Firewall is disabled, SELinux is disabled.

Different hardware with CentOS 7.1 and the Mellanox OFED 3.4 packages instead 
of the CentOS InfiniBand packages leads to the same results.

Just to mention: I am not trying to do NFS over RDMA; the Ganesha FSAL is simply 
configured as "glusterfs".
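The relevant export block in ganesha.conf looks roughly like the sketch below; only the FSAL name is certain here, while the export id, paths, hostname, and volume name (taken from the log lines above) are assumptions:

```
EXPORT {
    Export_Id = 1;                 # assumed export id
    Path = "/vmstor1";             # assumed export path
    Pseudo = "/vmstor1";
    Access_Type = RW;

    FSAL {
        Name = GLUSTER;            # the Gluster FSAL, i.e. gfapi -- not NFS over RDMA
        Hostname = "localhost";    # assumed; a node of the trusted pool
        Volume = "vmstor1";        # volume name as seen in the logs
    }
}
```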



I hope someone can help me, I am running out of ideas…



Kind regards,

Andreas

_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
