On 5/7/19 8:44 PM, Sahina Bose wrote:
Rafi, can you take a look?

On Mon, May 6, 2019 at 10:29 PM <[email protected]> wrote:
This is what I see in the logs when I try to add RDMA:


Are you trying to change the transport type of the volume? If so, have you followed the document [1]?


[1] : https://docs.gluster.org/en/latest/Administrator%20Guide/RDMA%20Transport/
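
For reference, the documented procedure boils down to something like the following (a rough sketch based on [1], using the volume name from your log; double-check against the doc before running it, since the volume has to be unmounted on clients and stopped first):

    # stop the volume before changing its transport
    gluster volume stop storage_ssd
    # config.transport is the option described in [1]
    gluster volume set storage_ssd config.transport tcp,rdma
    gluster volume start storage_ssd
    # clients then mount with the rdma transport (<server> is a placeholder)
    mount -t glusterfs -o transport=rdma <server>:/storage_ssd /mnt/storage_ssd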



[2019-05-06 16:54:50.305297] I [MSGID: 106521] [glusterd-op-sm.c:2953:glusterd_op_set_volume] 0-management: changing transport-type for volume storage_ssd to tcp,rdma
[2019-05-06 16:54:50.309122] W [MSGID: 101095] [xlator.c:180:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/5.3/xlator/nfs/server.so: cannot open shared object file: No such file or directory
[2019-05-06 16:54:50.321422] E [MSGID: 106068] [glusterd-volgen.c:1025:volgen_write_volfile] 0-management: failed to create volfile
[2019-05-06 16:54:50.321463] E [glusterd-volgen.c:6556:glusterd_create_volfiles] 0-management: Could not generate gfproxy client volfiles
[2019-05-06 16:54:50.321476] E [MSGID: 106068] [glusterd-op-sm.c:3062:glusterd_op_set_volume] 0-management: Unable to create volfile for 'volume set'
[2019-05-06 16:54:50.321488] E [MSGID: 106122] [glusterd-syncop.c:1434:gd_commit_op_phase] 0-management: Commit of operation 'Volume Set' failed on localhost
[2019-05-06 16:54:50.323610] E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler
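
The volfile failure comes right after glusterd fails to load /usr/lib64/glusterfs/5.3/xlator/nfs/server.so, so one thing worth ruling out (this is a guess on my part, not something the log proves) is that the node is simply missing the packages that ship the gnfs xlator and the rdma rpc-transport module, e.g.:

    # the rdma transport module normally comes from the glusterfs-rdma package
    rpm -q glusterfs-rdma
    ls /usr/lib64/glusterfs/5.3/rpc-transport/
    # nfs/server.so is shipped separately (glusterfs-gnfs) on recent builds
    rpm -q glusterfs-gnfs

If rdma.so is missing from the rpc-transport directory, that would be my first suspect for the volfile generation failure.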

rdma status:
[root@Prometheus glusterfs]# systemctl status rdma
● rdma.service - Initialize the iWARP/InfiniBand/RDMA stack in the kernel
   Loaded: loaded (/usr/lib/systemd/system/rdma.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sun 2019-04-21 23:53:42 EDT; 2 weeks 0 days ago
     Docs: file:/etc/rdma/rdma.conf
 Main PID: 4968 (code=exited, status=0/SUCCESS)
    Tasks: 0
   CGroup: /system.slice/rdma.service
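
rdma.service being active only confirms the kernel stack was initialized; it might also be worth confirming that an RDMA-capable device is actually visible and its port is up (assuming the usual verbs utilities from libibverbs-utils are installed):

    # list HCAs and their port state
    ibv_devices
    ibv_devinfo | grep -E 'hca_id|state'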

volume status:

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.100.50.12:/gluster_bricks/storage_
nvme2/storage_nvme2                         49153     0          Y       26888
Brick 10.100.50.14:/gluster_bricks/storage_
nvme2/storage_nvme2                         49153     0          Y       3827
Brick 10.100.50.16:/gluster_bricks/storage_
nvme2/storage_nvme2                         49153     0          Y       24238
Self-heal Daemon on localhost               N/A       N/A        Y       29452
Self-heal Daemon on 10.100.50.16            N/A       N/A        Y       11058
Self-heal Daemon on 10.100.50.14            N/A       N/A        Y       8573
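
Also note that the RDMA Port column is 0 for every brick, which suggests the bricks are currently listening on TCP only. The transport the volume is actually configured with can be double-checked with something like:

    # volume name taken from the brick paths above; adjust as needed
    gluster volume info storage_nvme2 | grep -i transport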
