The RHS 3.1 gluster volume info:

[root@rhgluster1 ~]# gluster volume info
Volume Name: rhs_gluster
Type: Distributed-Replicate
Volume ID: ce5ebd81-35bf-40f6-bc53-04c494a8836f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: rhgluster1:/rhs/brick1
Brick2: rhgluster2:/rhs/brick1
Brick3: rhgluster3:/rhs/brick1
Brick4: rhgluster4:/rhs/brick1
Options Reconfigured:
server.allow-insecure: off
performance.readdir-ahead: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.server-quorum-ratio: 51%

This is what I get on a CentOS client with the 3.7.4 GlusterFS FUSE client installed, trying to mount a RHS Gluster 3.1 volume:

[2015-09-24 01:57:31.742054] I [MSGID: 100030] [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs --volfile-server=rhgluster1 --volfile-id=rhs_gluster /TEMP)
[2015-09-24 01:57:31.755132] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2015-09-24 01:57:31.756694] W [socket.c:588:__socket_rwv] 0-glusterfs: readv on 172.18.60.100:24007 failed (No data available)
[2015-09-24 01:57:31.757143] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f87cb0de63b] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f87caeaa1d7] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f87caeaa2ee] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f87caeaa3bb] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f87caeaa9f2] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2015-09-24 01:57:31.755686 (xid=0x1)
[2015-09-24 01:57:31.758030] E [glusterfsd-mgmt.c:1604:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:rhs_gluster)
[2015-09-24 01:57:31.758095] W [glusterfsd.c:1219:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x20e) [0x7f87caeaa1fe] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3f2) [0x40d5d2] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received signum (0), shutting down
[2015-09-24 01:57:31.758138] I [fuse-bridge.c:5595:fini] 0-fuse: Unmounting '/TEMP'.
[2015-09-24 01:57:31.764511] W [glusterfsd.c:1219:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f87ca1c69d1] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x65) [0x4059b5] ) 0-: received signum (15), shutting down

(A sketch of the insecure-ports workaround suggested in the thread linked below is included after the quoted messages.)

On Wed, Sep 23, 2015 at 3:44 PM, Prasun Gera <[email protected]> wrote:

> There were some posts regarding incompatibility between versions due to
> the insecure ports option:
> http://www.gluster.org/pipermail/gluster-users/2015-August/023207.html
> Perhaps worth a shot?
>
> On Wed, Sep 23, 2015 at 3:20 PM, Gluster Admin <[email protected]> wrote:
>
>> I would love to know if I am doing something wrong, but as it stands I
>> have a Red Hat Gluster 3.1 setup and the only native FUSE clients I can
>> get to work are Red Hat ones. I have had no success with CentOS 6/7 and
>> various Gluster clients, or with OEL 6 in the same situation.
>>
>> I am hoping there is some option or something I am not setting and this
>> will actually work.
>>
>> On Wed, Sep 23, 2015 at 3:11 PM, Prasun Gera <[email protected]> wrote:
>>
>>> Is this confirmed? I have held off upgrading from 3.0 to 3.1. I have a
>>> lot of Ubuntu clients who are using the PPA for the FUSE clients. I don't
>>> want to upgrade if it's known to break things.
>>>
>>> On Wed, Sep 23, 2015 at 10:40 AM, Gluster Admin <[email protected]> wrote:
>>>
>>>> Just curious here whether RH is purposely trying to prevent anyone but
>>>> RHEL clients from using their storage natively via the FUSE client?
>>>>
>>>> With RHGS 3.1 I can mount via FUSE with no issues on RHEL 6/7 clients,
>>>> but no other variant of the Gluster 3.6 or 3.7 client can connect to it.
>>>> I even tried installing the RHGS FUSE and client libs with no success.
>>>> The same happens if I try to mount Gluster 3.7 from RHEL with FUSE, so
>>>> it is broken both ways.
>>>>
>>>> Just a bummer, as we have tons of non-RHEL clients and would prefer not
>>>> to set up NFS HA when the native client performs pretty well and covers
>>>> all the HA without the hassle.
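For anyone hitting the same wall, the insecure-ports workaround from the thread linked above boils down to roughly the following. This is only a sketch, assuming the rhs_gluster volume and hostnames shown earlier; it loosens port-based access control, so check the RHGS 3.1 documentation before applying it on the servers.

# On each RHGS server: let the brick processes accept connections from
# unprivileged (>1024) source ports.
gluster volume set rhs_gluster server.allow-insecure on

# Also allow insecure ports on glusterd's management connection (24007),
# which handles the volfile fetch (GETSPEC) that fails in the client log
# above. Add the following line inside the "volume management" block of
# /etc/glusterfs/glusterd.vol on every server, then restart glusterd:
#
#     option rpc-auth-allow-insecure on
#
service glusterd restart    # RHEL/CentOS 6; on 7: systemctl restart glusterd

# Back on the CentOS client, retry the mount; log-level=DEBUG is optional
# but gives more detail in /var/log/glusterfs if it still fails.
mount -t glusterfs -o log-level=DEBUG rhgluster1:/rhs_gluster /TEMP

As I understand it, the glusterd.vol change is the part that matters for the volfile fetch itself, since server.allow-insecure on the volume only covers the subsequent brick connections, but I can't say for certain that this is the root cause here.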
_______________________________________________
Gluster-users mailing list
[email protected]
http://www.gluster.org/mailman/listinfo/gluster-users
