[Gluster-users] Split brain

2018-02-19 Thread rwecker
Hi, 

I am having a problem with a split-brain issue that does not seem to match up 
with documentation on how to solve it. 

gluster volume heal VMData2 info gives: 


Brick found2.ssd.org:/data/brick6/data 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc.meta
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/ce2831a4-3e82-4bf8-bb68-82afa0c401fe
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images - Is in split-brain 

/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/ee8e0fd3-cf68-4f2a-9d0d-f7ee665cc8c3
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/a4bea667-c65d-4085-9b90-896bf7fc55ff
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/4981580d-628f-4266-8e7e-f7a0bcae2dbb
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/2ee209b4-3e21-47fc-8342-033a37605d65
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/dom_md/xleases 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/dom_md 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/28d32853-5c89-4c5f-8fb7-76fe0f88ff1f
 
Status: Connected 
Number of entries: 12 

Brick found3.ssd.org:/data/brick6/data 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45/82a7027b-321c-4bd9-8afc-2a12cfa23bfc.meta
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/d1164a9b-ba63-46c4-a9ec-76ea4a7a2c45
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/ce2831a4-3e82-4bf8-bb68-82afa0c401fe
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images - Is in split-brain 

/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/ee8e0fd3-cf68-4f2a-9d0d-f7ee665cc8c3
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/a4bea667-c65d-4085-9b90-896bf7fc55ff
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/4981580d-628f-4266-8e7e-f7a0bcae2dbb
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/2ee209b4-3e21-47fc-8342-033a37605d65
 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/dom_md/xleases 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/dom_md 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images/28d32853-5c89-4c5f-8fb7-76fe0f88ff1f
 
Status: Connected 
Number of entries: 12 

gluster volume heal VMData2 info split-brain gives: 

Brick found2.ssd.org:/data/brick6/data 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images 
Status: Connected 
Number of entries in split-brain: 1 

Brick found3.ssd.org:/data/brick6/data 
/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images 
Status: Connected 
Number of entries in split-brain: 1 

on found3 running getfattr -d -m . hex 
/data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images gives: 

getfattr: hex: No such file or directory 
getfattr: data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images: No such 
file or directory 
[root@found3 ~]# getfattr -d -m . hex 
/data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images 
getfattr: hex: No such file or directory 
getfattr: Removing leading '/' from absolute path names 
# file: data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images 
security.selinux="system_u:object_r:default_t:s0" 
trusted.afr.VMData2-client-6=0sAQAG 
trusted.gfid=0sK8ZFxmThRxeq7pYw7QTOCw== 
trusted.glusterfs.dht=0sAQBVqQ== 

on Found2 running getfattr -d -m . hex 
/data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images gives: 

getfattr: hex: No such file or directory 
getfattr: Removing leading '/' from absolute path names 
# file: data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images 
security.selinux="system_u:object_r:default_t:s0" 
trusted.afr.VMData2-client-6=0sAQAG 
trusted.afr.dirty=0s 
trusted.gfid=0sK8ZFxmThRxeq7pYw7QTOCw== 
trusted.glusterfs.dht=0sAQBVqQ== 

The only difference is the trusted.afr.dirty attribute, which is present on 
found2 but not on found3. 
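
Note: the getfattr command as typed is missing the -e option, which is why 
getfattr complains "hex: No such file or directory" and prints the values 
base64-encoded (the 0s prefix) rather than in hex. The corrected invocation, 
run on each brick, would be: 

# dump all extended attributes of the directory, hex-encoded
getfattr -d -m . -e hex \
  /data/brick6/data/08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images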

Any help would be appreciated. 
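
For reference, these are the split-brain resolution commands the Gluster CLI 
provides once a good copy has been chosen; whether they apply directly to a 
directory in entry split-brain depends on the kind of split-brain, so treat 
this as a sketch rather than a verified fix for this volume. The brick and 
path below are simply taken from the heal output above: 

# list what gluster itself flags as split-brain
gluster volume heal VMData2 info split-brain

# resolve an entry by naming one brick as the source of truth
# (found3 is chosen here only as an example)
gluster volume heal VMData2 split-brain source-brick \
  found3.ssd.org:/data/brick6/data \
  /08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images

# or use a policy, e.g. keep whichever copy has the newest mtime
gluster volume heal VMData2 split-brain latest-mtime \
  /08aa5fc4-c9ba-4fcf-af57-72450b875d1a/images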




Russell Wecker 
IT Director 
Southern Asia-Pacific Division 
San Miguel II 
Bypass Silang, Silang, Cavite 
4118 Philippines 
Phone: +63 46 414 4000 x 5210 
Cell: +63 917 595 6395 
URL: http://ssd.adventist.asia/ 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan
The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes:


1.) - Expanding a gluster volume that is sharded may cause file corruption

    Sharded volumes are typically used for VM images. If such volumes 
are expanded or possibly contracted (i.e., add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.
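
    For context, the expansion referred to here is the usual add-brick plus 
rebalance sequence; a hypothetical sketch (volume and brick names are 
placeholders, not taken from this announcement):

# expanding a (sharded) volume: add bricks, then rebalance
gluster volume add-brick myvol replica 3 \
  node4:/bricks/b1 node5:/bricks/b1 node6:/bricks/b1
gluster volume rebalance myvol start
gluster volume rebalance myvol status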


    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.


    Status of this bug can be tracked here, #1465123

Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Upgrade from 3.8.15 to 3.12.5

2018-02-19 Thread rwecker
Thanks. That fixed both issues. 

Russell Wecker 
IT Director 
Southern Asia Pacific Division 


From: "Atin Mukherjee"  
To: "rwecker"  
Cc: "gluster-users"  
Sent: Monday, February 19, 2018 4:51:56 PM 
Subject: Re: [Gluster-users] Upgrade from 3.8.15 to 3.12.5 

I believe the peer rejected issue is something we recently identified and has 
been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637 and is 
available in 3.12.6. I'd request you to upgrade to the latest version in 3.12 
series. 

On Mon, Feb 19, 2018 at 12:27 PM, <rwec...@ssd.org> wrote: 



Hi, 

I have a 3 node cluster (Found1, Found2, Found3) which I wanted to upgrade. I 
upgraded one node from 3.8.15 to 3.12.5 and now I am having multiple problems 
with the install. The 2 nodes not upgraded (Found1, Found2) are still working 
fine, but the one upgraded has Peer Rejected (Connected) when peer status is 
run, and it also has multiple bricks that show "Transport endpoint is not 
connected"; some bricks seem to work, some do not. 

any help would be appreciated. 

Thanks 


here are the log files 


glusterd.log 
[2018-02-19 05:32:38.589150] I [MSGID: 106478] [glusterd.c:1423:init] 
0-management: Maximum allowed open file descriptors set to 65536 
[2018-02-19 05:32:38.589237] I [MSGID: 106479] [glusterd.c:1481:init] 
0-management: Using /var/lib/glusterd as working directory 
[2018-02-19 05:32:38.589264] I [MSGID: 106479] [glusterd.c:1486:init] 
0-management: Using /var/run/gluster as pid file working directory 
[2018-02-19 05:32:38.609833] W [MSGID: 103071] 
[rdma.c:4630:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel 
creation failed [No such device] 
[2018-02-19 05:32:38.609892] W [MSGID: 103055] [rdma.c:4939:init] 
0-rdma.management: Failed to initialize IB Device 
[2018-02-19 05:32:38.609919] W [rpc-transport.c:350:rpc_transport_load] 
0-rpc-transport: 'rdma' initialization failed 
[2018-02-19 05:32:38.610149] W [rpcsvc.c:1682:rpcsvc_create_listener] 
0-rpc-service: cannot create listener, initing the transport failed 
[2018-02-19 05:32:38.610178] E [MSGID: 106243] [glusterd.c:1769:init] 
0-management: creation of 1 listeners failed, continuing with succeeded 
transport 
[2018-02-19 05:32:49.737152] I [MSGID: 106513] 
[glusterd-store.c:2241:glusterd_restore_op_version] 0-glusterd: retrieved 
op-version: 30712 
[2018-02-19 05:32:50.248992] I [MSGID: 106498] 
[glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo] 0-management: 
connect returned 0 
[2018-02-19 05:32:50.249097] I [MSGID: 106498] 
[glusterd-handler.c:3603:glusterd_friend_add_from_peerinfo] 0-management: 
connect returned 0 
[2018-02-19 05:32:50.249161] W [MSGID: 106062] 
[glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: 
Failed to get tcp-user-timeout 
[2018-02-19 05:32:50.249206] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600 
[2018-02-19 05:32:50.249327] W [MSGID: 101002] [options.c:995:xl_opt_validate] 
0-management: option 'address-family' is deprecated, preferred is 
'transport.address-family', continuing with correction 
[2018-02-19 05:32:50.254789] W [MSGID: 106062] 
[glusterd-handler.c:3400:glusterd_transport_inet_options_build] 0-glusterd: 
Failed to get tcp-user-timeout 
[2018-02-19 05:32:50.254831] I [rpc-clnt.c:1044:rpc_clnt_connection_init] 
0-management: setting frame-timeout to 600 
[2018-02-19 05:32:50.254908] W [MSGID: 101002] [options.c:995:xl_opt_validate] 
0-management: option 'address-family' is deprecated, preferred is 
'transport.address-family', continuing with correction 
[2018-02-19 05:32:50.258683] I [MSGID: 106544] 
[glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID: 
de955a28-c230-4ada-98ba-a8f404ee8827 
Final graph: 
+--+
 
1: volume management 
2: type mgmt/glusterd 
3: option rpc-auth.auth-glusterfs on 
4: option rpc-auth.auth-unix on 
5: option rpc-auth.auth-null on 
6: option rpc-auth-allow-insecure on 
7: option transport.listen-backlog 10 
8: option event-threads 1 
9: option ping-timeout 0 
10: option transport.socket.read-fail-log off 
11: option transport.socket.keepalive-interval 2 
12: option transport.socket.keepalive-time 10 
13: option transport-type rdma 
14: option working-directory /var/lib/glusterd 
15: end-volume 
16: 
+--+
 
[2018-02-19 05:32:50.259384] I [MSGID: 101190] 
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1 
[2018-02-19 05:32:50.284115] I [MSGID: 106163] 
[glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack] 0-management: 
using the op-version 30712 
[2018-02-19 05:32:50.285320] I [MSGID: 106493] 
[glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk] 0-glusterd: 

Re: [Gluster-users] Announcing Glusterfs release 3.12.6 (Long Term Maintenance)

2018-02-19 Thread Jiffin Tony Thottan



On Tuesday 20 February 2018 09:37 AM, Jiffin Tony Thottan wrote:


The Gluster community is pleased to announce the release of Gluster 
3.12.6 (packages available at [1,2,3]).


Release notes for the release can be found at [4].

We still carry the following major issue, which is reported in the 
release notes:


1.) - Expanding a gluster volume that is sharded may cause file 
corruption


    Sharded volumes are typically used for VM images. If such volumes 
are expanded or possibly contracted (i.e., add/remove bricks and 
rebalance), there are reports of VM images getting corrupted.


    The last known cause for corruption (Bug #1465123) has a fix with 
this release. As further testing is still in progress, the issue is 
retained as a major issue.


    Status of this bug can be tracked here, #1465123



The above issue got fixed in 3.12.6. Sorry for mentioning it in the 
announcement mail.


--

Jiffin



Thanks,
Gluster community


[1] https://download.gluster.org/pub/gluster/glusterfs/3.12/3.12.6/
[2] https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12
[3] https://build.opensuse.org/project/subprojects/home:glusterfs
[4] Release notes: 
https://gluster.readthedocs.io/en/latest/release-notes/3.12.6/




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Upgrade from 3.8.15 to 3.12.5

2018-02-19 Thread Atin Mukherjee
I believe the peer rejected issue is something we recently identified and
has been fixed through https://bugzilla.redhat.com/show_bug.cgi?id=1544637
and is available in 3.12.6. I'd request you to upgrade to the latest
version in 3.12 series.
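
The thread doesn't say which distribution these nodes run. Assuming CentOS 7
with the CentOS Storage SIG repositories, a per-node upgrade to 3.12.6 would
look roughly like this sketch; upgrade one node at a time and let heals finish
before moving on:

# sketch: upgrade a single node to the 3.12 series (Storage SIG repo assumed)
yum install -y centos-release-gluster312
yum update -y glusterfs\*
systemctl restart glusterd
gluster peer status                   # peers should return to Connected
gluster volume heal <VOLNAME> info    # wait for pending heals before the next node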

On Mon, Feb 19, 2018 at 12:27 PM,  wrote:

> Hi,
>
> I have a 3 node cluster (Found1, Found2, Found2) which i wanted to upgrade
> I upgraded one node from 3.8.15 to 3.12.5 and now i am having multiple
> problems with the install. The 2 nodes not upgraded are still working
> fine(Found1,2) but the one upgraded has Peer Rejected (Connected) when peer
> status is run but it also has multiple brick that have "Transport endpoint
> is not connected"  some brick seem to work some do not.
>
> any help would be appreciated.
>
> Thanks
>
>
> here are the log files
>
>
> glusterd.log
> [2018-02-19 05:32:38.589150] I [MSGID: 106478] [glusterd.c:1423:init]
> 0-management: Maximum allowed open file descriptors set to 65536
> [2018-02-19 05:32:38.589237] I [MSGID: 106479] [glusterd.c:1481:init]
> 0-management: Using /var/lib/glusterd as working directory
> [2018-02-19 05:32:38.589264] I [MSGID: 106479] [glusterd.c:1486:init]
> 0-management: Using /var/run/gluster as pid file working directory
> [2018-02-19 05:32:38.609833] W [MSGID: 103071] 
> [rdma.c:4630:__gf_rdma_ctx_create]
> 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
> [2018-02-19 05:32:38.609892] W [MSGID: 103055] [rdma.c:4939:init]
> 0-rdma.management: Failed to initialize IB Device
> [2018-02-19 05:32:38.609919] W [rpc-transport.c:350:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2018-02-19 05:32:38.610149] W [rpcsvc.c:1682:rpcsvc_create_listener]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2018-02-19 05:32:38.610178] E [MSGID: 106243] [glusterd.c:1769:init]
> 0-management: creation of 1 listeners failed, continuing with succeeded
> transport
> [2018-02-19 05:32:49.737152] I [MSGID: 106513] 
> [glusterd-store.c:2241:glusterd_restore_op_version]
> 0-glusterd: retrieved op-version: 30712
> [2018-02-19 05:32:50.248992] I [MSGID: 106498] [glusterd-handler.c:3603:
> glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
> [2018-02-19 05:32:50.249097] I [MSGID: 106498] [glusterd-handler.c:3603:
> glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
> [2018-02-19 05:32:50.249161] W [MSGID: 106062] [glusterd-handler.c:3400:
> glusterd_transport_inet_options_build] 0-glusterd: Failed to get
> tcp-user-timeout
> [2018-02-19 05:32:50.249206] I [rpc-clnt.c:1044:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2018-02-19 05:32:50.249327] W [MSGID: 101002] [options.c:995:xl_opt_validate]
> 0-management: option 'address-family' is deprecated, preferred is
> 'transport.address-family', continuing with correction
> [2018-02-19 05:32:50.254789] W [MSGID: 106062] [glusterd-handler.c:3400:
> glusterd_transport_inet_options_build] 0-glusterd: Failed to get
> tcp-user-timeout
> [2018-02-19 05:32:50.254831] I [rpc-clnt.c:1044:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2018-02-19 05:32:50.254908] W [MSGID: 101002] [options.c:995:xl_opt_validate]
> 0-management: option 'address-family' is deprecated, preferred is
> 'transport.address-family', continuing with correction
> [2018-02-19 05:32:50.258683] I [MSGID: 106544]
> [glusterd.c:158:glusterd_uuid_init] 0-management: retrieved UUID:
> de955a28-c230-4ada-98ba-a8f404ee8827
> Final graph:
> +---
> ---+
>   1: volume management
>   2: type mgmt/glusterd
>   3: option rpc-auth.auth-glusterfs on
>   4: option rpc-auth.auth-unix on
>   5: option rpc-auth.auth-null on
>   6: option rpc-auth-allow-insecure on
>   7: option transport.listen-backlog 10
>   8: option event-threads 1
>   9: option ping-timeout 0
> 10: option transport.socket.read-fail-log off
>  11: option transport.socket.keepalive-interval 2
>  12: option transport.socket.keepalive-time 10
>  13: option transport-type rdma
>  14: option working-directory /var/lib/glusterd
>  15: end-volume
>  16:
> +---
> ---+
> [2018-02-19 05:32:50.259384] I [MSGID: 101190] 
> [event-epoll.c:613:event_dispatch_epoll_worker]
> 0-epoll: Started thread with index 1
> [2018-02-19 05:32:50.284115] I [MSGID: 106163]
> [glusterd-handshake.c:1316:__glusterd_mgmt_hndsk_versions_ack]
> 0-management: using the op-version 30712
> [2018-02-19 05:32:50.285320] I [MSGID: 106493] 
> [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk]
> 0-glusterd: Received RJT from uuid: a23fa00c-4c7c-436d-9d04-0c16941c,
> host: found2.ssd.org, port: 0
> [2018-02-19 05:32:50.286561] I [MSGID: 106493] 
> [glusterd-rpc-ops.c:486:__glusterd_friend_add_cbk]
> 0-glusterd: Received RJT from uuid: 

Re: [Gluster-users] .glusterfs grown larger than volume content

2018-02-19 Thread Jorick Astrego


On 02/16/2018 03:28 AM, Rasika Preethiraj wrote:
> Hello All,
>
> I am posting this email regarding the gluster issue discussed in 
> http://lists.gluster.org/pipermail/gluster-users/2016-April/026493.html
>
> I searched everywhere but still have no answer. We are now facing a big 
> problem with running out of space. Kindly help me.
>
> Rasika
>

Hi,

We have a client with comparable issues, currently 1.5M orphaned links 
in .glusterfs.

I used this to find the orphaned links:

find .glusterfs -type l -exec file {} \; | grep broken > ~/orphanedlinks.txt


Then I get all the linknames with

cat orphanedlinks.txt | awk '{print $1}' | sed 's/://g' > cleanup-orphanedlinks.txt
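
A possible next step, not part of the original message and only after
hand-checking that the listed links really are orphaned, might be:

# CAUTION: this deletes entries under .glusterfs; verify the list first
while read -r link; do
  rm -v "$link"
done < cleanup-orphanedlinks.txt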






Met vriendelijke groet, With kind regards,

Jorick Astrego

Netbulae Virtualization Experts 



Tel: 053 20 30 270   i...@netbulae.eu   Staalsteden 4-3A   KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Different volumes on different interfaces

2018-02-19 Thread Atin Mukherjee
On Thu, Feb 15, 2018 at 1:39 AM, Gregor Burck 
wrote:

> Hi,
>
> I run a proxmox system with a glustervolume over three nodes.
> I think about setup a second volume, but want to use the other interfaces
> on
> the nodes.
>
> Is this recommended or possible?
>

It's possible. You'd need to peer probe the node with both the interfaces
and use the respective ones for the bricks while creating the volume.
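
A rough sketch of what that looks like; the hostnames are hypothetical, with
"nodeX-net2" resolving to each node's second interface:

# probe the peers by the name of their second interface as well
gluster peer probe node2-net2
gluster peer probe node3-net2

# build the new volume from bricks addressed via the second interface
gluster volume create vol2 replica 3 \
  node1-net2:/bricks/vol2 node2-net2:/bricks/vol2 node3-net2:/bricks/vol2
gluster volume start vol2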


>
> Bye
>
> Gregor
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] inotify and filesystem monitors

2018-02-19 Thread lejeczek

hi everyone

do you guys know why incrond does not catch/see what happens 
inside autofs-mounted gluster volumes?
I rsync into such a mount point and incron is oblivious; from its 
perspective nothing happened.


many thanks, L.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-19 Thread TomK

On 2/19/2018 2:39 AM, TomK wrote:
+ gluster users as well.  Just read another post on the mailing lists 
about a similar ask from Nov which didn't really have a clear answer.


Perhaps there's a way to get NFSv4 work with GlusterFS without NFS 
Ganesha then?


Cheers,
Tom


Hey All,

I've setup GlusterFS on two virtuals and enabled NFS Ganesha on each 
node.  ATM the configs are identical between the two NFS Ganesha hosts. 
(Probably shouldn't be but I'm just testing things out.)


I need HA capability and notice these instructions here:

http://aravindavkgluster.readthedocs.io/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/ 



However, I don't have the glusterfs-ganesha package available on this CentOS 
Linux release 7.4.1708 (Core), and the CentOS 7 maintainers haven't uploaded 
some of the 2.5.x packages yet, so I can't use that version.


glusterfs-api-3.12.6-1.el7.x86_64
glusterfs-libs-3.12.6-1.el7.x86_64
glusterfs-3.12.6-1.el7.x86_64
glusterfs-fuse-3.12.6-1.el7.x86_64
glusterfs-server-3.12.6-1.el7.x86_64
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-client-xlators-3.12.6-1.el7.x86_64
glusterfs-cli-3.12.6-1.el7.x86_64

nfs-ganesha-xfs-2.3.2-1.el7.x86_64
nfs-ganesha-vfs-2.3.2-1.el7.x86_64
nfs-ganesha-2.3.2-1.el7.x86_64
nfs-ganesha-gluster-2.3.2-1.el7.x86_64

The only high availability packages are the following but they don't 
come with any instructions that I can find:


storhaug.noarch : High-Availability Add-on for NFS-Ganesha and Samba
storhaug-nfs.noarch : storhaug NFS-Ganesha module

Given that I'm missing that one package above, will configuring using 
ganesha-ha.conf still work?  Or should I be looking at another option 
altogether?
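
For reference, in the older (up to 3.10) Gluster/Ganesha HA integration,
/etc/ganesha/ganesha-ha.conf typically looked something like the sketch below
(node names and VIPs are hypothetical); without the glusterfs-ganesha package
the file by itself will not give you HA:

# /etc/ganesha/ganesha-ha.conf (sketch)
HA_NAME="ganesha-ha-cluster"
HA_VOL_SERVER="nfs1"
HA_CLUSTER_NODES="nfs1,nfs2"
VIP_nfs1="192.168.1.101"
VIP_nfs2="192.168.1.102"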


Appreciate any help.  Ty!




--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-19 Thread Kaleb S. KEITHLEY
On 02/19/2018 11:37 AM, TomK wrote:
> On 2/19/2018 10:55 AM, Kaleb S. KEITHLEY wrote:
> Yep, I noticed a couple of pages including this for 'storhaug
> configuration' off google.  Adding 'mailing list' to the search didn't
> help alot:
> 
> https://sourceforge.net/p/nfs-ganesha/mailman/message/35929089/
> 
> https://www.spinics.net/lists/gluster-users/msg33018.html
> 
> Hence the ask here.  storhaug feels like it's not moving with any sort
> of update now.
> 
> Any plans to move back to the previous NFS Ganesha HA model with
> upcoming GlusterFS versions as a result?

No.

(re)writing or finishing storhaug has been on my plate ever since the
guy who was supposed to do it didn't.

I have lots of other stuff to do too. All I can say is it'll get done
when it gets done.


> 
> In the meantime I'll look to cobble up the GlusterFS 3.10 packages and
> try with those per your suggestion.
> 
> What's your thoughts on using HAPROXY / keepalived w/ NFS Ganesha and
> GlusterFS?  Anyone tried this sort of combination?  I want to avoid the
> situation where I have to remount clients as a result of a node failing.
>  In other words, avoid this situation:
> 
> [root@yes01 ~]# cd /n
> -bash: cd: /n: Stale file handle
> [root@yes01 ~]#
> 
> Cheers,
> Tom
> 
>> On 02/19/2018 10:24 AM, TomK wrote:
>>> On 2/19/2018 2:39 AM, TomK wrote:
>>> + gluster users as well.  Just read another post on the mailing lists
>>> about a similar ask from Nov which didn't really have a clear answer.
>>
>> That's funny because I've answered questions like this several times.
>>
>> Gluster+Ganesha+Pacemaker-based HA is available up to GlusterFS 3.10.x.
>>
>> If you need HA, that is one "out of the box" option.
>>
>> There's support for using CTDB in Samba for Ganesha HA, and people have
>> used it successfully with Gluster+Ganesha.
>>
>>>
>>> Perhaps there's a way to get NFSv4 work with GlusterFS without NFS
>>> Ganesha then?
>>
>> Not that I'm aware of.
>>
>>>
>>> Cheers,
>>> Tom
>>>
 Hey All,

 I've setup GlusterFS on two virtuals and enabled NFS Ganesha on each
 node.  ATM the configs are identical between the two NFS Ganesha
 hosts. (Probably shouldn't be but I'm just testing things out.)

 I need HA capability and notice these instructions here:

 http://aravindavkgluster.readthedocs.io/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/



 However I don't have package glusterfs-ganesha available on this
 CentOS Linux release 7.4.1708 (Core) and the maintainer's of CentOS 7
 haven't uploaded some of the 2.5.x packages yet so I can't use that
 version.

 glusterfs-api-3.12.6-1.el7.x86_64
 glusterfs-libs-3.12.6-1.el7.x86_64
 glusterfs-3.12.6-1.el7.x86_64
 glusterfs-fuse-3.12.6-1.el7.x86_64
 glusterfs-server-3.12.6-1.el7.x86_64
 python2-glusterfs-api-1.1-1.el7.noarch
 glusterfs-client-xlators-3.12.6-1.el7.x86_64
 glusterfs-cli-3.12.6-1.el7.x86_64

 nfs-ganesha-xfs-2.3.2-1.el7.x86_64
 nfs-ganesha-vfs-2.3.2-1.el7.x86_64
 nfs-ganesha-2.3.2-1.el7.x86_64
 nfs-ganesha-gluster-2.3.2-1.el7.x86_64

 The only high availability packages are the following but they don't
 come with any instructions that I can find:

 storhaug.noarch : High-Availability Add-on for NFS-Ganesha and Samba
 storhaug-nfs.noarch : storhaug NFS-Ganesha module

 Given that I'm missing that one package above, will configuring using
 ganesha-ha.conf still work?  Or should I be looking at another option
 alltogether?

 Appreciate any help.  Ty!

>>>
>>>
>>
> 
> 

-- 

Kaleb
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-19 Thread TomK

On 2/19/2018 12:09 PM, Kaleb S. KEITHLEY wrote:
Sounds good and no problem at all.  Will look out for this update in the 
future.  In the meantime, there are a few things I'll try, including your 
suggestion.


Was looking for a sense of direction with the projects and now you've 
given that.  Ty.  Appreciated!


Cheers,
Tom



On 02/19/2018 11:37 AM, TomK wrote:

On 2/19/2018 10:55 AM, Kaleb S. KEITHLEY wrote:
Yep, I noticed a couple of pages including this for 'storhaug
configuration' off google.  Adding 'mailing list' to the search didn't
help alot:

https://sourceforge.net/p/nfs-ganesha/mailman/message/35929089/

https://www.spinics.net/lists/gluster-users/msg33018.html

Hence the ask here.  storhaug feels like it's not moving with any sort
of update now.

Any plans to move back to the previous NFS Ganesha HA model with
upcoming GlusterFS versions as a result?


No.

(re)writing or finishing storhaug has been on my plate ever since the
guy who was supposed to do it didn't.

I have lots of other stuff to do too. All I can say is it'll get done
when it gets done.




In the meantime I'll look to cobble up the GlusterFS 3.10 packages and
try with those per your suggestion.

What's your thoughts on using HAPROXY / keepalived w/ NFS Ganesha and
GlusterFS?  Anyone tried this sort of combination?  I want to avoid the
situation where I have to remount clients as a result of a node failing.
  In other words, avoid this situation:

[root@yes01 ~]# cd /n
-bash: cd: /n: Stale file handle
[root@yes01 ~]#

Cheers,
Tom


On 02/19/2018 10:24 AM, TomK wrote:

On 2/19/2018 2:39 AM, TomK wrote:
+ gluster users as well.  Just read another post on the mailing lists
about a similar ask from Nov which didn't really have a clear answer.


That's funny because I've answered questions like this several times.

Gluster+Ganesha+Pacemaker-based HA is available up to GlusterFS 3.10.x.

If you need HA, that is one "out of the box" option.

There's support for using CTDB in Samba for Ganesha HA, and people have
used it successfully with Gluster+Ganesha.



Perhaps there's a way to get NFSv4 work with GlusterFS without NFS
Ganesha then?


Not that I'm aware of.



Cheers,
Tom


Hey All,

I've setup GlusterFS on two virtuals and enabled NFS Ganesha on each
node.  ATM the configs are identical between the two NFS Ganesha
hosts. (Probably shouldn't be but I'm just testing things out.)

I need HA capability and notice these instructions here:

http://aravindavkgluster.readthedocs.io/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/



However I don't have package glusterfs-ganesha available on this
CentOS Linux release 7.4.1708 (Core) and the maintainer's of CentOS 7
haven't uploaded some of the 2.5.x packages yet so I can't use that
version.

glusterfs-api-3.12.6-1.el7.x86_64
glusterfs-libs-3.12.6-1.el7.x86_64
glusterfs-3.12.6-1.el7.x86_64
glusterfs-fuse-3.12.6-1.el7.x86_64
glusterfs-server-3.12.6-1.el7.x86_64
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-client-xlators-3.12.6-1.el7.x86_64
glusterfs-cli-3.12.6-1.el7.x86_64

nfs-ganesha-xfs-2.3.2-1.el7.x86_64
nfs-ganesha-vfs-2.3.2-1.el7.x86_64
nfs-ganesha-2.3.2-1.el7.x86_64
nfs-ganesha-gluster-2.3.2-1.el7.x86_64

The only high availability packages are the following but they don't
come with any instructions that I can find:

storhaug.noarch : High-Availability Add-on for NFS-Ganesha and Samba
storhaug-nfs.noarch : storhaug NFS-Ganesha module

Given that I'm missing that one package above, will configuring using
ganesha-ha.conf still work?  Or should I be looking at another option
alltogether?

Appreciate any help.  Ty!














--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-19 Thread TomK

On 2/19/2018 10:55 AM, Kaleb S. KEITHLEY wrote:
Yep, I noticed a couple of pages including this for 'storhaug 
configuration' off google.  Adding 'mailing list' to the search didn't 
help a lot:


https://sourceforge.net/p/nfs-ganesha/mailman/message/35929089/

https://www.spinics.net/lists/gluster-users/msg33018.html

Hence the ask here.  storhaug feels like it's not moving with any sort 
of update now.


Any plans to move back to the previous NFS Ganesha HA model with 
upcoming GlusterFS versions as a result?


In the meantime I'll look to cobble up the GlusterFS 3.10 packages and 
try with those per your suggestion.


What's your thoughts on using HAPROXY / keepalived w/ NFS Ganesha and 
GlusterFS?  Anyone tried this sort of combination?  I want to avoid the 
situation where I have to remount clients as a result of a node failing. 
 In other words, avoid this situation:


[root@yes01 ~]# cd /n
-bash: cd: /n: Stale file handle
[root@yes01 ~]#
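
One way to get just a floating IP without Pacemaker would be a keepalived
instance like the sketch below on each Ganesha node (interface, router id and
address are hypothetical); note this only moves the IP and does nothing about
NFS grace/state handover, which is what the Pacemaker integration handled:

# /etc/keepalived/keepalived.conf (sketch)
vrrp_instance ganesha_vip {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24
    }
}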

Cheers,
Tom


On 02/19/2018 10:24 AM, TomK wrote:

On 2/19/2018 2:39 AM, TomK wrote:
+ gluster users as well.  Just read another post on the mailing lists
about a similar ask from Nov which didn't really have a clear answer.


That's funny because I've answered questions like this several times.

Gluster+Ganesha+Pacemaker-based HA is available up to GlusterFS 3.10.x.

If you need HA, that is one "out of the box" option.

There's support for using CTDB in Samba for Ganesha HA, and people have
used it successfully with Gluster+Ganesha.



Perhaps there's a way to get NFSv4 work with GlusterFS without NFS
Ganesha then?


Not that I'm aware of.



Cheers,
Tom


Hey All,

I've setup GlusterFS on two virtuals and enabled NFS Ganesha on each
node.  ATM the configs are identical between the two NFS Ganesha
hosts. (Probably shouldn't be but I'm just testing things out.)

I need HA capability and notice these instructions here:

http://aravindavkgluster.readthedocs.io/en/latest/Administrator%20Guide/Configuring%20HA%20NFS%20Server/


However I don't have package glusterfs-ganesha available on this
CentOS Linux release 7.4.1708 (Core) and the maintainer's of CentOS 7
haven't uploaded some of the 2.5.x packages yet so I can't use that
version.

glusterfs-api-3.12.6-1.el7.x86_64
glusterfs-libs-3.12.6-1.el7.x86_64
glusterfs-3.12.6-1.el7.x86_64
glusterfs-fuse-3.12.6-1.el7.x86_64
glusterfs-server-3.12.6-1.el7.x86_64
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-client-xlators-3.12.6-1.el7.x86_64
glusterfs-cli-3.12.6-1.el7.x86_64

nfs-ganesha-xfs-2.3.2-1.el7.x86_64
nfs-ganesha-vfs-2.3.2-1.el7.x86_64
nfs-ganesha-2.3.2-1.el7.x86_64
nfs-ganesha-gluster-2.3.2-1.el7.x86_64

The only high availability packages are the following but they don't
come with any instructions that I can find:

storhaug.noarch : High-Availability Add-on for NFS-Ganesha and Samba
storhaug-nfs.noarch : storhaug NFS-Ganesha module

Given that I'm missing that one package above, will configuring using
ganesha-ha.conf still work?  Or should I be looking at another option
alltogether?

Appreciate any help.  Ty!









--
Cheers,
Tom K.
-

Living on earth is expensive, but it includes a free trip around the sun.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users