Re: [Gluster-users] Active-Active HA ganesha-nfs configuration

2018-08-07 Thread Herb Burnswell
Thank you for the reply.

It is a bit concerning that you mention the NFS-Ganesha setup on Gluster 3.8
and 3.9 is not stable.

Regarding SMB, I am already using it with CTDB and it works great.  However, I
would also like the ability to export NFS mounts.  I have read about using
CTDB for HA NFS.  Is that a viable/better solution?
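
From what I've read, the rough shape of a CTDB-managed NFS setup would be
something like the below (the interface name and the CTDB_MANAGES_NFS knob are
taken from docs I found, not something I've tested; the VIPs are the same ones
I'd use for Ganesha):

  # /etc/ctdb/public_addresses -- floating IPs CTDB moves between the nodes
  10.19.3.66/24 eth0
  10.19.3.67/24 eth0

  # /etc/sysconfig/ctdb -- have CTDB monitor and restart the NFS service
  CTDB_MANAGES_NFS=yes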

Thanks,

HB

On Tue, Aug 7, 2018 at 12:51 AM, David Spisla  wrote:

> Hello Herb,
> this setup is not as easy as it sounds. Here you can find additional setup
> instructions (look for the chapter on NFS-Ganesha HA):
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/pdf/administration_guide/Red_Hat_Gluster_Storage-3.2-Administration_Guide-en-US.pdf
>
> Maybe you should use Gluster 3.10, because the setup for 3.8 and 3.9 is not
> stable. With Gluster 3.10 I had a stable HA cluster.
> Also be aware that since Gluster 3.11 this setup may not work, because the
> developers are switching to storhaug:
> https://github.com/linux-ha-storage/storhaug/blob/master/README.md
>
> At the moment there is confusion in the community about that issue.
> Storhaug is not complete and still under development.
> I don't know if my information is up to date. Because of that I switched
> to Samba (SMB). You can also have an HA cluster with Samba/CTDB.
>
> Regards
> David Spisla
>
>
> 2018-08-07 4:40 GMT+02:00 Herb Burnswell :
>
>> All,
>>
>> I would like to set up HA NFS (Active/Active) on our 2 node gluster
>> environment using NFS-Ganesha.
>>
>> Specs:
>>
>> - RHEL 7
>> - glusterfs 3.8.15 built on Aug 16 2017 14:48:01
>>
>> I am following the process in this documentation; however, it is
>> confusing to me:
>>
>> https://docs.gluster.org/en/v3/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
>>
>> I already had Pacemaker/Corosync up and running on our 2 node gluster
>> environment with fence resources on each.
>>
>> After the package installs and confirming "Pre-requisites to run
>> NFS-Ganesha", here is what I've done:
>>
>> 1. # gluster volume set all cluster.enable-shared-storage enable
>>
>> 2. Create the /etc/ganesha/ganesha-ha.conf file (scrubbed; I'm also assuming
>> that HA_NAME should match the already created Pacemaker cluster name):
>>
>> #
>> # HA File
>>
>> HA_NAME="clustername"
>> HA_CLUSTER_NODES="server1,server2"
>>
>> VIP_server1="10.19.3.66"
>> VIP_server2="10.19.3.67"
>>
>> 3. # gluster nfs-ganesha enable
>> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the
>> trusted pool. Do you still want to continue?
>>  (y/n) y
>> This will take a few minutes to complete. Please wait ..
>> nfs-ganesha : success
>>
>>
>> 4. # gluster volume set vol1 ganesha.enable on
>> volume set: success
>>
>> At this point I can see the export available:
>>
>> # showmount -e
>> Export list for server1:
>> /vol1 (everyone)
>>
>> And I can successfully mount the export from another server.
>>
>> However, nothing appears to have been done regarding HA.  nfs-ganesha is not
>> started on server2 and no additional resources are created in Pacemaker.
>>
>> Can anyone provide guidance as to what I may be doing incorrectly?
>>
>> Thanks,
>>
>> HB
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Active-Active HA ganesha-nfs configuration

2018-08-07 Thread Kaleb S. KEITHLEY
Pacemaker and Corosync running on both nodes?

Ports in the firewall open for Pacemaker and Corosync?

SELinux?

`pcs status` shows what?

You should have a pair of ganesha_mon, ganesha_grace, ganesha_nfsd, and
VIP resource agents more or less matching your two hosts and the two VIPs.

Look in /var/log/messages and /var/log/cluster/* for any errors or
anomalies.
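
If it helps, something along these lines should answer most of those questions
(assuming a stock RHEL 7 box with firewalld; adjust to your environment):

  systemctl status pacemaker corosync pcsd
  firewall-cmd --list-services   # expect high-availability (plus nfs, mountd, rpc-bind)
  getenforce
  pcs status
  grep -iE 'error|fail' /var/log/messages /var/log/cluster/*.log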

You might want to check with the real experts in #clusterlabs on freenode
too. When the stuff works it's great, but when it doesn't, they're the
experts.

--

Kaleb



On 08/06/2018 10:40 PM, Herb Burnswell wrote:
> All,
> 
> I would like to set up HA NFS (Active/Active) on our 2 node gluster
> environment using NFS-Ganesha.
> 
> Specs:
> 
> - RHEL 7
> - glusterfs 3.8.15 built on Aug 16 2017 14:48:01
> 
> I am following the process in this documentation; however, it is
> confusing to me:
> 
> https://docs.gluster.org/en/v3/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
> 
> I already had Pacemaker/Corosync up and running on our 2 node
> gluster environment with fence resources on each.
> 
> After the package installs and confirming "Pre-requisites to run
> NFS-Ganesha", here is what I've done:
> 
> 1. # gluster volume set all cluster.enable-shared-storage enable
> 
> 2. Create the /etc/ganesha/ganesha-ha.conf file (scrubbed; I'm also
> assuming that HA_NAME should match the already created Pacemaker
> cluster name):
> 
> #
> # HA File
> 
> HA_NAME="clustername"
> HA_CLUSTER_NODES="server1,server2"
> 
> VIP_server1="10.19.3.66"
> VIP_server2="10.19.3.67"
> 
> 3. # gluster nfs-ganesha enable
> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the
> trusted pool. Do you still want to continue?
>  (y/n) y
> This will take a few minutes to complete. Please wait ..
> nfs-ganesha : success 
> 
> 
> 4. # gluster volume set vol1 ganesha.enable on
> volume set: success
> 
> At this point I can see the export available:
> 
> # showmount -e
> Export list for server1:
> /vol1 (everyone)
> 
> And I can successfully mount the export from another server.
> 
> However, nothing appears to have been done regarding HA.  nfs-ganesha is not
> started on server2 and no additional resources are created in Pacemaker.
> 
> Can anyone provide guidance as to what I may be doing incorrectly?
> 
> Thanks,
> 
> HB
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Active-Active HA ganesha-nfs configuration

2018-08-07 Thread David Spisla
Hello Herb,
this setup is not as easy as it sounds. Here you can find additional setup
instructions (look for the chapter on NFS-Ganesha HA):
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/pdf/administration_guide/Red_Hat_Gluster_Storage-3.2-Administration_Guide-en-US.pdf

Maybe you should use Gluster 3.10, because the setup for 3.8 and 3.9 is not
stable. With Gluster 3.10 I had a stable HA cluster.
Also be aware that since Gluster 3.11 this setup may not work, because the
developers are switching to storhaug:
https://github.com/linux-ha-storage/storhaug/blob/master/README.md

At the moment there is confusion in the community about that issue.
Storhaug is not complete and still under development.
I don't know if my information is up to date. Because of that I switched
to Samba (SMB). You can also have an HA cluster with Samba/CTDB.
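
The core of such a setup is roughly this (the node IPs and interface name are
only placeholders, and the exact config paths depend on your distribution):

  # /etc/ctdb/nodes -- one internal IP per cluster node
  192.168.0.1
  192.168.0.2

  # /etc/ctdb/public_addresses -- floating IPs that follow the healthy nodes
  10.0.0.1/24 eth0
  10.0.0.2/24 eth0

  # smb.conf
  [global]
      clustering = yes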

Regards
David Spisla


2018-08-07 4:40 GMT+02:00 Herb Burnswell :

> All,
>
> I would like to set up HA NFS (Active/Active) on our 2 node gluster
> environment using NFS-Ganesha.
>
> Specs:
>
> - RHEL 7
> - glusterfs 3.8.15 built on Aug 16 2017 14:48:01
>
> I am following the process in this documentation; however, it is confusing
> to me:
>
> https://docs.gluster.org/en/v3/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
>
> I already had Pacemaker/Corosync up and running on our 2 node gluster
> environment with fence resources on each.
>
> After the package installs and confirming "Pre-requisites to run
> NFS-Ganesha", here is what I've done:
>
> 1. # gluster volume set all cluster.enable-shared-storage enable
>
> 2. Create the /etc/ganesha/ganesha-ha.conf file (scrubbed; I'm also assuming
> that HA_NAME should match the already created Pacemaker cluster name):
>
> #
> # HA File
>
> HA_NAME="clustername"
> HA_CLUSTER_NODES="server1,server2"
>
> VIP_server1="10.19.3.66"
> VIP_server2="10.19.3.67"
>
> 3. # gluster nfs-ganesha enable
> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the
> trusted pool. Do you still want to continue?
>  (y/n) y
> This will take a few minutes to complete. Please wait ..
> nfs-ganesha : success
>
>
> 4. # gluster volume set vol1 ganesha.enable on
> volume set: success
>
> At this point I can see the export available:
>
> # showmount -e
> Export list for server1:
> /vol1 (everyone)
>
> And I can successfully mount the export from another server.
>
> However, nothing appears to have been done regarding HA.  nfs-ganesha is not
> started on server2 and no additional resources are created in Pacemaker.
>
> Can anyone provide guidance as to what I may be doing incorrectly?
>
> Thanks,
>
> HB
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-07 Thread lemonnierk
Hi,

Any chance that was what's leaking for the libgfapi users too?
I assume the next release you mention will be 3.12.13, is that correct?

On Tue, Aug 07, 2018 at 11:33:58AM +0530, Hari Gowtham wrote:
> Hi,
> 
> The reason for the memory leak was found. The patch
> (https://review.gluster.org/#/c/20437/) will fix the leak.
> It should be made available with the next release. You can keep an eye on it.
> For more info, refer to the above-mentioned bug.
> 
> Regards,
> Hari.
> On Fri, Aug 3, 2018 at 7:36 PM Alex K  wrote:
> >
> > Thank you Hari.
> > Hope we get a fix soon to put us out of our misery :)
> >
> > Alex
> >
> > On Fri, Aug 3, 2018 at 4:58 PM, Hari Gowtham  wrote:
> >>
> >> Hi,
> >>
> >> It is a known issue.
> >> This bug will give more insight on the memory leak.
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1593826
> >> On Fri, Aug 3, 2018 at 6:15 PM Alex K  wrote:
> >> >
> >> > Hi,
> >> >
> >> > I was using 3.8.12-1 up to 3.8.15-2. I did not have issues with these
> >> > versions.
> >> > I still have systems running with those with no such memory leaks.
> >> >
> >> > Thanx,
> >> > Alex
> >> >
> >> >
> >> > On Fri, Aug 3, 2018 at 3:13 PM, Nithya Balachandran 
> >> >  wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> What version of gluster were you using before you  upgraded?
> >> >>
> >> >> Regards,
> >> >> Nithya
> >> >>
> >> >> On 3 August 2018 at 16:56, Alex K  wrote:
> >> >>>
> >> >>> Hi all,
> >> >>>
> >> >>> I am using gluster 3.12.9-1 on oVirt 4.1.9 and I have observed
> >> >>> consistently high memory use which at some point renders the hosts
> >> >>> unresponsive. This behavior is also observed while using 3.12.11-1
> >> >>> with oVirt 4.2.5. I did not have this issue prior to upgrading gluster.
> >> >>>
> >> >>> I have seen a relevant bug report about memory leaks in gluster and it
> >> >>> seems that this is the cause of my trouble. To temporarily resolve the
> >> >>> high memory issue, I put hosts in maintenance and then activate them
> >> >>> again. This indicates that the memory leak is caused by the gluster
> >> >>> client. oVirt is using FUSE mounts.
> >> >>>
> >> >>> Is there any bug fix available for this?
> >> >>> This issue is hitting us hard in several production installations.
> >> >>>
> >> >>> Thanx,
> >> >>> Alex
> >> >>>
> >> >>> ___
> >> >>> Gluster-users mailing list
> >> >>> Gluster-users@gluster.org
> >> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
> >> >>
> >> >>
> >> >
> >> > ___
> >> > Gluster-users mailing list
> >> > Gluster-users@gluster.org
> >> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >>
> >>
> >> --
> >> Regards,
> >> Hari Gowtham.
> >
> >
> 
> 
> -- 
> Regards,
> Hari Gowtham.
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

-- 
PGP Fingerprint : 0x624E42C734DAC346
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster High CPU/Clients Hanging on Heavy Writes

2018-08-07 Thread Xavi Hernandez
Hi Yuhao,

On Mon, 6 Aug 2018, 15:26 Yuhao Zhang,  wrote:

> Hello,
>
> I just experienced another hang one hour ago, and the server was not
> even under heavy IO.
>
> Atin, I attached the process monitoring results and another statedump.
>
> Xavi, ZFS was fine; during the hang, I could still write directly to the
> ZFS volume. My ZFS version: ZFS: Loaded module v0.6.5.6-0ubuntu16, ZFS pool
> version 5000, ZFS filesystem version 5
>

I highly recommend upgrading to at least version 0.6.5.8. It fixes a
kernel panic that can happen when ZFS is used with Gluster. However, this is
not your current problem.

The top statistics show low available memory and high CPU utilization by the
kswapd process (along with one of the gluster processes). I've seen frequent
memory management problems with ZFS. Have you configured any ZFS
parameters? It's highly recommended to tweak some memory limits.

If that is the problem, there is one thing that should alleviate it (and
help you see whether it is related):

echo 3 >/proc/sys/vm/drop_caches

This should be done on all bricks from time to time. You can wait until the
problem appears, but in that case the recovery time can be longer.
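
As an illustration only, a crude cron entry on each brick node could do it
(the interval is just a guess, adjust it to your workload):

  # /etc/cron.d/drop-caches
  0 */6 * * * root /bin/sh -c 'echo 3 > /proc/sys/vm/drop_caches'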

I think this should fix the high CPU usage of kswapd. If so, we'll need to
tweak some ZFS parameters.
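
The usual one is the ARC size cap. As an illustration only (the 4 GiB value is
made up, size it to your RAM), that looks like:

  # /etc/modprobe.d/zfs.conf -- applied at module load
  options zfs zfs_arc_max=4294967296

  # or at runtime
  echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max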

I'm not sure if the high CPU usage of gluster could be related to this or
not.

Xavi

>
> Thank you,
> Yuhao
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-07 Thread Hari Gowtham
Hi,

The reason for the memory leak was found. The patch
(https://review.gluster.org/#/c/20437/) will fix the leak.
It should be made available with the next release. You can keep an eye on it.
For more info, refer to the above-mentioned bug.

Regards,
Hari.
On Fri, Aug 3, 2018 at 7:36 PM Alex K  wrote:
>
> Thank you Hari.
> Hope we get a fix soon to put us out of our misery :)
>
> Alex
>
> On Fri, Aug 3, 2018 at 4:58 PM, Hari Gowtham  wrote:
>>
>> Hi,
>>
>> It is a known issue.
>> This bug will give more insight on the memory leak.
>> https://bugzilla.redhat.com/show_bug.cgi?id=1593826
>> On Fri, Aug 3, 2018 at 6:15 PM Alex K  wrote:
>> >
>> > Hi,
>> >
>> > I was using 3.8.12-1 up to 3.8.15-2. I did not have issues with these
>> > versions.
>> > I still have systems running with those with no such memory leaks.
>> >
>> > Thanx,
>> > Alex
>> >
>> >
>> > On Fri, Aug 3, 2018 at 3:13 PM, Nithya Balachandran  
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> What version of gluster were you using before you  upgraded?
>> >>
>> >> Regards,
>> >> Nithya
>> >>
>> >> On 3 August 2018 at 16:56, Alex K  wrote:
>> >>>
>> >>> Hi all,
>> >>>
>> >>> I am using gluster 3.12.9-1 on oVirt 4.1.9 and I have observed
>> >>> consistently high memory use which at some point renders the hosts
>> >>> unresponsive. This behavior is also observed while using 3.12.11-1 with
>> >>> oVirt 4.2.5. I did not have this issue prior to upgrading gluster.
>> >>>
>> >>> I have seen a relevant bug report about memory leaks in gluster and it
>> >>> seems that this is the cause of my trouble. To temporarily resolve the
>> >>> high memory issue, I put hosts in maintenance and then activate them
>> >>> again. This indicates that the memory leak is caused by the gluster
>> >>> client. oVirt is using FUSE mounts.
>> >>>
>> >>> Is there any bug fix available for this?
>> >>> This issue is hitting us hard in several production installations.
>> >>>
>> >>> Thanx,
>> >>> Alex
>> >>>
>> >>> ___
>> >>> Gluster-users mailing list
>> >>> Gluster-users@gluster.org
>> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> >>
>> >>
>> >
>> > ___
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>> Regards,
>> Hari Gowtham.
>
>


-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users