On 30/04/19 6:59 PM, Strahil Nikolov wrote:
Hi,

I'm posting this again as it got bounced.
Keep in mind that corosync/pacemaker is hard for new admins/users to set up 
properly.

I'm still trying to remediate the effects of poor configuration at work.
Also, storhaug is nice for hyperconverged setups where the host is not only 
hosting bricks but other workloads as well.
Corosync/pacemaker requires proper fencing to be set up, and most of the stonith 
resources 'shoot the other node in the head'.
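For illustration, registering an IPMI fence device with pcs looks roughly like 
the sketch below (node name, address, and credentials are placeholders, and 
parameter names vary between fence-agents versions):

    # sketch: create a stonith resource that can power-fence node1
    pcs stonith create fence-node1 fence_ipmilan \
        ip=192.168.1.101 username=admin password=secret \
        pcmk_host_list=node1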
I would be happy to see an easy-to-deploy option (say, 'cluster.enable-ha-ganesha 
true') with gluster bringing up the floating IPs and taking care of the 
NFS locks, so that no disruption is felt by the clients.


It does take care of those, but certain prerequisites need to be followed. Note that fencing won't be configured for this setup; we may think about that in the future.
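For reference, a minimal sketch of the pre-3.10 enablement flow, assuming the 
documented ganesha-ha.conf options; node names and VIP addresses below are 
placeholders:

    # /etc/ganesha/ganesha-ha.conf (placeholder values)
    HA_NAME="ganesha-ha-demo"
    HA_CLUSTER_NODES="server1,server2"
    VIP_server1="10.0.2.1"
    VIP_server2="10.0.2.2"

    # prerequisite: shared storage volume for lock/state data
    gluster volume set all cluster.enable-shared-storage enable
    # bring up the pacemaker/corosync-backed ganesha cluster
    gluster nfs-ganesha enable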

--

Jiffin


Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney <jim.kin...@gmail.com> wrote:
+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
instead of fuse mounts. Having an integrated, designed-in process to coordinate 
multiple nodes into an HA cluster will be very welcome.
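For example, the client-side switch is roughly the following (volume name, 
server, and floating IP are placeholders; the ganesha integration exports the 
volume under /<volname>):

    # fuse mount today
    mount -t glusterfs server1:/myvol /mnt/data
    # NFS mount against a ganesha floating IP
    mount -t nfs -o vers=4.1 10.0.2.1:/myvol /mnt/data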

On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan <jthot...@redhat.com> 
wrote:
Hi all,

Some of you folks may be familiar with the HA solution that gluster provided for 
nfs-ganesha using pacemaker and corosync.

That feature was removed in glusterfs 3.10 in favour of the common HA project 
"Storhaug". However, Storhaug has not progressed much over the last two years 
and its development is currently halted, so we are planning to restore the old 
HA ganesha solution to the gluster code repository, with some improvements, 
targeting the next gluster release (7).

I have opened an issue [1] with the details and posted an initial set of patches 
[2].

Please share your thoughts on the same.


Regards,

Jiffin

[1] https://github.com/gluster/glusterfs/issues/663

[2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)


--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.