Re: [ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Digimer
On 25/08/15 04:45 AM, Ulrich Windl wrote:
> >>> Digimer wrote on 24.08.2015 at 18:20 in message
> <55db4453.10...@alteeve.ca>:
> [...]
>> Using a pair of nodes with a traditional file system exported by NFS and
>> made accessible by a floating (virtual) IP address gives you redundancy
>> without incurring the complexity and performance overhead of cluster
>> locking. You won't need clvmd either. The trade-off, though, is
>> that if/when the primary fails, the nfs daemon will appear to the users
>> to restart, and that may require a reconnection (I'm not sure; I use nfs
>> sparingly).
> 
> But that's a cheap trick: you say don't provide HA storage (a cluster FS), but
> use an existing one (NFS). How do you build an HA NFS server? You need another
> cluster. Not everybody has that many nodes available.

DRBD in single-primary mode will do the job just fine. Recovery is simply
a matter of: fence -> promote to primary -> mount -> start NFS -> take over
the virtual IP. Done.

Only two nodes are needed. This is a common setup.
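
As a rough Pacemaker sketch of that layout (a sketch only: the resource
names, the DRBD resource "r0", the device paths and the IP below are
placeholders, exact agent parameters vary by distribution, and
stonith/fencing is assumed to be configured already, since that is what
makes the "fence" step safe):

  # DRBD in single-primary; promoted on one node at a time
  pcs resource create nfs_drbd ocf:linbit:drbd drbd_resource=r0 \
      op monitor interval=29s role=Master op monitor interval=31s role=Slave
  pcs resource master nfs_drbd_ms nfs_drbd master-max=1 master-node-max=1 \
      clone-max=2 clone-node-max=1 notify=true

  # filesystem -> nfs server -> floating IP, started in that order
  pcs resource create nfs_fs ocf:heartbeat:Filesystem \
      device=/dev/drbd0 directory=/srv/nfs fstype=xfs
  pcs resource create nfs_daemon ocf:heartbeat:nfsserver \
      nfs_shared_infodir=/srv/nfs/nfsinfo
  pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24
  pcs resource group add nfs_group nfs_fs nfs_daemon nfs_vip

  # run the group only where DRBD is primary, and only after promotion
  pcs constraint colocation add nfs_group with master nfs_drbd_ms INFINITY
  pcs constraint order promote nfs_drbd_ms then start nfs_group

On failover the surviving node fences the peer, gets promoted, mounts the
filesystem, restarts the NFS daemon and takes the virtual IP, which is the
sequence described above; clients may see a brief stall or need to reconnect.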

>> Generally speaking, I recommend avoiding cluster FSes unless they're
>> really required. I say this as a person who uses gfs2 in every cluster I
>> build, but I do so carefully and in limited ways. In my case, gfs2 backs
>> ISOs and XML definition files for VMs, things that change rarely, so the
>> cluster-locking overhead is all but a non-issue, and I have to have DLM
>> for clustered LVM anyway, so I've already incurred the complexity cost;
>> so hey, why not.
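
For reference, that DLM + clvmd + gfs2 stack looks roughly like this in pcs
terms (a sketch only, assuming the controld and clvm resource agents; the
volume group, logical volume and mount point are placeholders, and fencing
is again assumed to be in place):

  # gfs2 must freeze rather than carry on without quorum
  pcs property set no-quorum-policy=freeze

  # dlm and clvmd run as interleaved clones on every node
  pcs resource create dlm ocf:pacemaker:controld \
      op monitor interval=30s on-fail=fence clone interleave=true ordered=true
  pcs resource create clvmd ocf:heartbeat:clvm \
      op monitor interval=30s on-fail=fence clone interleave=true ordered=true
  pcs constraint order start dlm-clone then clvmd-clone
  pcs constraint colocation add clvmd-clone with dlm-clone

  # the shared gfs2 filesystem (ISOs, VM XML definitions) mounted on all nodes
  pcs resource create shared_fs ocf:heartbeat:Filesystem \
      device=/dev/vg_shared/lv_shared directory=/shared fstype=gfs2 \
      options=noatime clone interleave=true
  pcs constraint order start clvmd-clone then shared_fs-clone
  pcs constraint colocation add shared_fs-clone with clvmd-clone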


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



[ClusterLabs] Antw: Re: [Slightly OT] OCFS2 over LVM

2015-08-25 Thread Ulrich Windl
>>> Digimer wrote on 24.08.2015 at 18:20 in message
<55db4453.10...@alteeve.ca>:
[...]
> Using a pair of nodes with a traditional file system exported by NFS and
> made accessible by a floating (virtual) IP address gives you redundancy
> without incurring the complexity and performance overhead of cluster
> locking. You won't need clvmd either. The trade-off, though, is
> that if/when the primary fails, the nfs daemon will appear to the users
> to restart, and that may require a reconnection (I'm not sure; I use nfs
> sparingly).

But that's a cheap trick: you say don't provide HA storage (a cluster FS), but
use an existing one (NFS). How do you build an HA NFS server? You need another
cluster. Not everybody has that many nodes available.

> 
> Generally speaking, I recommend avoiding cluster FSes unless they're
> really required. I say this as a person who uses gfs2 in every cluster I
> build, but I do so carefully and in limited ways. In my case, gfs2 backs
> ISOs and XML definition files for VMs, things that change rarely, so the
> cluster-locking overhead is all but a non-issue, and I have to have DLM
> for clustered LVM anyway, so I've already incurred the complexity cost;
> so hey, why not.

___
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org