Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Paul,

Thanks for the response.

You mention that the issue is orphaned files during updates when one node
is down.  However, I am less concerned about adding and removing files,
because the file server will be predominantly VM disks, so the file
structure is fairly static.  Those VM files will be quite active, however -
will gluster be able to keep track of partial updates to a large file when
one of the two bricks is down?

Right now I am leaning towards using SSD for host-local disk: single-brick
gluster volumes intended for VMs which are node-specific, and then 3-way
replicas for the higher-availability zones, which tend to be more read
oriented.   I presume that read-only access only needs to get data from one
of the 3 replicas, so that should be reasonably performant.
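
For reference, a 3-way replica volume along those lines would be created
something like this (hostnames and brick paths below are placeholders, not
my actual layout):

    # 3-way replicated volume with one brick on each gluster server
    gluster volume create vmstore replica 3 \
        host1:/bricks/vmstore host2:/bricks/vmstore host3:/bricks/vmstore
    gluster volume start vmstore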

Thanks,
David



On Thu, Aug 28, 2014 at 6:13 PM, Paul Robert Marino prmari...@gmail.com
wrote:

 I'll try to answer some of these.
 1) It's not a serious problem per se. The issue is that if one node goes
 down and you delete a file while the second node is down, the file will
 be restored when the second node comes back, which may leave orphaned
 files; whereas if you use 3 servers, they will use quorum to figure out
 what needs to be restored or deleted. Furthermore, your read and write
 performance may suffer, especially in comparison to having 1 replica of
 the file with striping.
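
 With 3 servers, the quorum behaviour can also be made explicit through
 volume options - a sketch using the standard AFR quorum settings (VOLNAME
 is a placeholder; check 'gluster volume set help' on your version):

     # refuse writes unless a majority of the bricks in the replica set
     # are up, so a lone surviving brick cannot diverge
     gluster volume set VOLNAME cluster.quorum-type auto
     # stop bricks on a server that loses quorum with its trusted pool
     gluster volume set VOLNAME cluster.server-quorum-type server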

 2) See answer 1, and just create the volume with 1 replica (replica
 count 2 in gluster syntax) and only include the URIs for bricks on two of
 the hosts when you create it.
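
 For example (placeholder hostnames and brick paths again):

     # 2-brick replicated volume using only two of the three peers
     gluster volume create ssdvol replica 2 \
         host1:/bricks/ssdvol host2:/bricks/ssdvol
     gluster volume start ssdvol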

 3) I think so, but I have never tried it; you just have to define it as
 a local storage domain.

 4) Well, that's a philosophical question. You can in theory have two
 hosted engines on separate VMs on two separate physical boxes, but if for
 any reason they both go down you will be living in interesting times (as
 in the Chinese curse).

 5) YES! And have more than one.
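
 If memory serves, additional hosted-engine hosts are added by running the
 setup on each extra box and telling it that it is an additional host; the
 HA state can then be checked from any of them (commands from memory, so
 verify against the oVirt docs for your release):

     # run on each additional host to join the hosted-engine HA pool
     hosted-engine --deploy
     # show which hosts can run the engine VM, and their HA scores
     hosted-engine --vm-status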

 -- Sent from my HP Pre3

 --
 On Aug 28, 2014 9:39 AM, David King da...@rexden.us wrote:

 Hi,

 I am currently testing oVirt 3.4.3 + gluster 3.5.2 on a single host, for
 use in my relatively small home office environment.  I have 2 Intel hosts
 with SSD and magnetic disk and one AMD host with only magnetic disk.  I
 have been trying to figure out the best way to configure my environment,
 given that my previous attempt with oVirt 3.3 encountered storage issues.

 I will be hosting two types of VMs - VMs that can be tied to a particular
 system (such as a 3-node FreeIPA domain or some test VMs), and VMs which
 could migrate between systems for improved uptime.

 The processor issue seems straightforward.  Have a single datacenter with
 two clusters - one for the Intel systems and one for the AMD systems.  Put
 VMs which need to live migrate on the Intel cluster.  If necessary, VMs
 can be manually switched between the Intel and AMD clusters with some
 downtime.

 The Gluster side of the storage seems less clear.  The bulk of the
 gluster-with-oVirt issues I have experienced, and have seen on the list,
 seem to involve two-node setups with 2 bricks in the Gluster volume.

 So here are my questions:

 1) Should I avoid 2 brick Gluster volumes?

 2) What is the risk in having the SSD volumes with only 2 bricks given
 that there would be 3 gluster servers?  How should I configure them?

 3) Is there a way to use local storage for a host-locked VM other than
 creating a gluster volume with one brick?

 4) Should I avoid using the hosted engine configuration?  I do have an
 external VMware ESXi system to host the engine for now but would like to
 phase it out eventually.

 5) If I do the hosted engine, should I make the underlying gluster volume
 3-brick replicated?

 Thanks in advance for any help you can provide.

 -David



Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread Vijay Bellur

On 08/29/2014 07:34 PM, David King wrote:

Paul,

Thanks for the response.

You mention that the issue is orphaned files during updates when one
node is down.  However, I am less concerned about adding and removing
files, because the file server will be predominantly VM disks, so the
file structure is fairly static.  Those VM files will be quite active,
however - will gluster be able to keep track of partial updates to a
large file when one of the two bricks is down?



Yes, gluster only updates regions of the file that need to be 
synchronized during self-healing. More details on this synchronization 
can be found in the self-healing section of afr's design document [1].
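
The pending heals can also be inspected from the command line while a
brick is down or recovering, e.g. (VOLNAME is a placeholder):

    # list entries queued for self-heal on each brick of VOLNAME
    gluster volume heal VOLNAME info
    # optionally trigger a full heal after the brick comes back
    gluster volume heal VOLNAME full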



Right now I am leaning towards using SSD for host-local disk: single-brick
gluster volumes intended for VMs which are node-specific, and then
3-way replicas for the higher-availability zones, which tend to be more
read oriented.   I presume that read-only access only needs to get data
from one of the 3 replicas, so that should be reasonably performant.


Yes, read operations are directed to only one of the replicas.
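
Which replica serves a given read is also tunable through the
cluster.read-hash-mode volume option; the available values vary by
version, so check 'gluster volume set help' before relying on this:

    # spread inode-read operations across replicas by hashing
    # (see 'gluster volume set help' for the value meanings)
    gluster volume set VOLNAME cluster.read-hash-mode 2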

Regards,
Vijay

[1] https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md



Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread Paul Robert Marino
On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur vbel...@redhat.com wrote:
 On 08/29/2014 07:34 PM, David King wrote:

 Paul,

 Thanks for the response.

 You mention that the issue is orphaned files during updates when one
 node is down.  However, I am less concerned about adding and removing
 files, because the file server will be predominantly VM disks, so the
 file structure is fairly static.  Those VM files will be quite active,
 however - will gluster be able to keep track of partial updates to a
 large file when one of the two bricks is down?


 Yes, gluster only updates regions of the file that need to be synchronized
 during self-healing. More details on this synchronization can be found in
 the self-healing section of afr's design document [1].


 Right now I am leaning towards using SSD for host-local disk: single-brick
 gluster volumes intended for VMs which are node-specific, and then

I wouldn't use single-brick gluster volumes for local disk; you don't
need it, and it will actually make things more complicated with no real
benefit.

 3-way replicas for the higher-availability zones, which tend to be more
 read oriented.   I presume that read-only access only needs to get data
 from one of the 3 replicas, so that should be reasonably performant.


 Yes, read operations are directed to only one of the replicas.

 Regards,
 Vijay

 [1] https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md



Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Hi Paul,

I would prefer to do a direct mount for local disk.  However, I am not
certain how to configure a single system with both local storage and
gluster-replicated storage.

- The “Configure Local Storage”  option for Hosts wants to make a datacenter 
and cluster for the system.  I presume that’s because oVirt wants to be able to 
mount the storage on all hosts in a datacenter.  

- Configuring a POSIX storage domain with local disk does not work as oVirt 
wants to mount the disk on all systems in the datacenter.  

I suppose my third option would be to run these systems as plain libvirt
VMs and not manage them with oVirt.  This is fairly reasonable, as I use
Foreman for provisioning, except that I will need to figure out how to make
the two co-exist.  Has anyone tried this?
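
In that case the host-locked guests would just be ordinary libvirt domains
defined outside oVirt, roughly like this (the domain name and XML path are
hypothetical, and vdsm also manages libvirtd on oVirt hosts, so the
co-existence would need testing):

    # define and start a host-local guest directly under libvirt
    virsh define /path/to/freeipa1.xml
    virsh start freeipa1
    # have libvirtd start it at boot, independently of oVirt
    virsh autostart freeipa1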

Am I missing other options for local non-replicated disk?  

Thanks,
David

-- 
David King


[ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-28 Thread David King
Hi,

I am currently testing oVirt 3.4.3 + gluster 3.5.2 on a single host, for use
in my relatively small home office environment.  I have 2 Intel hosts with
SSD and magnetic disk and one AMD host with only magnetic disk.  I have been
trying to figure out the best way to configure my environment, given that my
previous attempt with oVirt 3.3 encountered storage issues.

I will be hosting two types of VMs - VMs that can be tied to a particular
system (such as a 3-node FreeIPA domain or some test VMs), and VMs which
could migrate between systems for improved uptime.

The processor issue seems straightforward.  Have a single datacenter with
two clusters - one for the Intel systems and one for the AMD systems.  Put
VMs which need to live migrate on the Intel cluster.  If necessary, VMs can
be manually switched between the Intel and AMD clusters with some downtime.

The Gluster side of the storage seems less clear.  The bulk of the
gluster-with-oVirt issues I have experienced, and have seen on the list,
seem to involve two-node setups with 2 bricks in the Gluster volume.

So here are my questions:

1) Should I avoid 2 brick Gluster volumes?

2) What is the risk in having the SSD volumes with only 2 bricks given that
there would be 3 gluster servers?  How should I configure them?

3) Is there a way to use local storage for a host-locked VM other than
creating a gluster volume with one brick?

4) Should I avoid using the hosted engine configuration?  I do have an
external VMware ESXi system to host the engine for now but would like to
phase it out eventually.

5) If I do the hosted engine, should I make the underlying gluster volume
3-brick replicated?

Thanks in advance for any help you can provide.

-David

