Re: [ovirt-users] Error trying to add new hosted-engine host to upgraded oVirt cluster

2014-11-13 Thread David King
Hi,

Thanks for the patch - that appears to solve the lockspace problem.
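
For anyone else who needs the fix before an updated build is available, this is
roughly how I picked it up (a sketch only - the patchset number and the install
path are assumptions on my side, so check gerrit for the current revision):

# Fetch the change from gerrit (change 35104, patchset 1 assumed):
git clone git://gerrit.ovirt.org/ovirt-hosted-engine-setup
cd ovirt-hosted-engine-setup
git fetch origin refs/changes/04/35104/1
git diff FETCH_HEAD^ FETCH_HEAD > /tmp/35104.patch
# Then carry the touched plugin file(s) over to the installed tree under
# /usr/share/ovirt-hosted-engine-setup (or rebuild the RPM) before re-running
# hosted-engine --deploy on the new host.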

Once I got past that error, I hit a second error at the CPU selection phase:

[ ERROR ] Failed to execute stage 'Environment customization': Invalid CPU type specified: None

Investigating further, I realized that I had not pulled the answer file from
the first host.  Once I switched to the first host, I added the new host
without any issues.
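
In case it helps anyone following along, the shape of the working run was
roughly this (a sketch - the host name is made up and the answer-file path is
just the usual default):

# Optional sanity check: look at the answer file on the first (3.5-upgraded)
# host before pointing the new host at it.
ssh root@host1.example.com 'cat /etc/ovirt-hosted-engine/answers.conf'

# Then, on the additional host, with the patched setup in place:
hosted-engine --deploy
# When the setup detects the existing hosted-engine storage it asks which
# existing host to fetch the answer file from over ssh; give it the first
# host rather than reusing a stale local copy.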

Thanks for the help,

-David

On Thu, Nov 13, 2014 at 3:36 AM, Sandro Bonazzola sbona...@redhat.com
wrote:

 Il 13/11/2014 09:12, Jiri Moskovcak ha scritto:
  On 11/12/2014 04:20 PM, Sandro Bonazzola wrote:
  Il 12/11/2014 16:10, David King ha scritto:
  Hi everyone,
 
  I have upgraded my oVirt 3.4 hosted engine cluster to oVirt 3.5 using
 the
  upgrade instructions on the Wiki.  Everything appears to be working
 fine
  after the upgrade.
 
  However, I am now trying to add a new host to the hosted engine
  configuration but the hosted-engine --deploy fails after sshing the
 answers
  file from the upgraded primary configuration.  The following errors
 can be
  found in the setup log:
 
 
  Answer file lacks lockspace UUIDs, please use an answer file generated
 from
  the same version you are using on this additional host
 
  Can you please open a BZ about this issue so we can track it?
  Jiri, Martin, is the file backend able to handle this kind of upgrade?
 
  Hi Sandro,
  yes, it is able to handle it; the lockspace UUID is needed only for
  iscsi (lvm based) storage, which is not the case when upgrading from 3.4, so
  we should be safe skipping the check for the lockspace UUID in the setup if
  the storage is on nfs.
 
  --Jirka
 
  @David, I'm afraid the setup is not able to add a host to the cluster
  created in 3.4; the workaround might be to deploy the host with the setup
  from 3.4 and then update it. Sorry for the inconvenience :-/

 David, can you apply this patch http://gerrit.ovirt.org/35104 on the host
 you're adding to the cluster?
 It should solve your issue.

 
  --Jirka
 
 
 
 
 
  I confirmed that the answers file on the upgraded host does not have
 any
  lockspace UUIDs:
 
  OVEHOSTED_STORAGE/storageDatacenterName=str:hosted_datacenter
  OVEHOSTED_STORAGE/storageDomainName=str:hosted_storage
  OVEHOSTED_STORAGE/storageType=none:None
  OVEHOSTED_STORAGE/volUUID=str:da160775-07fe-4569-b45f-03be0c5896a5
  OVEHOSTED_STORAGE/domainType=str:nfs3
  OVEHOSTED_STORAGE/imgSizeGB=str:25
  OVEHOSTED_STORAGE/storageDomainConnection=str:192.168.8.12:/mnt/data2/vm/engine
 
 OVEHOSTED_STORAGE/connectionUUID=str:880093ea-b0c1-448d-ac55-cde99feebc23
  OVEHOSTED_STORAGE/spUUID=str:5e7ff7c2-6e75-4ba8-a5cc-e8dc5d37e478
  OVEHOSTED_STORAGE/imgUUID=str:c9466bb6-a78c-4caa-bce3-22c87a5f3f1a
  OVEHOSTED_STORAGE/sdUUID=str:b12fd59c-380a-40b3-b7f2-02d455de1d3b
 
 
  Is there something I can do to update the answers file on the updated
 3.5
  working host so this will work?
 
  Thanks,
  David
 
  PS: Here is the relevant section of the hosted-engine setup log file:
 
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138
 Stage
  validation METHOD
 
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._validation
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:152
 method
  exception
  Traceback (most recent call last):
 File /usr/lib/python2.7/site-packages/otopi/context.py, line
 142, in
  _executeMethod
   method['method']()
 File
 
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/sanlock/lockspace.py,
  line 102, in _validation
   'Answer file lacks lockspace UUIDs, please use an '
  RuntimeError: Answer file lacks lockspace UUIDs, please use an answer
 file
  generated from the same version you are using on this additional host
  2014-11-11 22:57:04 ERROR otopi.context context._executeMethod:161
 Failed
  to execute stage 'Setup validation': Answer file lacks lockspace
 UUIDs,
  please use an answer file generated from the same version you are
 using on
  this additional host
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:490
  ENVIRONMENT DUMP - BEGIN
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500
 ENV
  BASE/error=bool:'True'
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500
 ENV
  BASE/exceptionInfo=list:'[(type 'exceptions.RuntimeError',
  RuntimeError('Answer file lacks lockspace UUIDs, please use an answer
 file
  generated from the same version you are using on this additional
 host',),
  traceback object at 0x34c85a8)]'
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:504
  ENVIRONMENT DUMP - END
  2014-11-11 22:57:04 INFO otopi.context context.runSequence:417 Stage:
  Clean up
  2014-11-11 22:57:04 DEBUG otopi.context context.runSequence:421 STAGE
  cleanup
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138
 Stage
  cleanup METHOD
 
 otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin

[ovirt-users] Error trying to add new hosted-engine host to upgraded oVirt cluster

2014-11-12 Thread David King
Hi everyone,

I have upgraded my oVirt 3.4 hosted engine cluster to oVirt 3.5 using the
upgrade instructions on the Wiki.  Everything appears to be working fine
after the upgrade.

However, I am now trying to add a new host to the hosted-engine
configuration, but hosted-engine --deploy fails after fetching the answer
file over ssh from the upgraded primary host.  The following errors can be
found in the setup log:


Answer file lacks lockspace UUIDs, please use an answer file generated from
 the same version you are using on this additional host


I confirmed that the answers file on the upgraded host does not have any
lockspace UUIDs:

OVEHOSTED_STORAGE/storageDatacenterName=str:hosted_datacenter
 OVEHOSTED_STORAGE/storageDomainName=str:hosted_storage
 OVEHOSTED_STORAGE/storageType=none:None
 OVEHOSTED_STORAGE/volUUID=str:da160775-07fe-4569-b45f-03be0c5896a5
 OVEHOSTED_STORAGE/domainType=str:nfs3
 OVEHOSTED_STORAGE/imgSizeGB=str:25
 OVEHOSTED_STORAGE/storageDomainConnection=str:192.168.8.12:/mnt/data2/vm/engine
 OVEHOSTED_STORAGE/connectionUUID=str:880093ea-b0c1-448d-ac55-cde99feebc23
 OVEHOSTED_STORAGE/spUUID=str:5e7ff7c2-6e75-4ba8-a5cc-e8dc5d37e478
 OVEHOSTED_STORAGE/imgUUID=str:c9466bb6-a78c-4caa-bce3-22c87a5f3f1a
 OVEHOSTED_STORAGE/sdUUID=str:b12fd59c-380a-40b3-b7f2-02d455de1d3b


Is there something I can do to update the answer file on the working host
that was upgraded to 3.5 so that this will work?

Thanks,
David

PS: Here is the relevant section of the hosted-engine setup log file:

2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
 validation METHOD
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._validation
 2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:152 method
 exception
 Traceback (most recent call last):
   File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in
 _executeMethod
 method['method']()
   File
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/sanlock/lockspace.py,
 line 102, in _validation
 'Answer file lacks lockspace UUIDs, please use an '
 RuntimeError: Answer file lacks lockspace UUIDs, please use an answer file
 generated from the same version you are using on this additional host
 2014-11-11 22:57:04 ERROR otopi.context context._executeMethod:161 Failed
 to execute stage 'Setup validation': Answer file lacks lockspace UUIDs,
 please use an answer file generated from the same version you are using on
 this additional host
 2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:490
 ENVIRONMENT DUMP - BEGIN
 2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/error=bool:'True'
 2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500 ENV
 BASE/exceptionInfo=list:'[(type 'exceptions.RuntimeError',
 RuntimeError('Answer file lacks lockspace UUIDs, please use an answer file
 generated from the same version you are using on this additional host',),
 traceback object at 0x34c85a8)]'
 2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:504
 ENVIRONMENT DUMP - END
 2014-11-11 22:57:04 INFO otopi.context context.runSequence:417 Stage:
 Clean up
 2014-11-11 22:57:04 DEBUG otopi.context context.runSequence:421 STAGE
 cleanup
 2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
 2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
 2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
 2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
 2014-11-11 22:57:04 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._spmStop:609 spmStop
 2014-11-11 22:57:04 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._cleanup:970 Not SPM?
 Traceback (most recent call last):
   File
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py,
 line 968, in _cleanup
 self._spmStop()
   File
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py,
 line 617, in _spmStop
 raise RuntimeError(status_uuid)
 RuntimeError: Not SPM
 2014-11-11 22:57:04 DEBUG
 otopi.plugins.ovirt_hosted_engine_setup.storage.storage
 storage._storagePoolConnection:580 disconnectStoragePool
 2014-11-11 22:57:08 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._cleanup
 2014-11-11 22:57:08 DEBUG otopi.context context._executeMethod:138 Stage
 cleanup METHOD
 

Re: [ovirt-users] Error trying to add new hosted-engine host to upgraded oVirt cluster

2014-11-12 Thread David King
Hi,

I have created:

Bug 1163385 https://bugzilla.redhat.com/show_bug.cgi?id=1163385 - Error
trying to add new hosted-engine host to upgraded oVirt cluster

It is my first Red Hat Bugzilla bug, so please let me know if there is
anything I should do differently in the future.

Thanks,
David


On Wed, Nov 12, 2014 at 10:20 AM, Sandro Bonazzola sbona...@redhat.com
wrote:

 Il 12/11/2014 16:10, David King ha scritto:
  Hi everyone,
 
  I have upgraded my oVirt 3.4 hosted engine cluster to oVirt 3.5 using the
  upgrade instructions on the Wiki.  Everything appears to be working fine
  after the upgrade.
 
  However, I am now trying to add a new host to the hosted engine
  configuration but the hosted-engine --deploy fails after sshing the
 answers
  file from the upgraded primary configuration.  The following errors can
 be
  found in the setup log:
 
 
  Answer file lacks lockspace UUIDs, please use an answer file generated
 from
  the same version you are using on this additional host

 Can you please open a BZ about this issue so we can track it?
 Jiri, Martin, is the file backend able to handle this kind of upgrade?


 
 
  I confirmed that the answers file on the upgraded host does not have any
  lockspace UUIDs:
 
  OVEHOSTED_STORAGE/storageDatacenterName=str:hosted_datacenter
  OVEHOSTED_STORAGE/storageDomainName=str:hosted_storage
  OVEHOSTED_STORAGE/storageType=none:None
  OVEHOSTED_STORAGE/volUUID=str:da160775-07fe-4569-b45f-03be0c5896a5
  OVEHOSTED_STORAGE/domainType=str:nfs3
  OVEHOSTED_STORAGE/imgSizeGB=str:25
  OVEHOSTED_STORAGE/storageDomainConnection=str:192.168.8.12:/mnt/data2/vm/engine
 
 OVEHOSTED_STORAGE/connectionUUID=str:880093ea-b0c1-448d-ac55-cde99feebc23
  OVEHOSTED_STORAGE/spUUID=str:5e7ff7c2-6e75-4ba8-a5cc-e8dc5d37e478
  OVEHOSTED_STORAGE/imgUUID=str:c9466bb6-a78c-4caa-bce3-22c87a5f3f1a
  OVEHOSTED_STORAGE/sdUUID=str:b12fd59c-380a-40b3-b7f2-02d455de1d3b
 
 
  Is there something I can do to update the answers file on the updated 3.5
  working host so this will work?
 
  Thanks,
  David
 
  PS: Here is the relevant section of the hosted-engine setup log file:
 
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
  validation METHOD
 
 otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace.Plugin._validation
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:152
 method
  exception
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/otopi/context.py, line 142, in
  _executeMethod
  method['method']()
File
 
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/sanlock/lockspace.py,
  line 102, in _validation
  'Answer file lacks lockspace UUIDs, please use an '
  RuntimeError: Answer file lacks lockspace UUIDs, please use an answer
 file
  generated from the same version you are using on this additional host
  2014-11-11 22:57:04 ERROR otopi.context context._executeMethod:161
 Failed
  to execute stage 'Setup validation': Answer file lacks lockspace UUIDs,
  please use an answer file generated from the same version you are using
 on
  this additional host
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:490
  ENVIRONMENT DUMP - BEGIN
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500 ENV
  BASE/error=bool:'True'
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:500 ENV
  BASE/exceptionInfo=list:'[(type 'exceptions.RuntimeError',
  RuntimeError('Answer file lacks lockspace UUIDs, please use an answer
 file
  generated from the same version you are using on this additional
 host',),
  traceback object at 0x34c85a8)]'
  2014-11-11 22:57:04 DEBUG otopi.context context.dumpEnvironment:504
  ENVIRONMENT DUMP - END
  2014-11-11 22:57:04 INFO otopi.context context.runSequence:417 Stage:
  Clean up
  2014-11-11 22:57:04 DEBUG otopi.context context.runSequence:421 STAGE
  cleanup
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
  cleanup METHOD
 
 otopi.plugins.ovirt_hosted_engine_setup.core.remote_answerfile.Plugin._cleanup
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
  cleanup METHOD
  otopi.plugins.ovirt_hosted_engine_setup.engine.add_host.Plugin._cleanup
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
  cleanup METHOD
  otopi.plugins.ovirt_hosted_engine_setup.pki.vdsmpki.Plugin._cleanup
  2014-11-11 22:57:04 DEBUG otopi.context context._executeMethod:138 Stage
  cleanup METHOD
  otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._cleanup
  2014-11-11 22:57:04 DEBUG
  otopi.plugins.ovirt_hosted_engine_setup.storage.storage
  storage._spmStop:609 spmStop
  2014-11-11 22:57:04 DEBUG
  otopi.plugins.ovirt_hosted_engine_setup.storage.storage
  storage._cleanup:970 Not SPM?
  Traceback (most recent call last):
File
 
 /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py,
  line 968, in _cleanup

Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Paul,

Thanks for the response.

You mention that the issue is orphaned files during updates when one node
is down.  However, I am less concerned about adding and removing files,
because the file server will be predominantly VM disks, so the file
structure is fairly static.  Those VM files will be quite active, however -
will gluster be able to keep track of partial updates to a large file when
one out of two bricks is down?
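
To make the question concrete, these are the sorts of checks I would expect
to lean on while a brick that was down catches back up (standard gluster CLI;
the volume name is made up):

# Entries that still need to be healed after a brick outage:
gluster volume heal vm-images info
# Self-heal statistics for the volume:
gluster volume heal vm-images statistics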

Right now I am leaning towards using the SSDs for host-local disk - single-brick
gluster volumes intended for VMs which are node-specific - and then 3-way
replicas for the higher-availability zones, which tend to be more
read-oriented.  I presume that read-only access only needs to get data from
one of the 3 replicas, so that should be reasonably performant.
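
Expressed as gluster CLI calls, the layout I am leaning towards would look
something like this (a sketch only; hostnames, volume names and brick paths
are made up):

# 3-way replicated volume (one brick per server) for the VMs that need to
# float between hosts:
gluster volume create vm-ha replica 3 \
    host1:/bricks/vm-ha host2:/bricks/vm-ha host3:/bricks/vm-ha
gluster volume set vm-ha cluster.quorum-type auto
gluster volume set vm-ha cluster.server-quorum-type server
gluster volume start vm-ha

# Per-host, single-brick volume on the local SSD for VMs pinned to one host:
gluster volume create vm-host1-local host1:/bricks/ssd/vm-local
gluster volume start vm-host1-local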

Thanks,
David



On Thu, Aug 28, 2014 at 6:13 PM, Paul Robert Marino prmari...@gmail.com
wrote:

 I'll try to answer some of these.
 1) It's not a serious problem per se. The issue is that if one node goes
 down and you delete a file while it is down, the file will be restored when
 that node comes back, which may cause orphaned files, whereas if you use 3
 servers they will use quorum to figure out what needs to be restored or
 deleted. Furthermore, your read and write performance may suffer, especially
 in comparison to having 1 replica of the file with striping.

 2) see answer 1 and just create the volume with 1 replica and only include
 the URI for bricks on two of the hosts when you create it.

 3) I think so, but I have never tried it; you just have to define it as a
 local storage domain.

 4) Well, that's a philosophical question. You can in theory have two hosted
 engines on separate VMs on two separate physical boxes, but if for any
 reason they both go down you will be living in interesting times (as in
 the Chinese curse).

 5) YES! And have more than one.

 -- Sent from my HP Pre3

 --
 On Aug 28, 2014 9:39 AM, David King da...@rexden.us wrote:

 Hi,

 I am currently testing oVirt 3.4.3 + gluster 3.5.2 for use in my
 relatively small home office environment on a single host.  I have 2  Intel
 hosts with SSD and magnetic disk and one AMD host with only magnetic disk.
  I have been trying to figure out the best way to configure my environment
 given my previous attempt with oVirt 3.3 encountered storage issues.

 I will be hosting two types of VMs - VMs that can be tied to a particular
 system (such as 3 node FreeIPA domain or some test VMs), and VMs which
 could migrate between systems for improved uptime.

 The processor issue seems straightforward.  Have a single datacenter with
 two clusters - one for the Intel systems and one for the AMD systems.  Put
 VMs which need to live migrate on the Intel cluster.  If necessary VMs can
 be manually switched between the Intel and AMD cluster with a downtime.

 The Gluster side of the storage seems less clear.  The bulk of the gluster
 with oVirt issues I experienced and have seen on the list seem to be two
 node setups with 2 bricks in the Gluster volume.

 So here are my questions:

 1) Should I avoid 2 brick Gluster volumes?

 2) What is the risk in having the SSD volumes with only 2 bricks given
 that there would be 3 gluster servers?  How should I configure them?

 3) Is there a way to use local storage for a host locked VM other than
 creating a gluster volume with one brick?

 4) Should I avoid using the hosted engine configuration?  I do have an
 external VMware ESXi system to host the engine for now but would like to
 phase it out eventually.

 5) If I do the hosted engine should I make the underlying gluster volume 3
 brick replicated?

 Thanks in advance for any help you can provide.

 -David



Re: [ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-29 Thread David King
Hi Paul,

I would prefer to do a direct mount for local disk.  However, I am not certain
how to configure a single system with both local storage and gluster-replicated
storage.

- The “Configure Local Storage”  option for Hosts wants to make a datacenter 
and cluster for the system.  I presume that’s because oVirt wants to be able to 
mount the storage on all hosts in a datacenter.  

- Configuring a POSIX storage domain with local disk does not work as oVirt 
wants to mount the disk on all systems in the datacenter.  

I suppose my third option would be to run these systems as plain libvirt VMs and
not manage them with oVirt.  This is fairly reasonable, as I use Foreman for
provisioning, except that I will need to figure out how to make oVirt and
libvirt co-exist.  Has anyone tried this?
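
For what it's worth, the kind of thing I have in mind is just a plain guest
defined outside oVirt (a sketch; names, sizes and paths are made up, and I am
assuming the guest can simply sit on the ovirtmgmt bridge):

# Define a local, non-replicated guest directly in libvirt, bypassing oVirt
# storage domains entirely:
virt-install --name ipa1 --ram 2048 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/ipa1.qcow2,size=20,format=qcow2 \
    --network bridge=ovirtmgmt \
    --cdrom /var/lib/libvirt/images/Fedora-20-x86_64-netinst.iso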

Am I missing other options for local non-replicated disk?  

Thanks,
David

-- 
David King
On August 29, 2014 at 3:01:49 PM, Paul Robert Marino (prmari...@gmail.com) 
wrote:

On Fri, Aug 29, 2014 at 12:25 PM, Vijay Bellur vbel...@redhat.com wrote:  
 On 08/29/2014 07:34 PM, David King wrote:  
  
 Paul,  
  
 Thanks for the response.  
  
 You mention that the issue is orphaned files during updates when one  
 node is down. However I am less concerned about adding and removing  
 files because the file server will be predominately VM disks so the file  
 structure is fairly static. Those VM files will be quite active however  
 - will gluster be able to keep track of partial updates to a large file  
 when one out of two bricks are down?  
  
  
 Yes, gluster only updates regions of the file that need to be synchronized  
 during self-healing. More details on this synchronization can be found in  
 the self-healing section of afr's design document [1].  
  
  
 Right now I am leaning towards using SSD for host local disk - single  
 brick gluster volumes intended for VMs which are node specific and then  

I wouldn't use single-brick gluster volumes for local disk; you don't
need them, and it will actually make things more complicated with no real
benefit.

 3 way replicas for the higher availability zones which tend to be more  
 read oriented. I presume that read-only access only needs to get data  
 from one of the 3 replicas so that should be reasonably performant.  
  
  
 Yes, read operations are directed to only one of the replicas.  
  
 Regards,  
 Vijay  
  
 [1] https://github.com/gluster/glusterfs/blob/master/doc/features/afr-v1.md  
  


[ovirt-users] oVirt/gluster storage questions for 2-3 node datacenter

2014-08-28 Thread David King
Hi,

I am currently testing oVirt 3.4.3 + gluster 3.5.2 for use in my relatively
small home office environment on a single host.  I have 2  Intel hosts with
SSD and magnetic disk and one AMD host with only magnetic disk.  I have
been trying to figure out the best way to configure my environment given my
previous attempt with oVirt 3.3 encountered storage issues.

I will be hosting two types of VMs - VMs that can be tied to a particular
system (such as 3 node FreeIPA domain or some test VMs), and VMs which
could migrate between systems for improved uptime.

The processor issue seems straightforward.  Have a single datacenter with
two clusters - one for the Intel systems and one for the AMD systems.  Put
VMs which need to live migrate on the Intel cluster.  If necessary VMs can
be manually switched between the Intel and AMD cluster with a downtime.

The Gluster side of the storage seems less clear.  The bulk of the gluster
with oVirt issues I experienced and have seen on the list seem to be two
node setups with 2 bricks in the Gluster volume.

So here are my questions:

1) Should I avoid 2 brick Gluster volumes?

2) What is the risk in having the SSD volumes with only 2 bricks given that
there would be 3 gluster servers?  How should I configure them?

3) Is there a way to use local storage for a host locked VM other than
creating a gluster volume with one brick?

4) Should I avoid using the hosted engine configuration?  I do have an
external VMware ESXi system to host the engine for now but would like to
phase it out eventually.

5) If I do the hosted engine should I make the underlying gluster volume 3
brick replicated?

Thanks in advance for any help you can provide.

-David