Hi John,
You are right: if you want to create many VMs from a template, you can
create a pool.
I think the main difference between creating a single VM and creating a
pool is that in a pool you cannot create a VM with cloned disks.
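For reference, a pool can also be created through the REST API instead of
the webadmin UI. A rough Python sketch (the engine URL, credentials,
cluster/template names and pool size below are only placeholders, and the
exact XML fields are worth double-checking against the 3.4 REST API docs):

    import requests

    ENGINE = "https://engine.example.com/api"   # placeholder engine URL
    AUTH = ("admin@internal", "password")       # placeholder credentials

    # One <vmpool> element: a pool of VMs based on the given template
    # (pool VMs stay template-based, which is why cloned disks are not
    # an option here).
    body = """
    <vmpool>
      <name>web-pool</name>
      <size>10</size>
      <cluster><name>Default</name></cluster>
      <template><name>centos65-tmpl</name></template>
    </vmpool>
    """

    resp = requests.post(ENGINE + "/vmpools", data=body,
                         headers={"Content-Type": "application/xml"},
                         auth=AUTH, verify=False)
    print(resp.status_code)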
regards,
Maor
On 06/09/2014 08:45 AM, John Xue wrote:
> Dear all,
Hi,
Context here :
- 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
- connected to some LUNs in iSCSI on a dedicated physical network
Every host has two interfaces used for management and end-user LAN
activity. Every host also has 4 additional NICs dedicated to the iSCSI
I just blocked the connection to storage for testing, but as a result I got this
error: "Failed to acquire lock error -243", so I added it to the steps to reproduce.
If you know of other steps to reproduce this error without blocking the connection
to storage, it would also be wonderful if you could provide them.
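In case it helps to reproduce this on another setup: blocking the host's
path to the storage can be done with a temporary firewall rule on the host.
A minimal sketch (the portal address is a placeholder, to be run as root on
a test host only):

    import subprocess

    STORAGE_IP = "192.0.2.10"   # placeholder iSCSI portal / NFS server address

    # Drop all outgoing traffic from this host to the storage address.
    subprocess.check_call(
        ["iptables", "-A", "OUTPUT", "-d", STORAGE_IP, "-j", "DROP"])

    # ... wait for the "Failed to acquire lock" error to show up, then undo:
    subprocess.check_call(
        ["iptables", "-D", "OUTPUT", "-d", STORAGE_IP, "-j", "DROP"])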
Thanks
OK, I have good news and bad news :)
The good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of the I/O
is running through the SPM, but I need to test that. Simply put, for every
virtual disk that you create on the shared
The bad news happens only when running a VM for the first time, if it helps...
On 06/09/2014 01:30 PM, combuster wrote:
OK, I have good news and bad news :)
The good news is that I can run different VMs on different nodes when all
of their drives are on the FC storage domain. I don't think that all of
I
Hi Nicolas,
Which DC level are you using?
iSCSI multipath is supported only from DC compatibility
version 3.4
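If it is quicker than clicking through the UI, the current level can also be
read from the REST API; a rough sketch (URL and credentials are placeholders):

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get("https://engine.example.com/api/datacenters",
                        auth=("admin@internal", "password"), verify=False)

    # Print each data center's name and compatibility version.
    for dc in ET.fromstring(resp.content).findall("data_center"):
        v = dc.find("version")
        print("%s %s.%s" % (dc.find("name").text, v.get("major"), v.get("minor")))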
regards,
Maor
On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:
> Hi,
>
> Context here :
> - 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
> - connec
Interesting, my storage network is L2 only and doesn't run on
ovirtmgmt (which is the only thing HostedEngine sees), but I've only
seen this issue when running ctdb in front of my NFS server. I was
previously using localhost, as all my hosts had the NFS server on
them (gluster).
On Mon, Jun 9, 2
On 09-06-2014 13:55, Maor Lipchuk wrote:
Hi Nicolas,
Which DC level are you using?
iSCSI multipath is supported only from DC compatibility
version 3.4
Hi Maor,
Oops, you're right, both of my 3.4 datacenters are using the 3.3 level.
I migrated recently.
How safe or risky is it to in
basically, you should upgrade your DC to 3.4, and then upgrade the
clusters you desire also to 3.4.
You might need to upgrade your hosts to be compatible with the cluster's
emulated machines, or they might become non-operational if qemu-kvm does
not support them.
Either way, you can always ask for adv
On 09-06-2014 14:44, Maor Lipchuk wrote:
basically, you should upgrade your DC to 3.4, and then upgrade the
clusters you desire also to 3.4.
Well, that seems to have worked, except I had to raise the cluster level
first, then the DC level.
Now, I can see the iSCSI multipath tab has appeared
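In case anyone wants to script the same bump on several setups, it can also
be done through the REST API, cluster first and then the data center, in the
same order as above. A rough Python sketch (engine URL, credentials and the
IDs are placeholders; double-check the element names against the 3.4 API docs):

    import requests

    ENGINE = "https://engine.example.com/api"   # placeholder
    AUTH = ("admin@internal", "password")       # placeholder
    HDRS = {"Content-Type": "application/xml"}
    VERSION = '<version major="3" minor="4"/>'

    # 1) Raise the cluster compatibility level first...
    requests.put(ENGINE + "/clusters/CLUSTER_ID",
                 data="<cluster>%s</cluster>" % VERSION,
                 headers=HDRS, auth=AUTH, verify=False)

    # 2) ...then raise the data center level.
    requests.put(ENGINE + "/datacenters/DC_ID",
                 data="<data_center>%s</data_center>" % VERSION,
                 headers=HDRS, auth=AUTH, verify=False)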
On Mon, Jun 9, 2014 at 9:23 AM, Nicolas Ecarnot wrote:
> On 09-06-2014 14:44, Maor Lipchuk wrote:
>
>> basically, you should upgrade your DC to 3.4, and then upgrade the
>> clusters you desire also to 3.4.
>
>
> Well, that seems to have worked, except I had to raise the cluster level
> first, t
Could anyone please confirm the correct process to run oVirt node on a standard
CentOS install, rather than using the node iso?
I'm currently doing the following:
- Install CentOS 6.5
- Install the qemu-kvm-rhev RPMs to resolve live snapshot issues with the
CentOS-supplied RPMs
So I understand that the news is still fresh and there may not be much
going on yet in making Ceph work with oVirt, but I thought I would reach
out and see if it was possible to hack them together and still use librbd
rather than NFS.
I know, why not just use Gluster... the problem is I have tried
Simon Barrett wrote:
Could anyone please confirm the correct process to run oVirt node on a
standard CentOS install, rather than using the node iso?
I'm currently doing the following:
- Install CentOS 6.5
- Install qemu-kvm-rhev rpm's to resolve live snapshot issue
Hi Bob,
Thanks for your feedback.
We fixed the issue and the new version of oVirt WGT ISO (3.5-2 alpha) is now
available from the oVirt website:
http://resources.ovirt.org/pub/ovirt-master-snapshot-static/iso/ovirt-guest-tools/ovirt-guest-tools-3.5-2.iso
as well as the updated installer:
http:
On 06/09/2014 01:28 PM, Nathan Stratton wrote:
So I understand that the news is still fresh and there may not be much
going on yet in making Ceph work with oVirt, but I thought I would reach
out and see if it was possible to hack them together and still use
librbd rather than NFS.
I know, why no
Hello,
at the moment we are investigating stalls of Windows XP VMs during
live migration. Our environment consists of:
- FC20 hypervisor nodes
- qemu 1.6.2
- oVirt 3.4.1
- Guest: Windows XP SP2
- VM Disks: Virtio & IDE tested
- SPICE / VNC: both tested
- Balloon: With & without tested
- Cluster c
Thanks, I will take a look at it. Is anyone else currently using Gluster for
backend images in production?
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
On Mon, Jun 9, 2014 at 2:55 PM, Itamar Heim wrote:
> On 06/09/2014 01:28 PM, Nathan Stratton wrote
So after adding the L3 capabilities to my storage network, I'm no
longer seeing this issue. So the engine needs to be able to
access the storage domain it sits on? But that doesn't show up in the
UI?
Ivan, was this also the case with your setup? Engine couldn't access
storage domain?
On M
nvm, just as I hit send the error has returned.
Ignore this..
On Tue, Jun 10, 2014 at 9:01 AM, Andrew Lau wrote:
> So after adding the L3 capabilities to my storage network, I'm no
> longer seeing this issue. So the engine needs to be able to
> access the storage domain it sits on? But th
Nah, I've explicitly allowed the hosted-engine VM to access the
NAS device as well as the NFS share itself, before the deploy procedure even
started. But I'm puzzled at how you can reproduce the bug; all was well
on my setup before I started a manual migration of the engine's VM. Even
auto migra
Hm, another update on this one. If I create another VM with another
virtual disk on a node that already has a VM running from the FC
storage, then libvirt doesn't break. I guess it just happens for the
first time on any of the nodes. If this is the case, I would have to
bring all of the VMs
I'm really having a hard time finding out why it's happening..
If I set the cluster to global for a minute or two, the scores will
reset back to 2400. Set maintenance mode to none, and all will be fine
until a migration occurs. It seems it tries to migrate, fails and sets
the score to 0 permanentl
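For anyone hitting the same thing, the global/none maintenance toggle
described above can also be driven from the standard hosted-engine CLI.
A small sketch of the sequence (to be run on one of the HA hosts; the
two-minute wait is a guess, not a documented value):

    import subprocess
    import time

    def ha(*args):
        subprocess.check_call(["hosted-engine"] + list(args))

    ha("--set-maintenance", "--mode=global")   # scores climb back to 2400
    time.sleep(120)                            # give the HA agents a minute or two
    ha("--set-maintenance", "--mode=none")
    ha("--vm-status")                          # check the scores afterwards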
On 06/10/2014 07:19 AM, Andrew Lau wrote:
I'm really having a hard time finding out why it's happening..
If I set the cluster to global for a minute or two, the scores will
reset back to 2400. Set maintenance mode to none, and all will be fine
until a migration occurs. It seems it tries to migra