Re: [ovirt-users] Add External Provider - OpenStack Image - Test - Failed to communicate with the External Provider

2014-04-22 Thread Udaya Kiran P
Hi,

I have found similar bugs reported in the links below:

http://lists.ovirt.org/pipermail/users/2013-September/016639.html

https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=1051577

https://bugzilla.redhat.com/show_bug.cgi?id=1017538


I tried setting the keystone URL in the engine. 

engine-config --set KeystoneAuthUrl=http://xx.xx.xx.xx:35357


But the error remains.

Please suggest.


Regards,
Udaya Kiran
On Tuesday, 22 April 2014 11:15 AM, Udaya Kiran P ukiran...@yahoo.in wrote:
 
Hello!

I am trying to add OpenStack Glance as an External Provider to oVirt 3.4.
After filling in the authentication details and clicking the Test button, it
displays the error "Failed to communicate with the External Provider".

I was able to run the curl command for glance image-list against the same
OpenStack setup from the oVirt Engine machine.

The following errors were logged in engine.log:

2014-04-22 11:24:55,464 INFO  
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] 
(ajp--127.0.0.1-8702-5) [6af23c05] Running command: 
TestProviderConnectivityCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: System
2014-04-22 11:24:55,541 ERROR 
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] 
(ajp--127.0.0.1-8702-5) [6af23c05] Command 
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand throw Vdc 
Bll exception. With error message VdcBLLException: (Failed with error 
PROVIDER_FAILURE and code 5050)
2014-04-22 11:24:55,625 ERROR 
[org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] 
(ajp--127.0.0.1-8702-5) [6af23c05] Transaction rolled-back for command: 
org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand.

Any inputs to resolve this?

Regards,
Udaya Kiran
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Add External Provider - OpenStack Image - Test - Failed to communicate with the External Provider

2014-04-22 Thread Elad Ben Aharon
After setting the Keystone URL in the DB, did you restart the ovirt-engine service?

- Original Message -
From: Udaya Kiran P ukiran...@yahoo.in
To: users users@ovirt.org
Sent: Tuesday, April 22, 2014 9:04:04 AM
Subject: Re: [ovirt-users] Add External Provider - OpenStack Image - Test - Failed to communicate with the External Provider

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Add External Provider - OpenStack Image - Test - Failed to communicate with the External Provider

2014-04-22 Thread Udaya Kiran P
I changed the KeystoneAuthUrl to http://10.10.125.58:5000/v2.0.

It worked. 

Thanks for your help.

Regards,
Udaya Kiran
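
For reference, the full working sequence on the engine host boiled down to the
following (assuming, as in the URL above, that the Keystone public endpoint
listens on port 5000):

engine-config --set KeystoneAuthUrl=http://10.10.125.58:5000/v2.0
service ovirt-engine restart
engine-config -g KeystoneAuthUrl    # should now report the :5000/v2.0 URL
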
On , Udaya Kiran P ukiran...@yahoo.in wrote:
 
Hi Elad,

Yes.

I executed service ovirt-engine restart.

Now I get,

#engine-config -g KeystoneAuthUrl
KeystoneAuthUrl: http://10.10.125.58:35357 version: general

Are there any other settings to be changed, or logs which could point to what is
causing this?

Thank You,

Regards,
Udaya Kiran

On Tuesday, 22 April 2014 11:51 AM, Elad Ben Aharon ebena...@redhat.com wrote:
 
After setting the Keystone URL in the DB, did you restart the ovirt-engine service?

- Original Message -
From: Udaya Kiran P ukiran...@yahoo.in
To: users users@ovirt.org
Sent: Tuesday, April 22, 2014 9:04:04 AM
Subject: Re: [ovirt-users] Add External Provider - OpenStack Image - Test - Failed to communicate with the External Provider

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-22 Thread Sven Kieske
Hi,

thanks for the very detailed answers.

So here is another question:

How are MACs that were assigned by hand handled?
Do they also get registered with the global or with
the datacenter pool?
Are they tracked at all?
I'm currently assigning MACs via the API directly
to the VMs and do not let oVirt decide itself
which MAC goes where.

On 18.04.2014 12:17, Martin Mucha wrote:
 Hi, 
 
 I'll try to describe it little bit more. Lets say, that we've got one data 
 center. It's not configured yet to have its own mac pool. So in system is 
 only one, global pool. We create few VMs and it's NICs will obtain its MAC 
 from this global pool, marking them as used. Next we alter data center 
 definition, so now it uses it's own mac pool. In system from this point on 
 exists two mac pools, one global and one related to this data center, but 
 those allocated MACs are still allocated in global pool, since new data 
 center creation does not (yet) contain logic to get all assigned MACs related 
 to this data center and reassign them in new pool. However, after app restart 
 all VmNics are read from db and placed to appropriate pools. Lets assume, 
 that we've performed such restart. Now we realized, that we actually don't 
 want that data center have own mac pool, so we alter it's definition removing 
 mac pool ranges. Pool related to this data center will be removed and it's 
 content will be 
  moved to a scope above this data center -- into global scope pool. We know, 
 that everything what's allocated in pool to be removed is still used, but we 
 need to track it elsewhere and currently there's just one option, global 
 pool. So to answer your last question. When I remove scope, it's pool is gone 
 and its content moved elsewhere. Next, when MAC is returned to the pool, the 
 request goes like: give me pool for this virtual machine, and whatever pool 
 it is, I'm returning this MAC to it. Clients of ScopedMacPoolManager do not 
 know which pool they're talking to. Decision, which pool is right for them, 
 is done behind the scenes upon their identification (I want pool for this 
 logical network).
 
 Notice, that there is one problem in deciding which scope/pool to use. 
 There are places in code, which requires pool related to given data center, 
 identified by guid. For that request, only data center scope or something 
 broader like global scope can be returned. So even if one want to use one 
 pool per logical network, requests identified by data center id still can 
 return only data center scope or broader, and there are no chance returning 
 pool related to logical network (except for situation, where there is sole 
 logical network in that data center).
 
 Thanks for suggestion for another scopes. One question: if we're implementing 
 them, would you like just to pick a *sole* non-global scope you want to use 
 in your system (like data center related pools ONLY plus one global, or 
 logical network related pools ONLY plus one global) or would it be (more) 
 beneficial to you to have implemented some sort of cascading and overriding? 
 Like: this data center uses *this* pool, BUT except for *this* logical 
 network, which should use *this* one instead.
 
 I'll update feature page to contain these paragraphs.
 
 M.
 
 
 - Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Martin Mucha mmu...@redhat.com, users@ovirt.org, de...@ovirt.org
 Sent: Thursday, April 10, 2014 9:04:37 AM
 Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)
 
 On 04/10/2014 09:59 AM, Martin Mucha wrote:
 Hi,

 I'd like to notify you about new feature, which allows to specify distinct 
 MAC pools, currently one per data center.
 http://www.ovirt.org/Scoped_MacPoolManager

 any comments/proposals for improvement are very welcomed.
 Martin.
 
 
 (changed title to reflect content)
 
 When specified mac ranges for given scope, where there wasn't any 
 definition previously, allocated MAC from default pool will not be moved to 
 scoped one until next engine restart. Other way, when removing scoped 
 mac pool definition, all MACs from this pool will be moved to default one.
 
 Can you please elaborate on this one?
 
 as for potential other scopes - i can think of cluster, vm pool and 
 logical network as potential ones.
 
 one more question - how do you know to return the mac address to the 
 correct pool on delete?


-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Is there a fix or workaround for the Ovirt 3.3 Cluster and VDSM 4.14?

2014-04-22 Thread Sven Kieske
I just see a backport for engine 3.4, not for engine 3.3.

Am I missing something?
In which 3.3.x release was this fixed?

On 22.04.2014 00:17, Eli Mesika wrote:
 The correct BZ fixing this issue is BZ 1067096 and it was back-ported in 
 commit 1f86f681e8a6e11f2b026ff7bdc579ccf5edeac9

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [ovirt-devel] Feature Page: Mac Pool per DC

2014-04-22 Thread Martin Mucha
Hi,

I like to answer questions. The presence of questions in a motivated environment
means that there is a flaw in the documentation/study material, which needs to be
fixed :)

To answer your question:
You get the pool you want to use -- either the global one (explicitly, using the method
org.ovirt.engine.core.bll.network.macPoolManager.ScopedMacPoolManager#defaultScope())
or one related to some scope, which you identify somehow -- like in the previous mail:
"give me the pool for this data center". When you have this pool, you can allocate
*some* new MAC (the system decides which one it will be) or you can allocate an
*explicit* one, using a MAC address you've specified. I think the latter is what you
meant by assigning by hand. There is just a performance difference between these two
allocations. Once the pool which has to be used is identified, everything that comes
after happens on *this* pool.

Example (I'm using naming from the code here; storagePool is the db table for a data
center):
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe");

Let's discuss the parts of this command:

ScopedMacPoolManager.scopeFor()   // means: I want a scope ...
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId)   // ... which is
related to a storagePool and identified by storagePoolId
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool()   // ... and
I want the existing pool for this scope
ScopedMacPoolManager.scopeFor().storagePool(storagePoolId).getPool().addMac("00:1a:4a:15:c0:fe")
   // ... and I want to add this MAC address to it.

So in short, whatever you do with a pool you obtained, however you obtained it, happens
on this pool only. You do not have code-level control over which pool you get -- if the
system is configured to use a single pool only, then a request for a datacenter-related
pool still returns that sole one -- but once you have that pool, everything happens on
this pool, and, unless the datacenter configuration is altered, the same request for a
pool in the future should return the same pool.

Now a small spoiler (it's not merged to the production branch yet) -- the performance
difference between allocating a user-provided MAC and a MAC from the mac pool range:
you should try to avoid allocating a MAC which is outside the ranges of the configured
mac pool (either the global or a scoped one). It's perfectly OK to allocate a specific
MAC address from inside these ranges; that is actually a little bit more efficient than
letting the system pick one for you. But if you use one from outside those ranges, your
allocated MAC ends up in less memory-efficient storage (approx. 100 times less
efficient). So if you want to use user-specified MACs, you can, but tell the system
which range those MACs will come from (via the mac pool configuration).
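
As a practical aside (not from the original mail): on the engine the global pool range
is normally driven by the MacPoolRanges configuration value, so a sketch of widening it
to cover hand-picked MACs would be:

engine-config -g MacPoolRanges
engine-config --set MacPoolRanges=00:1a:4a:15:c0:00-00:1a:4a:15:c0:ff
service ovirt-engine restart    # engine-config changes only apply after a restart

The range above is only illustrative; the per-DC ranges introduced by this feature are
edited on the data center definition itself, not via engine-config.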

M.

- Original Message -
From: Sven Kieske s.kie...@mittwald.de
To: Martin Mucha mmu...@redhat.com, Itamar Heim ih...@redhat.com
Cc: users@ovirt.org, de...@ovirt.org
Sent: Tuesday, April 22, 2014 8:31:31 AM
Subject: Re: [ovirt-devel] [ovirt-users] Feature Page: Mac Pool per DC


Re: [ovirt-users] Is there a fix or workaround for the Ovirt 3.3 Cluster and VDSM 4.14?

2014-04-22 Thread Eli Mesika


- Original Message -
 From: Sven Kieske s.kie...@mittwald.de
 To: users@ovirt.org, Eli Mesika emes...@redhat.com
 Sent: Tuesday, April 22, 2014 9:42:43 AM
 Subject: Re: [ovirt-users] Is there a fix or workaround for the Ovirt 3.3 Cluster and VDSM 4.14?
 
 I just see a backport for engine 3.4 not for engine 3.3
 
 Am I missing something?
 In which 3.3.x release was this fixed?

We didn't fix that for 3.3.z; the BZ had no z-stream flag ...

 
 On 22.04.2014 00:17, Eli Mesika wrote:
  The correct BZ fixing this issue is BZ 1067096 and it was back-ported in
  commit 1f86f681e8a6e11f2b026ff7bdc579ccf5edeac9
 
 --
 Mit freundlichen Grüßen / Regards
 
 Sven Kieske
 
 Systemadministrator
 Mittwald CM Service GmbH & Co. KG
 Königsberger Straße 6
 32339 Espelkamp
 T: +49-5772-293-100
 F: +49-5772-293-333
 https://www.mittwald.de
 Geschäftsführer: Robert Meyer
 St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
 Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Query for adding non kvm hosts to ovirt engine

2014-04-22 Thread Peter Harris
I have a small test environment where I am trying to work out the details
of configuring oVirt + Gluster as our main customer support environment.
All boxes are running Scientific Linux 6.5, oVirt 3.4 and GlusterFS 3.5:

mgr1 : ovirt-engine installed and vdsm-gluster
vmhost1 : main vm server, added to mgr1 as host
vmhost2 : second vm server, added to mgr1 as host
strg1 : first gluster storage server
strg2 : second gluster storage server

mgr1 is an old box, hardware does not support virtualization
vmhost1 and vmhost2 have full vt capability
strg1 and strg2 hardware does not support virtualization

In the past, when I created a gluster-only ovirt-engine install, I have
been able to add strg1 and strg2 as hosts. Is there any way I can use mgr1
to manage both vm hosts and storage hosts, and add strg1 and strg2 to mgr1
as hosts? At the moment, I get the following error:
2014-04-22 10:27:37 DEBUG otopi.ovirt_host_deploy.hardware
hardware._isVirtualizationEnabled:186 virtualization support GenuineIntel
(cpu: False, bios: True)
2014-04-22 10:27:37 DEBUG otopi.ovirt_host_deploy.hardware
hardware.detect:202 Hardware supports virtualization
2014-04-22 10:27:37 DEBUG otopi.context context._executeMethod:152 method
exception
Traceback (most recent call last):
  File /tmp/ovirt-2p0fjrQMTb/pythonlib/otopi/context.py, line 142, in
_executeMethod
method['method']()
  File
/tmp/ovirt-2p0fjrQMTb/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py,
line 68, in _validate_virtualization
_('Hardware does not support virtualization')
RuntimeError: Hardware does not support virtualization
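
Side note: the "cpu: False, bios: True" line above presumably means ovirt-host-deploy
found no vmx/svm flags on the CPU even though the BIOS reports support; a rough way to
confirm that on the host itself is:

grep -cE 'vmx|svm' /proc/cpuinfo    # 0 = the CPU exposes no hardware virtualization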

Many thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Download oVirt Node install on wiki still points to 3.3

2014-04-22 Thread Fabian Deutsch
On Thursday, 17.04.2014 at 10:59 -0400, Brian Proffitt wrote:
 Fixed it!
 
 Thanks, Jorick, for the catch!

+1

Thanks
fabian


 BKP
 
 - Original Message -
  From: Sandro Bonazzola sbona...@redhat.com
  To: Jorick Astrego j.astr...@netbulae.eu, users users@ovirt.org, 
  Brian Proffitt bprof...@redhat.com,
  Fabian Deutsch fabi...@redhat.com
  Sent: Thursday, April 17, 2014 10:46:35 AM
  Subject: Re: [ovirt-users] Download oVirt Node install on wiki still points 
  to 3.3
  
  Il 17/04/2014 13:53, Jorick Astrego ha scritto:
   I have wiki access but don't know if it's intentional that the oVirt Node
   release link still points to 3.3
   (http://resources.ovirt.org/releases/3.3/iso/)
   
   http://www.ovirt.org/Quick_Start_Guide#Install_Hosts
   
   Download oVirt Node installation CD
   
   Download the latest version of oVirt Node from the oVirt Node release page
   http://resources.ovirt.org/releases/3.3/iso/ and burn the ISO image
   onto a disc.
   
  
  No, it's not intentional.
  Feel free to update it for using latest version of oVirt Node:
  http://resources.ovirt.org/releases/stable/iso/
  
  
  
   
   Kind regards,
   
   Jorick Astrego
   Netbulae B.V.
   
   
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
   
  
  
  --
  Sandro Bonazzola
  Better technology. Faster innovation. Powered by community collaboration.
  See how it works at redhat.com
  



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] converting windows guests with v2v, where to put the iso?

2014-04-22 Thread Richard W.M. Jones
On Thu, Apr 17, 2014 at 09:31:51AM -0500, Jeremiah Jahn wrote:
 I'm getting the following error. Red Hat seems to think the answer is to
 use their virtio-win rpm. As far as I can tell, this is not available
 anywhere, So I've just got to go and get the iso that it installs from
 kvm.org. Ok, done. now that I have said iso, where the heck do I put
 it, since it doesn't seem to align with the structure that v2v is
 looking for.
 
 virt-v2v: Installation failed because the following files referenced
 in the configuration file are required, but missing:
 /usr/share/virtio-win/drivers/i386/WinXP

Matt, do we have any documentation about where to put this ISO?
I believe it's got something to do with the contents of the file
/etc/virt-v2v.conf ...

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines.  Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] converting windows guests with v2v, where to put the iso?

2014-04-22 Thread Matthew Booth
On 22/04/14 11:26, Richard W.M. Jones wrote:
 On Thu, Apr 17, 2014 at 09:31:51AM -0500, Jeremiah Jahn wrote:
 I'm getting the following error. Red Hat seems to think the answer is to
 use their virtio-win rpm. As far as I can tell, this is not available
 anywhere, So I've just got to go and get the iso that it installs from
 kvm.org. Ok, done. now that I have said iso, where the heck do I put
 it, since it doesn't seem to align with the structure that v2v is
 looking for.

 virt-v2v: Installation failed because the following files referenced
 in the configuration file are required, but missing:
 /usr/share/virtio-win/drivers/i386/WinXP
 
 Matt, do we have any documentation about where to put this ISO?
 I believe it's got something to do with the contents of the file
 /etc/virt-v2v.conf ...

You have 2 options:

1. Install the virtio-win rpm, which is available in the RHEL
Supplementary subchannel.

2. Source the expanded Windows virtio drivers some other way, and put
them in the expected place. In the above case, it's expecting to find
the virtio driver (collection of files including *.inf) for i386 Windows
XP in the above directory.
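
For option 2, a rough sketch of what that looks like -- the target path is the one from
the error above, while the source path on the downloaded ISO varies between builds and
is only a placeholder here:

mkdir -p /usr/share/virtio-win/drivers/i386/WinXP
mount -o loop /path/to/virtio-win.iso /mnt
cp /mnt/XP-I386-DRIVER-DIR/*.inf /mnt/XP-I386-DRIVER-DIR/*.sys /mnt/XP-I386-DRIVER-DIR/*.cat \
   /usr/share/virtio-win/drivers/i386/WinXP/
umount /mnt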

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine health check issues

2014-04-22 Thread René Koch

Hi,

I rebooted one of my oVirt hosts today, and the result is that I now
can't start hosted-engine anymore.


ovirt-ha-agent isn't running because the lockspace file is missing 
(sanlock complains about it).
So I tried to start hosted-engine with --vm-start and I get the 
following errors:


== /var/log/sanlock.log ==
2014-04-22 12:38:17+0200 654 [3093]: r2 cmd_acquire 2,9,5733 invalid 
lockspace found -1 failed 0 name 2851af27-8744-445d-9fb1-a0d083c8dc82


== /var/log/messages ==
Apr 22 12:38:17 ovirt-host02 sanlock[3079]: 2014-04-22 12:38:17+0200 654 
[3093]: r2 cmd_acquire 2,9,5733 invalid lockspace found -1 failed 0 name 
2851af27-8744-445d-9fb1-a0d083c8dc82
Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port 2(vnet0) entering 
disabled state

Apr 22 12:38:17 ovirt-host02 kernel: device vnet0 left promiscuous mode
Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port 2(vnet0) entering 
disabled state


== /var/log/vdsm/vdsm.log ==
Thread-21::DEBUG::2014-04-22 
12:38:17,563::libvirtconnection::124::root::(wrapper) Unknown 
libvirterror: ecode: 38 edom: 42 level: 2 message: Failed to acquire 
lock: No space left on device
Thread-21::DEBUG::2014-04-22 
12:38:17,563::vm::2263::vm.Vm::(_startUnderlyingVm) 
vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::_ongoingCreations released
Thread-21::ERROR::2014-04-22 
12:38:17,564::vm::2289::vm.Vm::(_startUnderlyingVm) 
vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start process failed

Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 2249, in _startUnderlyingVm
self._run()
  File /usr/share/vdsm/vm.py, line 3170, in _run
self._connection.createXML(domxml, flags),
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, 
line 92, in wrapper

ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 2665, in 
createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', 
conn=self)

libvirtError: Failed to acquire lock: No space left on device

== /var/log/messages ==
Apr 22 12:38:17 ovirt-host02 vdsm vm.Vm ERROR 
vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start process 
failed#012Traceback (most recent call last):#012  File 
/usr/share/vdsm/vm.py, line 2249, in _startUnderlyingVm#012 
self._run()#012  File /usr/share/vdsm/vm.py, line 3170, in _run#012 
 self._connection.createXML(domxml, flags),#012  File 
/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 92, 
in wrapper#012ret = f(*args, **kwargs)#012  File 
/usr/lib64/python2.6/site-packages/libvirt.py, line 2665, in 
createXML#012if ret is None:raise libvirtError('virDomainCreateXML() 
failed', conn=self)#012libvirtError: Failed to acquire lock: No space 
left on device


== /var/log/vdsm/vdsm.log ==
Thread-21::DEBUG::2014-04-22 
12:38:17,569::vm::2731::vm.Vm::(setDownStatus) 
vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::Changed state to Down: 
Failed to acquire lock: No space left on device



"No space left on device" is nonsense, as there is enough space (I had this
issue last time as well, where I had to patch machine.py, but this file
is now Python 2.6.6 compatible).


Any idea what prevents hosted-engine from starting?
ovirt-ha-broker, vdsmd and sanlock are running btw.
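
Two quick checks that may help narrow this down (a sketch; the mount point and
storage-domain UUID below are placeholders for the hosted-engine storage path):

sanlock client status    # lists the lockspaces sanlock actually holds
ls -l /rhev/data-center/mnt/SERVER:_path/SD_UUID/ha_agent/hosted-engine.lockspace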

Btw, I can see in the log that the JSON RPC server module is missing - which
package is required for CentOS 6.5?
Apr 22 12:37:14 ovirt-host02 vdsm vds WARNING Unable to load the json 
rpc server module. Please make sure it is installed.



Thanks,
René



On 04/17/2014 10:02 AM, Martin Sivak wrote:

Hi,


How can I disable notifications?


The notification is configured in /etc/ovirt-hosted-engine-ha/broker.conf 
section notification.
The email is sent when the key state_transition exists and the string 
OldState-NewState contains the (case insensitive) regexp from the value.


Is it intended to send out these messages and detect that ovirt engine
is down (which is false anyway), but not to restart the vm?


Forget about emails for now and check the 
/var/log/ovirt-hosted-engine-ha/agent.log and broker.log (and attach them as 
well btw).


oVirt hosts think that hosted engine is down because it seems that hosts
can't write to hosted-engine.lockspace due to glusterfs issues (or at
least I think so).


Do the hosts just think so, or can they really not write there? The lockspace is managed by
sanlock and our HA daemons do not touch it at all. We only ask sanlock to
make sure we have a unique server id.


Is is possible or planned to make the whole ha feature optional?


Well the system won't perform any automatic actions if you put the hosted 
engine to global maintenance and only start/stop/migrate the VM manually.
I would discourage you from stopping agent/broker, because the engine itself 
has some logic based on the reporting.

Regards

--
Martin Sivák
msi...@redhat.com
Red Hat Czech
RHEV-M SLA / Brno, CZ

- Original Message -

On 04/15/2014 04:53 PM, Jiri Moskovcak wrote:

On 04/14/2014 10:50 AM, René Koch wrote:

Hi,

I have 

Re: [ovirt-users] ovirt-engine install getting failed

2014-04-22 Thread Subhashish Laha
It worked. Thanks Latcho






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine error -243

2014-04-22 Thread Kevin Tibi
Hi all,

I have a problem with my hosted engine. Every 10 minutes I get an event in
the engine:

VM HostedEngine is down. Exit message: internal error Failed to acquire
lock: error -243

My data domain is a local NFS export.

Thanks for your help.

Kevin.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Query for adding non kvm hosts to ovirt engine

2014-04-22 Thread noc

On 22-4-2014 11:56, Peter Harris wrote:
I have a small test environment, where I am trying to work out the 
detail of configuring ovirt + gluster as our main customer support 
environment. All boxes running Scientific Linux 6.5 and ovirt 3.4 and 
glusterfs 3.5:


mgr1 : ovirt-engine installed and vdsm-gluster
vmhost1 : main vm server, added to mgr1 as host
vmhost2 : second vm server, add to mgr1 as host
strg1 : first gluster storage server
strg2 : second gluster storage server

mgr1 is an old box, hardware does not support virtualization
vmhost1 and vmhost2 have full vt capability
strg1 and strg2 hardware does not support virtualization

In the past, if I have created a gluster only ovirt-engine install, I 
have been able to add strg1 and strg2 as hosts. Is there any way I can 
use mgr1 to manage both vm hosts and storage hosts, and add strg1 and 
strg2 to mgr1 as hosts. At the moment, I get the following error:
2014-04-22 10:27:37 DEBUG otopi.ovirt_host_deploy.hardware 
hardware._isVirtualizationEnabled:186 virtualization support 
GenuineIntel (cpu: False, bios: True)
2014-04-22 10:27:37 DEBUG otopi.ovirt_host_deploy.hardware 
hardware.detect:202 Hardware supports virtualization
2014-04-22 10:27:37 DEBUG otopi.context context._executeMethod:152 
method exception

Traceback (most recent call last):
  File /tmp/ovirt-2p0fjrQMTb/pythonlib/otopi/context.py, line 142, 
in _executeMethod

method['method']()
  File 
/tmp/ovirt-2p0fjrQMTb/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py, 
line 68, in _validate_virtualization

_('Hardware does not support virtualization')
RuntimeError: Hardware does not support virtualization


Create a second cluster named 'Storage' :-), make sure the virt
checkbox is unchecked and the gluster checkbox is checked, and add both
storage hosts to this cluster. Hopefully during the install it won't
complain about the virt bits. I run such a setup, but my storage hosts
are virt capable, so I don't know if this will work for you.


Regards,

Joop

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Call for Papers Deadline in Three Weeks: Fossetcon 2014

2014-04-22 Thread Brian Proffitt
Conference: Fossetcon 2014
Information: Fossetcon relies on the Free and Open Source Software Community 
for support in making Fossetcon a memorable event that aims to be: The Gateway 
to the Open Source Community. They are currently seeking talks on 
virtualization.
Date: Sept 11-13, 2014
Location: Orlando, Florida
Website: http://fossetcon.org/2014/

Call for Papers Deadline: June 15, 2014
Call for Papers URL: http://fossetcon.org/2014/call-papers-and-registration

-- 
Brian Proffitt

oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Call for Papers Deadline in One Week: Fossetcon 2014

2014-04-22 Thread Brian Proffitt
Conference: Fossetcon 2014
Information: Fossetcon relies on the Free and Open Source Software Community 
for support in making Fossetcon a memorable event that aims to be: The Gateway 
to the Open Source Community. They are currently seeking talks on 
virtualization.
Date: Sept 11-13, 2014
Location: Orlando, Florida
Website: http://fossetcon.org/2014/

Call for Papers Deadline: June 15, 2014
Call for Papers URL: http://fossetcon.org/2014/call-papers-and-registration

-- 
Brian Proffitt

oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] converting windows guests with v2v, where to put the iso?

2014-04-22 Thread Jeremiah Jahn
The Supplementary channel isn't available to CentOS/SL Linux folks or
through EPEL, so some oVirt/distro-agnostic documentation would be
great. There is also the same problem with the Windows guest agents.
I'm really not sure why these two packages aren't shipped as
part of oVirt and are RHEL-only.

Just my two cents.

On Tue, Apr 22, 2014 at 5:31 AM, Matthew Booth mbo...@redhat.com wrote:
 On 22/04/14 11:26, Richard W.M. Jones wrote:
 On Thu, Apr 17, 2014 at 09:31:51AM -0500, Jeremiah Jahn wrote:
 I'm getting the following error. Red Hat seems to think the answer is to
 use their virtio-win rpm. As far as I can tell, this is not available
 anywhere, So I've just got to go and get the iso that it installs from
 kvm.org. Ok, done. now that I have said iso, where the heck do I put
 it, since it doesn't seem to align with the structure that v2v is
 looking for.

 virt-v2v: Installation failed because the following files referenced
 in the configuration file are required, but missing:
 /usr/share/virtio-win/drivers/i386/WinXP

 Matt, do we have any documentation about where to put this ISO?
 I believe it's got something to do with the contents of the file
 /etc/virt-v2v.conf ...

 You have 2 options:

 1. Install the virtio-win rpm, which is available in the RHEL
 Supplementary subchannel.

 2. Source the expanded Windows virtio drivers some other way, and put
 them in the expected place. In the above case, it's expecting to find
 the virtio driver (collection of files including *.inf) for i386 Windows
 XP in the above directory.

 Matt
 --
 Matthew Booth
 Red Hat Engineering, Virtualisation Team

 Phone: +442070094448 (UK)
 GPG ID:  D33C3490
 GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
I am.

On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote:
 Ovirt User wrote:

 Hello,

 anyone are using ovirt with glusterFS as storage domain in production
 environment ?



 Not directly production but almost. Having problems?

 Regards,

 Joop


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Gabi C
Second that!

3 nodes acting as both VM and Gluster hosts, with the engine on a separate VM (on an ESXi
machine).


On Tue, Apr 22, 2014 at 3:53 PM, Jeremiah Jahn 
jerem...@goodinassociates.com wrote:

 I am.

 On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote:
  Ovirt User wrote:
 
  Hello,
 
  anyone are using ovirt with glusterFS as storage domain in production
  environment ?
 
 
 
  Not directly production but almost. Having problems?
 
  Regards,
 
  Joop
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
Nothing too complicated.
SL 6.x and 5
8 VM hosts running on a Hitachi Blade Symphony.
25 server guests
15+ desktop Windows guests
3x 12TB storage servers (all 1TB-based RAID 10 SSDs), 10Gbs PTP
between 2 of the servers, with one geolocation server offsite.
Most of the server images/LUNs are exported through 4Gbps FC cards
from the servers using LIO, except the desktop machines, which are
attached directly to the gluster storage pool since normal users can
create them. Various servers use the gluster system directly for
storing a large number of documents and providing public access, to
the tune of about 300 to 400 thousand requests per day. We
aggregate, provide public access, e-filing, e-payments and case
management software to most of the Illinois circuit court system.
Gluster has worked like a champ.

On Tue, Apr 22, 2014 at 8:05 AM, Ovirt User ldrt8...@gmail.com wrote:
 what type of configuration and use case ?

 On 22 Apr 2014, at 14:53, Jeremiah Jahn jerem...@goodinassociates.com wrote:

 I am.

 On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote:
 Ovirt User wrote:

Hello,

 anyone are using ovirt with glusterFS as storage domain in production
 environment ?



 Not directly production but almost. Having problems?

 Regards,

 Joop


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt + GLUSTER

2014-04-22 Thread Jeremiah Jahn
Oh, and the engine is currently on a separate server, but it will probably
move to one of the storage servers.

On Tue, Apr 22, 2014 at 8:43 AM, Jeremiah Jahn
jerem...@goodinassociates.com wrote:
 Nothing too complicated.
 SL 6x and 5
 8 vm hosts running on a hitachi blade symphony.
 25 Server guests
 15+ desktop windows guests
 3x 12TB storage servers (All 1TB based raid 10 SSDs)  10Gbs PTP
 between 2 of the servers, with one geolocation server offsite.
 Most of the server images/luns are exported through 4Gbps FC cards
 from the servers using LIO except the desktop machines with are
 attached directly to the gluster storage pool since normal users can
 create them.  Various servers use use the gluster system directly for
 storing a large number of  documents and providing public access to
 the tune of about 300 to 400 thousand requests per day.  We
 aggregate, provide public access, efiling, e-payments and case
 management software to most of the illinois circuit court system.
 Gluster has worked like a champ.

 On Tue, Apr 22, 2014 at 8:05 AM, Ovirt User ldrt8...@gmail.com wrote:
 what type of configuration and use case ?

 On 22 Apr 2014, at 14:53, Jeremiah Jahn jerem...@goodinassociates.com wrote:

 I am.

 On Mon, Apr 21, 2014 at 1:50 PM, Joop jvdw...@xs4all.nl wrote:
 Ovirt User wrote:

Hello,

 anyone are using ovirt with glusterFS as storage domain in production
 environment ?



 Not directly production but almost. Having problems?

 Regards,

 Joop


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine health check issues

2014-04-22 Thread Itamar Heim

On 04/14/2014 11:50 AM, René Koch wrote:

Hi,

I have some issues with hosted engine status.

oVirt hosts think that hosted engine is down because it seems that hosts
can't write to hosted-engine.lockspace due to glusterfs issues (or at
least I think so).

Here's the output of vm-status:

# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : False
Hostname   : 10.0.200.102
Host ID: 1
Engine status  : unknown stale-data
Score  : 2400
Local maintenance  : False
Host timestamp : 1397035677
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1397035677 (Wed Apr  9 11:27:57 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.200.101
Host ID: 2
Engine status  : {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}
Score  : 0
Local maintenance  : False
Host timestamp : 1397464031
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1397464031 (Mon Apr 14 10:27:11 2014)
 host-id=2
 score=0
 maintenance=False
 state=EngineUnexpectedlyDown
 timeout=Mon Apr 14 10:35:05 2014

oVirt engine is sending me 2 emails every 10 minutes with the following
subjects:
- ovirt-hosted-engine state transition EngineDown-EngineStart
- ovirt-hosted-engine state transition EngineStart-EngineUp

In oVirt webadmin I can see the following message:
VM HostedEngine is down. Exit message: internal error Failed to acquire
lock: error -243.

These messages are really annoying as oVirt isn't doing anything with
hosted engine - I have an uptime of 9 days in my engine vm.

So my questions are now:
Is it intended to send out these messages and detect that ovirt engine
is down (which is false anyway), but not to restart the vm?

How can I disable notifications? I'm planning to write a Nagios plugin
which parses the output of hosted-engine --vm-status and only Nagios
should notify me, not hosted-engine script.

Is it possible or planned to make the whole HA feature optional? I
really, really hate cluster software, as it causes more trouble
than standalone machines, and in my case the hosted-engine HA feature
really causes trouble (and I haven't had a hardware or network outage
yet, only issues with the hosted-engine HA agent). I don't need any HA
feature for the hosted engine. I just want to run the engine virtualized on
oVirt, and if the engine VM fails (e.g. because of issues with a host) I'll
restart it on another node.

Thanks,
René




I'm pretty sure we removed hosted-engine on gluster due to concerns 
around the locking issues.

Is the gluster configured with quorum to avoid split brains?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine health check issues

2014-04-22 Thread René Koch

On 04/22/2014 04:04 PM, Itamar Heim wrote:

On 04/14/2014 11:50 AM, René Koch wrote:

Hi,

I have some issues with hosted engine status.

oVirt hosts think that hosted engine is down because it seems that hosts
can't write to hosted-engine.lockspace due to glusterfs issues (or at
least I think so).

Here's the output of vm-status:

# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date  : False
Hostname   : 10.0.200.102
Host ID: 1
Engine status  : unknown stale-data
Score  : 2400
Local maintenance  : False
Host timestamp : 1397035677
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1397035677 (Wed Apr  9 11:27:57 2014)
 host-id=1
 score=2400
 maintenance=False
 state=EngineUp


--== Host 2 status ==--

Status up-to-date  : True
Hostname   : 10.0.200.101
Host ID: 2
Engine status  : {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}
Score  : 0
Local maintenance  : False
Host timestamp : 1397464031
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=1397464031 (Mon Apr 14 10:27:11 2014)
 host-id=2
 score=0
 maintenance=False
 state=EngineUnexpectedlyDown
 timeout=Mon Apr 14 10:35:05 2014

oVirt engine is sending me 2 emails every 10 minutes with the following
subjects:
- ovirt-hosted-engine state transition EngineDown-EngineStart
- ovirt-hosted-engine state transition EngineStart-EngineUp

In oVirt webadmin I can see the following message:
VM HostedEngine is down. Exit message: internal error Failed to acquire
lock: error -243.

These messages are really annoying as oVirt isn't doing anything with
hosted engine - I have an uptime of 9 days in my engine vm.

So my questions are now:
Is it intended to send out these messages and detect that ovirt engine
is down (which is false anyway), but not to restart the vm?

How can I disable notifications? I'm planning to write a Nagios plugin
which parses the output of hosted-engine --vm-status and only Nagios
should notify me, not hosted-engine script.

Is it possible or planned to make the whole HA feature optional? I
really, really hate cluster software, as it causes more trouble
than standalone machines, and in my case the hosted-engine HA feature
really causes trouble (and I haven't had a hardware or network outage
yet, only issues with the hosted-engine HA agent). I don't need any HA
feature for the hosted engine. I just want to run the engine virtualized on
oVirt, and if the engine VM fails (e.g. because of issues with a host) I'll
restart it on another node.

Thanks,
René




I'm pretty sure we removed hosted-engine on gluster due to concerns
around the locking issues.
is the gluster configured with quorum to avoid split brains?



At the moment there's no quorum (1 host online is enough - but the GlusterFS
network is on dedicated NICs which are directly connected between the two
hosts), as I'm waiting for additional memory and disks for the other 2
nodes (so I have only 2 nodes atm).
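
For reference, once the third node is in, quorum can be turned on for the volume; a
sketch (double-check the option names against the GlusterFS 3.5 docs):

gluster volume info engine                                     # quorum options appear here once set
gluster volume set engine cluster.quorum-type auto             # client-side quorum
gluster volume set engine cluster.server-quorum-type server    # server-side quorum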


But GlusterFS looks fine (now) - same for info heal-failed and info 
split-brain:


# gluster volume heal engine info
Gathering Heal info on volume engine has been successful

Brick ovirt-host01-gluster:/data/engine
Number of entries: 0

Brick ovirt-host02-gluster:/data/engine
Number of entries: 0


I can also create (touch) the lockspace file on the mounted GlusterFS 
volume - so imho GlusterFS isn't blocking libvirt.
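
The Nagios check mentioned earlier in the thread can stay very small; a sketch,
assuming a healthy engine prints 'health': 'good' in the same format as the 'bad'
example shown above:

#!/bin/bash
# check_hosted_engine - hypothetical plugin name; parses `hosted-engine --vm-status`
out=$(hosted-engine --vm-status 2>&1)
if echo "$out" | grep -q "'health': 'good'"; then
    echo "OK - hosted engine reports healthy"
    exit 0
else
    echo "CRITICAL - hosted engine not reported healthy on any host"
    exit 2
fi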



Regards,
René
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Aharon Canan
Hi,

Please check https://bugzilla.redhat.com/show_bug.cgi?id=1017289.
I know that snapshots on gluster domains have a few issues...



Regards, 
__ 
Aharon Canan 


- Original Message -

From: Steve Dainard sdain...@miovision.com 
To: d...@redhat.com 
Cc: users users@ovirt.org 
Sent: Tuesday, April 22, 2014 5:31:18 PM 
Subject: Re: [ovirt-users] Ovirt snapshot failing on one VM 

Sorry for the confusion. 

I attempted to take a live snapshot of a running VM. After that failed, I 
migrated the VM to another host, and attempted the live snapshot again without 
success, eliminating a single host as the cause of failure. 

Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is CentOS 6.5. 

Package versions: 
libvirt-0.10.2-29.el6_5.5.x86_64 
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64 
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64 
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64 
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64 
vdsm-4.13.3-4.el6.x86_64 
vdsm-gluster-4.13.3-4.el6.noarch 


I made another live snapshot attempt at 10:21 EST today, full vdsm.log 
attached, and a truncated engine.log. 

Thanks, 

Steve 


On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron  d...@redhat.com  wrote: 


please explain the flow of what you are trying to do, are you trying to live 
migrate the disk (from one storage to another), are you trying to migrate the 
vm and after vm migration is finished you try to take a live snapshot of the 
vm? or are you trying to take a live snapshot of the vm during a vm migration 
from host1 to host2? 

Please attach full vdsm logs from any host you are using (if you are trying to 
migrate the vm from host1 to host2) + please attach engine log. 

Also, what is the vdsm, libvirt and qemu versions, what ovirt version are you 
using and what is the storage you are using? 

Thanks, 

Dafna 




On 04/22/2014 02:12 PM, Steve Dainard wrote: 


I've attempted migrating the vm to another host and taking a snapshot, but I 
get this error: 

6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22 01:09:37,296::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51

A bit more of the vdsm log is attached. 

Other vm's are snapshotting without issue. 



Any help appreciated, 

Steve


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




-- 
Dafna Ron 




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22 10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED:
err = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: error while creating qcow2: No such file or directory\n'; rc = 1


from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error

Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in create
    srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160, in _create
    volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
    raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
CannotCloneVolume: Cannot clone volume: 'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7, dst=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: Error creating a new volume: ([Formatting \'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\', fmt=qcow2 size=21474836480 backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off cluster_size=65536 ],)'
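
To narrow down the "No such file or directory", the same qcow2 creation can be replayed
by hand on the SPM host -- a sketch assembled from the paths in the log above, not the
exact command vdsm runs:

cd /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
qemu-img create -f qcow2 \
  -o backing_file=../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,backing_fmt=qcow2 \
  87efa937-b31f-4bb1-aee1-0ee14a0dc6fb 21474836480
# if the cd itself fails, the image directory is missing on this host's mount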



Do you have any alert in the webadmin to restart the VM?

Dafna

On 04/22/2014 03:31 PM, Steve Dainard wrote:

Sorry for the confusion.

I attempted to take a live snapshot of a running VM. After that 
failed, I migrated the VM to another host, and attempted the live 
snapshot again without success, eliminating a single host as the cause 
of failure.


Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is CentOS 6.5.

Package versions:
libvirt-0.10.2-29.el6_5.5.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
vdsm-4.13.3-4.el6.x86_64
vdsm-gluster-4.13.3-4.el6.noarch


I made another live snapshot attempt at 10:21 EST today, full vdsm.log 
attached, and a truncated engine.log.


Thanks,

Steve


On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron d...@redhat.com wrote:


please explain the flow of what you are trying to do, are you
trying to live migrate the disk (from one storage to another), are
you trying to migrate the vm and after vm migration is finished
you try to take a live snapshot of the vm? or are you trying to
take a live snapshot of the vm during a vm migration from host1 to
host2?

Please attach full vdsm logs from any host you are using (if you
are trying to migrate the vm from host1 to host2) + please attach
engine log.

Also, what is the vdsm, libvirt and qemu versions, what ovirt
version are you using and what is the storage you are using?

Thanks,

Dafna




On 04/22/2014 02:12 PM, Steve Dainard wrote:

I've attempted migrating the vm to another host and taking a
snapshot, but I get this error:

6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22
01:09:37,296::volume::286::Storage.Volume::(clone)
Volume.clone: can't clone:

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7
to

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51

A bit more of the vdsm log is attached.

Other vm's are snapshotting without issue.



Any help appreciated,

Steve


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron






--
Dafna Ron

Re: [ovirt-users] Feature Page: Mac Pool per DC

2014-04-22 Thread Itamar Heim

On 04/18/2014 01:17 PM, Martin Mucha wrote:

Hi,

I'll try to describe it a little bit more. Let's say we've got one data center. It's not yet configured to have its own MAC pool, so there is only one, global pool in the system. We create a few VMs and their NICs obtain their MACs from this global pool, marking them as used. Next we alter the data center definition so that it now uses its own MAC pool. From this point on two MAC pools exist in the system, one global and one related to this data center, but those already-allocated MACs are still allocated in the global pool, since data center creation does not (yet) contain logic to find all assigned MACs related to this data center and reassign them in the new pool. However, after an app restart all VmNics are read from the DB and placed into the appropriate pools. Let's assume we've performed such a restart. Now we realize that we actually don't want that data center to have its own MAC pool, so we alter its definition, removing the MAC pool ranges. The pool related to this data center will be removed and its content will be moved to a scope above this data center -- into the global scope pool. We know that everything allocated in the pool being removed is still in use, but we need to track it elsewhere, and currently there's just one option, the global pool. So, to answer your last question: when I remove a scope, its pool is gone and its content is moved elsewhere. When a MAC is later returned to the pool, the request goes like: "give me the pool for this virtual machine", and whatever pool that is, I'm returning the MAC to it. Clients of ScopedMacPoolManager do not know which pool they're talking to. The decision which pool is right for them is made behind the scenes based on their identification ("I want the pool for this logical network").


Notice that there is one problem in deciding which scope/pool to use. There are places in the code which require the pool related to a given data center, identified by GUID. For such a request, only the data center scope or something broader, like the global scope, can be returned. So even if one wants to use one pool per logical network, requests identified by a data center id can still only return the data center scope or broader, and there is no chance of returning a pool related to a logical network (except for the situation where there is a sole logical network in that data center).

Thanks for the suggestion of additional scopes. One question: if we implement them, would you like to just pick a *sole* non-global scope to use in your system (like data center related pools ONLY plus one global, or logical network related pools ONLY plus one global), or would it be (more) beneficial to you to have some sort of cascading and overriding implemented? Like: this data center uses *this* pool, BUT except for *this* logical network, which should use *this* one instead.

I'll update the feature page to contain these paragraphs.


I have to say I really don't like the notion of having to restart the 
engine for a change done via the webadmin to take effect.
Also, if I understand your flow correctly, MAC addresses may not go back 
to the pool until an engine restart anyway, since the change only takes 
effect on engine restart, at which point the available MACs per scope 
will be re-calculated.






M.


- Original Message -
From: Itamar Heim ih...@redhat.com
To: Martin Mucha mmu...@redhat.com, users@ovirt.org, de...@ovirt.org
Sent: Thursday, April 10, 2014 9:04:37 AM
Subject: Re: [ovirt-users] Feature Page: Mac Pool per DC (was: new feature)

On 04/10/2014 09:59 AM, Martin Mucha wrote:

Hi,

I'd like to notify you about a new feature which allows specifying distinct MAC 
pools, currently one per data center.
http://www.ovirt.org/Scoped_MacPoolManager

any comments/proposals for improvement are very welcomed.
Martin.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




(changed title to reflect content)


When MAC ranges are specified for a given scope where there was no definition previously, 
MACs already allocated from the default pool will not be moved to the scoped one until the next 
engine restart. The other way around, when a scoped MAC pool definition is removed, all MACs 
from this pool will be moved to the default one.


can you please elaborate on this one?

As for potential other scopes - I can think of cluster, VM pool and
logical network.

One more question - how do you know to return the MAC address to the
correct pool on delete?



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Steve Dainard
No alert in web ui, I restarted the VM yesterday just in case, no change. I
also restored an earlier snapshot and tried to re-snapshot, same result.


*Steve *


On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron d...@redhat.com wrote:

 This is the actual problem:

 bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22
 10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED:
 err = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/
 95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
 66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:
 error while creating qcow2: No such file or directory\n'; rc = 1

 from that you see the actual failure:

 bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
 10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone: can't
 clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/
 95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
 9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to
 /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/
 95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-
 e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1
 4a0dc6fb
 bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
 10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/volume.py, line 466, in create
 srcVolUUID, imgPath, volPath)
   File /usr/share/vdsm/storage/fileVolume.py, line 160, in _create
 volParent.clone(imgPath, volUUID, volFormat, preallocate)
   File /usr/share/vdsm/storage/volume.py, line 287, in clone
 raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
 CannotCloneVolume: Cannot clone volume: 'src=/rhev/data-center/
 9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-
 4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-
 964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7, dst=/rhev/data-cen
 ter/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-
 4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-
 964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: Error creating a new
 volume: ([Formatting \'/rhev/data-center/9497ef2c-8368-
 4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-
 467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/
 87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\', fmt=qcow2 size=21474836480
 backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa
 1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off
 cluster_size=65536 ],)'


 do you have any alert in the webadmin to restart the vm?

 Dafna


 On 04/22/2014 03:31 PM, Steve Dainard wrote:

 Sorry for the confusion.

 I attempted to take a live snapshot of a running VM. After that failed, I
 migrated the VM to another host, and attempted the live snapshot again
 without success, eliminating a single host as the cause of failure.

 Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is CentOS 6.5.

 Package versions:
 libvirt-0.10.2-29.el6_5.5.x86_64
 libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
 qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
 qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
 qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
 vdsm-4.13.3-4.el6.x86_64
 vdsm-gluster-4.13.3-4.el6.noarch


 I made another live snapshot attempt at 10:21 EST today, full vdsm.log
 attached, and a truncated engine.log.

 Thanks,

 *Steve
 *



 On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron d...@redhat.com mailto:
 d...@redhat.com wrote:

 please explain the flow of what you are trying to do, are you
 trying to live migrate the disk (from one storage to another), are
 you trying to migrate the vm and after vm migration is finished
 you try to take a live snapshot of the vm? or are you trying to
 take a live snapshot of the vm during a vm migration from host1 to
 host2?

 Please attach full vdsm logs from any host you are using (if you
 are trying to migrate the vm from host1 to host2) + please attach
 engine log.

 Also, what is the vdsm, libvirt and qemu versions, what ovirt
 version are you using and what is the storage you are using?

 Thanks,

 Dafna




 On 04/22/2014 02:12 PM, Steve Dainard wrote:

 I've attempted migrating the vm to another host and taking a
 snapshot, but I get this error:

 6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22
 01:09:37,296::volume::286::Storage.Volume::(clone)
 Volume.clone: can't clone:
 /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/
 95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-
 e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7
 to
 /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/
 95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-
 e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51

 A bit more of the vdsm log is attached.

 Other vm's are snapshotting without issue.



   

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

are you able to take an offline snapshot? (while the vm is down)
how many snapshots do you have on this vm?

On 04/22/2014 04:19 PM, Steve Dainard wrote:
No alert in web ui, I restarted the VM yesterday just in case, no 
change. I also restored an earlier snapshot and tried to re-snapshot, 
same result.


*Steve
*


On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22
10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume)
FAILED: err =

'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:
error while creating qcow2: No such file or directory\n'; rc = 1

from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone:
can't clone:

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7
to

/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1
4a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in create
srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160, in _create
volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
CannotCloneVolume: Cannot clone volume:

'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,
dst=/rhev/data-cen

ter/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:
Error creating a new volume: ([Formatting
\'/rhev/data-center/9497ef2c-8368-

4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\',
fmt=qcow2 size=21474836480
backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa
1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off
cluster_size=65536 ],)'


do you have any alert in the webadmin to restart the vm?

Dafna


On 04/22/2014 03:31 PM, Steve Dainard wrote:

Sorry for the confusion.

I attempted to take a live snapshot of a running VM. After
that failed, I migrated the VM to another host, and attempted
the live snapshot again without success, eliminating a single
host as the cause of failure.

Ovirt is 3.3.4, storage domain is gluster 3.4.2.1, OS is
CentOS 6.5.

Package versions:
libvirt-0.10.2-29.el6_5.5.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
vdsm-4.13.3-4.el6.x86_64
vdsm-gluster-4.13.3-4.el6.noarch


I made another live snapshot attempt at 10:21 EST today, full
vdsm.log attached, and a truncated engine.log.

Thanks,

*Steve
*



On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com wrote:

please explain the flow of what you are trying to do, are you
trying to live migrate the disk (from one storage to
another), are
you trying to migrate the vm and after vm migration is
finished
you try to take a live snapshot of the vm? or are you
trying to
take a live snapshot of the vm during a vm migration from
host1 to
host2?

Please attach full vdsm logs from any host you are using
(if you
are trying to migrate the vm from host1 to host2) + please
attach
engine log.

Also, what is the vdsm, libvirt and qemu versions, what ovirt
version are you using and what is the storage you are using?

Thanks,

Dafna




On 04/22/2014 02:12 PM, Steve Dainard wrote:

I've attempted migrating the vm to another host and
taking a
snapshot, but I get 

Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Dafna Ron

it's the same error:

c1d7c4e-392b-4a62-9836-3add1360a46d::DEBUG::2014-04-22 
12:13:44,340::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: 
err = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/0b2d15e5-bf4f-4eaf-90e2-f1bd51a3a936: error 
while creating qcow2: No such file or directory\n'; rc = 1



Were these 23 snapshots created anyway each time we failed to create a 
snapshot, or are these older snapshots which you actually created before 
the failure?


At this point my main theory is that somewhere along the line you had 
some sort of failure in your storage, and from that time on each snapshot 
you create fails.
If the snapshots were created during the failure, can you please delete 
the snapshots you do not need and try again?


There should not be a limit on how many snapshots you can have, since a 
snapshot is only a link changing the image the VM should boot from.
Having said that, it's not ideal to have that many snapshots and it can 
probably lead to unexpected results, so I would not recommend having that 
many snapshots on a single VM :)


For example, my second theory would be that because there are so many 
snapshots we hit some sort of race, where part of the createVolume 
command expects a result from a query run before the create itself, and 
because there are so many snapshots there is no such file on the volume 
because it's too far up the list.


Can you also run: ls -l 
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b

Let's see what images are listed under that VM.
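
In addition to the directory listing, one way to test the "missing backing file" 
theory is to walk the qcow2 backing chain by hand from the image directory, since 
the failing qemu-img command uses a relative backing_file path (../<image>/<volume>) 
that is resolved from there. A minimal sketch, reusing the paths from the log above; 
note that --backing-chain needs a reasonably recent qemu-img, so on EL6 you may have 
to run plain qemu-img info on each volume instead:

# cd into the image directory so relative backing_file paths resolve the same
# way they do for vdsm/qemu-img
cd /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b

# inspect the current leaf volume and note its "backing file" line
qemu-img info 1a67de4b-aa1c-4436-baca-ca55726d54d7

# newer qemu-img builds can print the whole chain in one go
qemu-img info --backing-chain 1a67de4b-aa1c-4436-baca-ca55726d54d7

# every backing file reported above should exist on the gluster mount; a
# missing one would explain "error while creating qcow2: No such file or
# directory"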

btw, you know that your export domain is getting 
StorageDomainDoesNotExist in the vdsm log? is that domain in up state? 
can you try to deactivate the export domain?


Thanks,

Dafna




On 04/22/2014 05:20 PM, Steve Dainard wrote:

Ominous..

23 snapshots. Is there an upper limit?

Offline snapshot fails as well. Both logs attached again (snapshot 
attempted at 12:13 EST).


*Steve *

On Tue, Apr 22, 2014 at 11:20 AM, Dafna Ron d...@redhat.com 
mailto:d...@redhat.com wrote:


are you able to take an offline snapshot? (while the vm is down)
how many snapshots do you have on this vm?


On 04/22/2014 04:19 PM, Steve Dainard wrote:

No alert in web ui, I restarted the VM yesterday just in case,
no change. I also restored an earlier snapshot and tried to
re-snapshot, same result.

*Steve
*



On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron d...@redhat.com
mailto:d...@redhat.com mailto:d...@redhat.com
mailto:d...@redhat.com wrote:

This is the actual problem:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22
   
10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume)

FAILED: err =
   
'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/4
   
66d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb:

error while creating qcow2: No such file or directory\n';
rc = 1

from that you see the actual failure:

bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::286::Storage.Volume::(clone)
Volume.clone:
can't clone:
   
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d
   
9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7

to
   
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee1

4a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22
10:21:49,392::volume::508::Storage.Volume::(create)
Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/volume.py, line 466, in
create
srcVolUUID, imgPath, volPath)
  File /usr/share/vdsm/storage/fileVolume.py, line 160,
in _create
volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File /usr/share/vdsm/storage/volume.py, line 287, in clone
raise se.CannotCloneVolume(self.volumePath, dst_path,
str(e))
CannotCloneVolume: Cannot clone volume:
   
'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,

dst=/rhev/data-cen
   

[ovirt-users] Ovirt user experience survey

2014-04-22 Thread Malini Rao
Hello everyone!

We would like to collect feedback on how Ovirt meets your functional and 
usability goals so that it can become the basis for future improvements. Here 
is a short survey that I hope you will take the time to complete. 

Survey link - http://www.keysurvey.com/f/609310/5d3f/

This is a high level survey and if you would like to talk more or stay involved 
in user experience design activities or even send any sketches for that idea 
that has been brewing in your head for a while, then please reach out to me or 
send a message to this list with 'User experience:' prefixing your subject line.

Your opinion is valuable and we hope to hear your voice through your survey 
response :) Please respond to this survey by May 1, 2014. 

Thanks in advance,

Malini
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] difference between thin/depentend and clone/dependent vm virtual machine

2014-04-22 Thread Tamer Lima
hello,

I am in trouble

I have 3 servers dedicated to test OVIRT:
01- engine  + vdsm   (8 cpus,  32GB ram , 2TB HD)
02 - vdsm   (8 cpus,  32GB ram , 2TB HD)
03 - vdsm   (8 cpus,  32GB ram , 2TB HD)

I want to create cloned virtual machines, but in my configuration I can only
save virtual machines on server 01; my configuration refers to a DATA DOMAIN
on server 01.

All my virtual machines are 2 CPUs, 6 GB RAM, 500 GB HD, and were created
as CLONE.

My server 01 hosts the data domain, and every new virtual machine is created,
via NFS, on server 01, which has a 2 TB maximum capacity (the same size as
partition /sda3 = 2 TB).

How can I save each virtual machine on a desired vdsm server?

What I want is:
server 01 -   engine + vdsm  :03 virtual machines running and hosted
physically on this host
server 02 -   vdsm  :04 virtual machines running and hosted physically
on this host
server 03 -   vdsm  :04 virtual machines running and hosted physically
on this host

But I have this:
server 01 -   engine + vdsm  :03 virtual machines running and hosted
physically on this host
server 02 -   vdsm  :01 virtual machine running on this server BUT
hosted physically on server 01
server 03 -   vdsm  :none, because my DATA DOMAIN IS FULL   (2 TB)

How do I solve this problem?
Is it possible to create one DATA DOMAIN for each VDSM host? I think this
is the solution, but I do not know how to point VMs to be saved on a specific
data domain.

thanks




On Fri, Apr 18, 2014 at 4:48 AM, Michal Skrivanek 
michal.skriva...@redhat.com wrote:


 On Apr 17, 2014, at 16:43 , Tamer Lima tamer.amer...@gmail.com wrote:

  hi,  thanks for reply
 
  I am investigating what is and how thin virtualization works
 
  Do you know if  HADOOP is indicated to work under thin environment ?
  On Hadoop I will put large workloads  and this  thin virtualization
  utilizes more resources than exists (shareable environment)
  that is,
  if I have a real physical necessity of 500gb for each hadoop host  and
 my Thin Virtualization has  2TB on NFS,  I can have only 4 virtual machines
  (500GB each), or less.
 
  For this case I believe clone virtual machine is the right choice. But
 in my environment it takes 1h30m to build one cloned virtual machine.

 if you plan to overcommit then go with thin. The drawback is of course that
 if you hit the physical limit the VMs will run out of space...
 if you plan to allocate 500GB each, consume all of it, and never plan to
 grow, then go with the clone... yes, it's going to take time to write all
 that stuff. With thin you need to do the same amount of writes, but
 gradually over time while you're allocating it.

 hope it helps

 Thanks,
 michal
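
To make Michal's thin vs. clone/preallocated distinction above a bit more concrete, 
here is a minimal sketch with plain qemu-img and fallocate (the file names are made 
up, the 500G figure just matches the disks discussed above, and oVirt of course 
drives all of this through VDSM rather than by hand):

# "thin": a sparse qcow2 that advertises 500G but only consumes space as the
# guest writes to it -- creation is near-instant, which is roughly why the
# thin pool VMs above appeared in about a minute
qemu-img create -f qcow2 thin-disk.qcow2 500G
ls -lhs thin-disk.qcow2   # apparent size 500G, actual usage only a few hundred KB

# "preallocated"/clone-like: the full 500G is reserved up front, so creating
# (or fully copying) the disk has to touch the whole allocation and takes
# much longer
fallocate -l 500G preallocated-disk.img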

 
 
 
  Am I correct ?
 
 
 
 
 
  On Thu, Apr 17, 2014 at 7:33 AM, Michal Skrivanek 
 michal.skriva...@redhat.com wrote:
 
  On Apr 16, 2014, at 16:41 , Tamer Lima tamer.amer...@gmail.com wrote:
 
  
  
   Hello,
  
   I created VMs by two ways :
  
   1)  on tab virtual machines  new vm   template (centos_65_64bits)
   1.1 configuration : I do not select stateless checkbox
   1.2 this process takes a 1h30 to create each machine.
  
   2)  on tab pools  new vm   template (centos_65_64bits)
   2.1 default configuration : stateless
   2.2 Here I created 3 virtual machines at once
   2.3 this process takes only one minute
  
   On the tab virtual machines I can see all virtual machines.
   Pooled machines have different icon image
   and description is different too:
  
   machines generated from tab VM  are described as clone/dependent
   - clone is a phisical copy?
   machines generated from tab POOL are described as thin/independent
   - thin is a just a  reference to template vm ? what is phisical? any
 configuration file?
 
  yeah, sort of.
  just google thin provisioning in general:)
 
 
  
  
   In practice, what is the difference between these machines ?
  
  
  
  
   http://www.ovirt.org/Features/PrestartedVm
   Today there are 2 types of Vm pools:
 • Manual - the Vm is supposed to be manually returned to the
 pool. In practice, this is not really entirely supported.
 • Automatic - once the user shuts down the Vm - it returns to
 the pool (stateless).
  
all vm created from pool  are stateless ?
 
  the automatic pool, yes
 
  Thanks,
  michal
 
  
  
   thanks
  
  
  
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
 
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt snapshot failing on one VM

2014-04-22 Thread Steve Dainard
All snapshots are from before the failure.

That's a bit scary that there may be a 'too many snapshots' issue. I take
snapshots for point-in-time consistency, and without the ability to collapse
them while the VM is running I'm not sure what the best option is here. What
is the recommended snapshot limit? Or maybe a better question: what's the
intended use case for snapshots in oVirt?

Export domain is currently unavailable, and without it active I can't
disable it properly.

# ls -tl
/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
total 8517740
-rw-rw. 1 vdsm kvm97583104 Apr 22 14:03
1a67de4b-aa1c-4436-baca-ca55726d54d7
-rw-r--r--. 1 vdsm kvm 268 Apr 22 12:13
1a67de4b-aa1c-4436-baca-ca55726d54d7.meta
-rw-r--r--. 1 vdsm kvm 272 Apr 22 01:06
87390b64-becd-4a6f-a4fc-d27655f59b64.meta
-rw-rw. 1 vdsm kvm 1048576 Apr 22 01:04
1a67de4b-aa1c-4436-baca-ca55726d54d7.lease
-rw-rw. 1 vdsm kvm   107413504 Apr 20 22:00
87390b64-becd-4a6f-a4fc-d27655f59b64
-rw-rw. 1 vdsm kvm   104267776 Apr 19 22:00
6f9fd451-6c82-4390-802c-9e23a7d89427
-rw-rw. 1 vdsm kvm 1048576 Apr 19 22:00
87390b64-becd-4a6f-a4fc-d27655f59b64.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 19 22:00
6f9fd451-6c82-4390-802c-9e23a7d89427.meta
-rw-rw. 1 vdsm kvm   118358016 Apr 18 22:00
c298ce3b-ec6a-4526-9971-a769f4d3d69b
-rw-rw. 1 vdsm kvm 1048576 Apr 18 22:00
6f9fd451-6c82-4390-802c-9e23a7d89427.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 18 22:00
c298ce3b-ec6a-4526-9971-a769f4d3d69b.meta
-rw-rw. 1 vdsm kvm   120913920 Apr 17 22:00
0ee58208-6be8-4f81-bd51-0bd4b6d5d83a
-rw-rw. 1 vdsm kvm 1048576 Apr 17 22:00
c298ce3b-ec6a-4526-9971-a769f4d3d69b.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 17 22:00
0ee58208-6be8-4f81-bd51-0bd4b6d5d83a.meta
-rw-rw. 1 vdsm kvm   117374976 Apr 16 22:00
9aeb973d-9a54-441e-9ce9-f4f1a233da26
-rw-rw. 1 vdsm kvm 1048576 Apr 16 22:00
0ee58208-6be8-4f81-bd51-0bd4b6d5d83a.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 16 22:00
9aeb973d-9a54-441e-9ce9-f4f1a233da26.meta
-rw-rw. 1 vdsm kvm   110886912 Apr 15 22:00
0eae2185-884a-44d3-9099-e952b6b7ec37
-rw-rw. 1 vdsm kvm 1048576 Apr 15 22:00
9aeb973d-9a54-441e-9ce9-f4f1a233da26.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 15 22:00
0eae2185-884a-44d3-9099-e952b6b7ec37.meta
-rw-rw. 1 vdsm kvm 1048576 Apr 14 22:00
0eae2185-884a-44d3-9099-e952b6b7ec37.lease
-rw-rw. 1 vdsm kvm   164560896 Apr 14 22:00
ceffc643-b823-44b3-961e-93f3dc971886
-rw-r--r--. 1 vdsm kvm 272 Apr 14 22:00
ceffc643-b823-44b3-961e-93f3dc971886.meta
-rw-rw. 1 vdsm kvm 1048576 Apr 13 22:00
ceffc643-b823-44b3-961e-93f3dc971886.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 13 22:00
878fc690-ab08-489c-955b-9159f62026b1.meta
-rw-rw. 1 vdsm kvm   109182976 Apr 13 21:59
878fc690-ab08-489c-955b-9159f62026b1
-rw-rw. 1 vdsm kvm   110297088 Apr 12 22:00
5210eec2-a0eb-462e-95d5-7cf27db312f5
-rw-rw. 1 vdsm kvm 1048576 Apr 12 22:00
878fc690-ab08-489c-955b-9159f62026b1.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 12 22:00
5210eec2-a0eb-462e-95d5-7cf27db312f5.meta
-rw-rw. 1 vdsm kvm76480512 Apr 11 22:00
dcce0903-0f24-434b-9d1c-d70e3969e5ea
-rw-rw. 1 vdsm kvm 1048576 Apr 11 22:00
5210eec2-a0eb-462e-95d5-7cf27db312f5.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 11 22:00
dcce0903-0f24-434b-9d1c-d70e3969e5ea.meta
-rw-rw. 1 vdsm kvm 1048576 Apr 11 12:34
dcce0903-0f24-434b-9d1c-d70e3969e5ea.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 11 12:34
d3a1c505-8f6a-4c2b-97b7-764cd5baea47.meta
-rw-rw. 1 vdsm kvm   20824 Apr 11 12:33
d3a1c505-8f6a-4c2b-97b7-764cd5baea47
-rw-rw. 1 vdsm kvm14614528 Apr 10 16:12
638c2164-2edc-4294-ac99-c51963140940
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:12
d3a1c505-8f6a-4c2b-97b7-764cd5baea47.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:12
638c2164-2edc-4294-ac99-c51963140940.meta
-rw-rw. 1 vdsm kvm12779520 Apr 10 16:06
f8f1f164-c0d9-4716-9ab3-9131179a79bd
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:05
638c2164-2edc-4294-ac99-c51963140940.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:05
f8f1f164-c0d9-4716-9ab3-9131179a79bd.meta
-rw-rw. 1 vdsm kvm92995584 Apr 10 16:00
f9b14795-a26c-4edb-ae34-22361531a0a1
-rw-rw. 1 vdsm kvm 1048576 Apr 10 16:00
f8f1f164-c0d9-4716-9ab3-9131179a79bd.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 16:00
f9b14795-a26c-4edb-ae34-22361531a0a1.meta
-rw-rw. 1 vdsm kvm30015488 Apr 10 14:57
39cbf947-f084-4e75-8d6b-b3e5c32b82d6
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:57
f9b14795-a26c-4edb-ae34-22361531a0a1.lease
-rw-r--r--. 1 vdsm kvm 272 Apr 10 14:57
39cbf947-f084-4e75-8d6b-b3e5c32b82d6.meta
-rw-rw. 1 vdsm kvm19267584 Apr 10 14:34
3ece1489-9bff-4223-ab97-e45135106222
-rw-rw. 1 vdsm kvm 1048576 Apr 10 14:34
39cbf947-f084-4e75-8d6b-b3e5c32b82d6.lease
-rw-r--r--. 1 vdsm kvm 

Re: [ovirt-users] Hosted Engine error -243

2014-04-22 Thread Doron Fediuck


- Original Message -
 From: Kevin Tibi kevint...@hotmail.com
 To: users users@ovirt.org
 Sent: Tuesday, April 22, 2014 2:12:50 PM
 Subject: [ovirt-users] Hosted Engine error -243
 
 Hi all,
 
 I have a probleme with my hosted engine. Every 10 min i have a event in
 engine :
 
 VM HostedEngine is down. Exit message: internal error Failed to acquire lock:
 error -243
 
 My data is a local export NFS.
 
 Thx for you help.
 
 Kevin.
 

Hi Kevin,
can you please check the /var/log/ovirt-hosted-* log files on your hosts
and let us know if you see anything else there or in your vdsm log file?
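
For example, something along these lines on the host that runs the engine VM 
(log paths are the defaults; adjust if yours differ). The "error -243" in the 
event is a failure to acquire the VM lease, so sanlock's own view of its 
lockspaces is worth checking as well:

# recent errors/lock messages from the hosted-engine HA daemons
grep -iE 'error|lock' /var/log/ovirt-hosted-engine-ha/agent.log | tail -n 50
grep -iE 'error|lock' /var/log/ovirt-hosted-engine-ha/broker.log | tail -n 50

# what sanlock itself reports about its lockspaces and resources
sanlock client status
tail -n 50 /var/log/sanlock.log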
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] hosted engine health check issues

2014-04-22 Thread Doron Fediuck
Hi René,
any idea what closed your ovirtmgmt bridge?
As long as it is down, vdsm may have issues starting up properly,
and this is why you see the complaints about the JSON RPC server.

Can you try manually fixing the network part first and then
restarting vdsm?
Once vdsm is happy, the hosted engine VM will start.
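
A rough outline of that, assuming a standard ovirtmgmt bridge setup on EL6 (the 
exact recovery step depends on how the bridge and its uplink are configured on 
this particular host):

# check whether the management bridge is up and still has its address
ip addr show ovirtmgmt
brctl show ovirtmgmt

# bring it back up (or fix the underlying NIC/ifcfg files first, as needed)
ifup ovirtmgmt

# once the bridge is healthy, restart vdsm and the HA services so they can
# reconnect to storage and start the hosted engine VM again
service vdsmd restart
service ovirt-ha-broker restart
service ovirt-ha-agent restart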

- Original Message -
 From: René Koch rk...@linuxland.at
 To: Martin Sivak msi...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, April 22, 2014 1:46:38 PM
 Subject: Re: [ovirt-users] hosted engine health check issues
 
 Hi,
 
 I rebooted one of my ovirt hosts today and the result is now that I
 can't start hosted-engine anymore.
 
 ovirt-ha-agent isn't running because the lockspace file is missing
 (sanlock complains about it).
 So I tried to start hosted-engine with --vm-start and I get the
 following errors:
 
 == /var/log/sanlock.log ==
 2014-04-22 12:38:17+0200 654 [3093]: r2 cmd_acquire 2,9,5733 invalid
 lockspace found -1 failed 0 name 2851af27-8744-445d-9fb1-a0d083c8dc82
 
 == /var/log/messages ==
 Apr 22 12:38:17 ovirt-host02 sanlock[3079]: 2014-04-22 12:38:17+0200 654
 [3093]: r2 cmd_acquire 2,9,5733 invalid lockspace found -1 failed 0 name
 2851af27-8744-445d-9fb1-a0d083c8dc82
 Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port 2(vnet0) entering
 disabled state
 Apr 22 12:38:17 ovirt-host02 kernel: device vnet0 left promiscuous mode
 Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port 2(vnet0) entering
 disabled state
 
 == /var/log/vdsm/vdsm.log ==
 Thread-21::DEBUG::2014-04-22
 12:38:17,563::libvirtconnection::124::root::(wrapper) Unknown
 libvirterror: ecode: 38 edom: 42 level: 2 message: Failed to acquire
 lock: No space left on device
 Thread-21::DEBUG::2014-04-22
 12:38:17,563::vm::2263::vm.Vm::(_startUnderlyingVm)
 vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::_ongoingCreations released
 Thread-21::ERROR::2014-04-22
 12:38:17,564::vm::2289::vm.Vm::(_startUnderlyingVm)
 vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start process failed
 Traceback (most recent call last):
File /usr/share/vdsm/vm.py, line 2249, in _startUnderlyingVm
  self._run()
File /usr/share/vdsm/vm.py, line 3170, in _run
  self._connection.createXML(domxml, flags),
File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py,
 line 92, in wrapper
  ret = f(*args, **kwargs)
File /usr/lib64/python2.6/site-packages/libvirt.py, line 2665, in
 createXML
  if ret is None:raise libvirtError('virDomainCreateXML() failed',
 conn=self)
 libvirtError: Failed to acquire lock: No space left on device
 
 == /var/log/messages ==
 Apr 22 12:38:17 ovirt-host02 vdsm vm.Vm ERROR
 vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start process
 failed#012Traceback (most recent call last):#012  File
 /usr/share/vdsm/vm.py, line 2249, in _startUnderlyingVm#012
 self._run()#012  File /usr/share/vdsm/vm.py, line 3170, in _run#012
   self._connection.createXML(domxml, flags),#012  File
 /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 92,
 in wrapper#012ret = f(*args, **kwargs)#012  File
 /usr/lib64/python2.6/site-packages/libvirt.py, line 2665, in
 createXML#012if ret is None:raise libvirtError('virDomainCreateXML()
 failed', conn=self)#012libvirtError: Failed to acquire lock: No space
 left on device
 
 == /var/log/vdsm/vdsm.log ==
 Thread-21::DEBUG::2014-04-22
 12:38:17,569::vm::2731::vm.Vm::(setDownStatus)
 vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::Changed state to Down:
 Failed to acquire lock: No space left on device
 
 
 "No space left on device" is nonsense as there is enough space (I had this
 issue last time as well, where I had to patch machine.py, but this file
 is now Python 2.6.6 compatible).
 
 Any idea what prevents hosted-engine from starting?
 ovirt-ha-broker, vdsmd and sanlock are running btw.
 
 Btw, I can see in log that json rpc server module is missing - which
 package is required for CentOS 6.5?
 Apr 22 12:37:14 ovirt-host02 vdsm vds WARNING Unable to load the json
 rpc server module. Please make sure it is installed.
 
 
 Thanks,
 René
 
 
 
 On 04/17/2014 10:02 AM, Martin Sivak wrote:
  Hi,
 
  How can I disable notifications?
 
  The notification is configured in /etc/ovirt-hosted-engine-ha/broker.conf
  section notification.
  The email is sent when the key state_transition exists and the string
  OldState-NewState contains the (case insensitive) regexp from the value.
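
Purely as an illustration of what Martin describes (the section and key names 
below follow his description; check your own broker.conf for the actual 
defaults), inspecting and silencing the state-change mails could look roughly 
like this:

# show the notification section of the broker config
grep -A 5 '^\[notification' /etc/ovirt-hosted-engine-ha/broker.conf

# the state_transition value is a regexp matched (case-insensitively) against
# the "OldState-NewState" string, e.g. something along the lines of:
#   [notification]
#   state_transition = maintenance|start|stop|migrate|up|down
# pointing it at a value that never matches (or removing the key) effectively
# disables those mails; restart the broker afterwards so it rereads the file
service ovirt-ha-broker restart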
 
  Is it intended to send out these messages and detect that ovirt engine
  is down (which is false anyway), but not to restart the vm?
 
  Forget about emails for now and check the
  /var/log/ovirt-hosted-engine-ha/agent.log and broker.log (and attach them
  as well btw).
 
  oVirt hosts think that hosted engine is down because it seems that hosts
  can't write to hosted-engine.lockspace due to glusterfs issues (or at
  least I think so).
 
  The hosts think so or can't really write there? The lockspace is managed by
  sanlock and our HA daemons do not touch it at all. We only 

[ovirt-users] [ACTION REQUESTED] please review new oVirt look-and-feel patch

2014-04-22 Thread Greg Sheremeta
Hi,

A while back, I sent out an email for the new oVirt look-and-feel feature [1]. 
The new look and feel patch is ready for both code and UI review. At this point 
we're not looking for design review, although you are welcome to suggest design 
improvements. I'm mostly looking for help regression testing the entire UI.

Especially if you're an oVirt *UI* developer, please download the patch and try 
it [2]. The patch introduces some new CSS globally, and I had to adjust many 
screens and dialogs. It's quite possible that there are dialogs that I don't 
know about, so if you maintain any especially-hidden dialogs, please try the 
patch and verify your dialogs look good.

And you can also just try out the patch to see the amazing new look and feel!

Thanks for your feedback.

Greg

[1] http://www.ovirt.org/Features/NewLookAndFeelPatternFlyPhase1
[2] http://gerrit.ovirt.org/#/c/24594/
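
If you have not pulled a change from Gerrit before, something like the following 
should work from an existing ovirt-engine checkout whose origin remote points at 
gerrit.ovirt.org (the trailing patchset number is just an example -- use the 
latest one shown on the change page [2]):

# fetch and check out change 24594 (patchset 1 shown here as an example)
git fetch origin refs/changes/94/24594/1
git checkout -b new-look-and-feel FETCH_HEAD

# then build and run the engine / GWT debug environment as you normally would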

Greg Sheremeta
Red Hat, Inc.
Sr. Software Engineer, RHEV
Cell: 919-807-1086
gsher...@redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Fwd: [RHSA-2014:0421-01] Moderate: qemu-kvm-rhev security update

2014-04-22 Thread Sven Kieske
Hi list,

if you do not already monitor some security lists:

You are strongly encouraged to update your qemu-kvm
packages, especially on CentOS :)

See below for details
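
On a CentOS/oVirt node the update itself is just a yum transaction; the package 
names below match the qemu-kvm-rhev builds mentioned earlier in this digest, so 
adjust them to whatever your repository actually ships. Per the advisory below, 
running guests keep using the old binary until they are shut down and started 
again:

yum update qemu-kvm-rhev qemu-kvm-rhev-tools qemu-img-rhev

# then shut down and start the virtual machines (from the engine) so they
# pick up the fixed qemu binary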


 Original Message 
Subject: [RHSA-2014:0421-01] Moderate: qemu-kvm-rhev security update
Date: Tue, 22 Apr 2014 17:52:38 +
From: bugzi...@redhat.com
To: rhsa-annou...@redhat.com, rhev-watch-l...@redhat.com

=
   Red Hat Security Advisory

Synopsis:  Moderate: qemu-kvm-rhev security update
Advisory ID:   RHSA-2014:0421-01
Product:   Red Hat Enterprise Virtualization
Advisory URL:  https://rhn.redhat.com/errata/RHSA-2014-0421.html
Issue date:2014-04-22
CVE Names: CVE-2014-0142 CVE-2014-0143 CVE-2014-0144
   CVE-2014-0145 CVE-2014-0146 CVE-2014-0147
   CVE-2014-0148 CVE-2014-0150
=

1. Summary:

Updated qemu-kvm-rhev packages that fix several security issues are now
available for Red Hat Enterprise Virtualization.

The Red Hat Security Response Team has rated this update as having
Moderate
security impact. Common Vulnerability Scoring System (CVSS) base scores,
which give detailed severity ratings, are available for each vulnerability
from the CVE links in the References section.

2. Relevant releases/architectures:

RHEV Agents (vdsm) - x86_64

3. Description:

KVM (Kernel-based Virtual Machine) is a full virtualization solution for
Linux on AMD64 and Intel 64 systems. The qemu-kvm-rhev package
provides the
user-space component for running virtual machines using KVM in
environments
managed by Red Hat Enterprise Virtualization Manager.

Multiple integer overflow, input validation, logic error, and buffer
overflow flaws were discovered in various QEMU block drivers. An attacker
able to modify a disk image file loaded by a guest could use these
flaws to
crash the guest, or corrupt QEMU process memory on the host, potentially
resulting in arbitrary code execution on the host with the privileges of
the QEMU process. (CVE-2014-0143, CVE-2014-0144, CVE-2014-0145,
CVE-2014-0147)

A buffer overflow flaw was found in the way the virtio_net_handle_mac()
function of QEMU processed guest requests to update the table of MAC
addresses. A privileged guest user could use this flaw to corrupt QEMU
process memory on the host, potentially resulting in arbitrary code
execution on the host with the privileges of the QEMU process.
(CVE-2014-0150)

A divide-by-zero flaw was found in the seek_to_sector() function of the
parallels block driver in QEMU. An attacker able to modify a disk image
file loaded by a guest could use this flaw to crash the guest.
(CVE-2014-0142)

A NULL pointer dereference flaw was found in the QCOW2 block driver in
QEMU. An attacker able to modify a disk image file loaded by a guest could
use this flaw to crash the guest. (CVE-2014-0146)

It was found that the block driver for Hyper-V VHDX images did not
correctly calculate BAT (Block Allocation Table) entries due to a missing
bounds check. An attacker able to modify a disk image file loaded by a
guest could use this flaw to crash the guest. (CVE-2014-0148)

The CVE-2014-0143 issues were discovered by Kevin Wolf and Stefan Hajnoczi
of Red Hat, the CVE-2014-0144 issues were discovered by Fam Zheng, Jeff
Cody, Kevin Wolf, and Stefan Hajnoczi of Red Hat, the CVE-2014-0145 issues
were discovered by Stefan Hajnoczi of Red Hat, the CVE-2014-0150 issue was
discovered by Michael S. Tsirkin of Red Hat, the CVE-2014-0142,
CVE-2014-0146, and CVE-2014-0147 issues were discovered by Kevin Wolf of
Red Hat, and the CVE-2014-0148 issue was discovered by Jeff Cody of
Red Hat.

All users of qemu-kvm-rhev are advised to upgrade to these updated
packages, which contain backported patches to correct these issues. After
installing this update, shut down all running virtual machines. Once all
virtual machines have shut down, start them again for this update to take
effect.

4. Solution:

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

This update is available via the Red Hat Network. Details on how to
use the Red Hat Network to apply this update are available at
https://access.redhat.com/site/articles/11258

5. Bugs fixed (https://bugzilla.redhat.com/):

1078201 - CVE-2014-0142 qemu: crash by possible division by zero
1078212 - CVE-2014-0148 Qemu: vhdx: bounds checking for block_size and
logical_sector_size
1078232 - CVE-2014-0146 Qemu: qcow2: NULL dereference in qcow2_open()
error path
1078846 - CVE-2014-0150 qemu: virtio-net: buffer overflow in
virtio_net_handle_mac() function
1078848 - CVE-2014-0147 Qemu: block: possible crash due signed types or
logic error
1078885 - CVE-2014-0145 Qemu: prevent possible buffer overflows
1079140 - CVE-2014-0143 Qemu: block: multiple integer overflow flaws
1079240 -