Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Николаев Алексей
Hi, community!

15.12.2016, 20:16, "Yaniv Dary" wrote:
> It may work, but we don't support 3.6 for a while and I'm not sure what
> issues you may encounter.

We are using oVirt engine 3.6. How can I prevent my hosts on CentOS 7.2 from
being upgraded to the unsupported 7.3? Thx.

> It is safer to upgrade the engine first and the hosts later. 4.0 supports
> 3.6 compatibility, just make sure all clusters are 3.6 with el7 prior to
> updating.
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Ramesh Nachimuthu




- Original Message -
> From: "Giuseppe Ragusa" 
> To: "Ramesh Nachimuthu" 
> Cc: users@ovirt.org
> Sent: Friday, December 16, 2016 2:42:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring 
> GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> Giuseppe Ragusa shared a OneDrive file. To view it, click the following
> link.
> 
> 
> 
> 
> vols.tar.gz
> 
> 
> 
> From: Ramesh Nachimuthu
> Sent: Monday, December 12, 2016 09:32
> To: Giuseppe Ragusa
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> 
> On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > Hi all,
> >
> > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on
> > CentOS 7.2):
> >
> >  From /var/log/messages:
> >
> > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
> >     res = method(**params)
> >   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> >     result = fn(*methodArgs)
> >   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> >     return self._gluster.volumeStatus(volumeName, brick, statusOption)
> >   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> >     rv = func(*args, **kwargs)
> >   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> >     statusOption)
> >   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> >     return callMethod()
> >   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> >     **kwargs)
> >   File "<string>", line 2, in glusterVolumeStatus
> >   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
> >     raise convert_to_error(kind, result)
> > KeyError: 'device'
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine
> > VM OVF from the OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > path:
> > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > an OVF for HE VM, trying to convert
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > vm.conf from OVF_STORE
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state
> > EngineUp (score: 3400)
> > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote
> > host read.mgmt.private (id: 2, score: 3400)
> > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
> >     res = method(**params)
> >   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> >     result = fn(*methodArgs)
> >   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> >     return self._gluster.volumeStatus(volumeName, brick, statusOption)
> >   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> >     rv = func(*args, **kwargs)
> >   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> >     statusOption)
> >   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> >     return callMethod()
> >   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> >     **kwargs)
> >   File "<string>", line 2, in glusterVolumeStatus
> >   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
> >     raise convert_to_error(kind, result)
> > KeyError: 'device'
> > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > established
> > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > closed
> > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > established

Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Richard Chan
Thanks - I have one more question, about overlapping repositories from oVirt
proper and the CentOS Virt SIG.

ovirt-4.0.repo:

A. http://resources.ovirt.org/pub/ovirt-4.0/rpm/el7/ (from mirrorlist)

ovirt-4.0-dependencies.repo:

B. http://mirror.centos.org/centos/7/virt/$basearch/ovirt-4.0/

For auditing purposes, is the following understanding correct?
a. B is a subset of A (meant only for host/node).
b. B RPMs are identical to A RPMs, except signed with a different key.
c. A default installation (i.e. without editing the *.repo files) will pull
RPMs from oVirt rather than CentOS.
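For point (b), one way to audit this yourself is to compare the GPG signatures on the same RPM downloaded from each repository. A minimal sketch, assuming `yum-utils` is installed; the `sig_keyid` helper and the key ID format are illustrative, not part of either repo's tooling:

```shell
# Show which repo an installed package actually came from (yum-utils):
#   yumdb get from_repo qemu-kvm-ev
# Print the signing key of a downloaded RPM:
#   rpm -qp --qf '%{NAME}: %{SIGPGP:pgpsig}\n' package.rpm
# Helper: extract the short (8-hex-digit) key ID from rpm's pgpsig output,
# so the keys used by repo A and repo B can be compared mechanically.
sig_keyid() {
    sed -n 's/.*Key ID \([0-9a-f]*\).*/\1/p' | tail -c 9
}
```

Running the `rpm -qp` line on the same NEVRA from A and from B and piping each through `sig_keyid` should show matching packages but different key IDs if understanding (b) is correct.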



Richard




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Import Fails [Solved]

2016-12-15 Thread Gary Pedretty
Solved - it must have just been a bad export to begin with. I repeated the
export and then did the import without any issues, and deleted the bad export
without issue.

thanks

gary



Gary Pedretty, g...@ravnalaska.net
Systems Manager, www.flyravn.com
Ravn Alaska - Serving All of Alaska
5245 Airport Industrial Road, Fairbanks, Alaska 99709
907-450-7251 / 907-450-7238 (fax)
Second greatest commandment: “Love your neighbor as yourself” Matt 22:39
Really loving the record green up date! Summer!!




> On Dec 15, 2016, at 3:06 PM, Gary Pedretty  wrote:
> 
> I cannot seem to get a VM to import.  It was exported first to an export
> domain and shows up in the list of VMs on the domain when starting the import
> process, but when you get to the last dialog that should start the import,
> you simply get an error message: “Cannot import the VM. VM’s Image does not
> exist.” No related event is logged in the overall event log when this
> happens. It does not matter what data storage domain you pick for the new VM.
> 
> 
> Gary
> 
> 
> Gary Pedretty, g...@ravnalaska.net
> Systems Manager, www.flyravn.com
> Ravn Alaska - Serving All of Alaska
> 5245 Airport Industrial Road, Fairbanks, Alaska 99709
> 907-450-7251 / 907-450-7238 (fax)
> Second greatest commandment: “Love your neighbor as yourself” Matt 22:39
> Really loving the record green up date! Summer!!
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM Import Fails

2016-12-15 Thread Gary Pedretty
I cannot seem to get a VM to import.  It was exported first to an export
domain and shows up in the list of VMs on the domain when starting the import
process, but when you get to the last dialog that should start the import,
you simply get an error message: “Cannot import the VM. VM’s Image does not
exist.” No related event is logged in the overall event log when this
happens. It does not matter what data storage domain you pick for the new VM.


Gary


Gary Pedretty, g...@ravnalaska.net
Systems Manager, www.flyravn.com
Ravn Alaska - Serving All of Alaska
5245 Airport Industrial Road, Fairbanks, Alaska 99709
907-450-7251 / 907-450-7238 (fax)
Second greatest commandment: “Love your neighbor as yourself” Matt 22:39
Really loving the record green up date! Summer!!













___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

2016-12-15 Thread Giuseppe Ragusa
Giuseppe Ragusa shared a OneDrive file. To view it, click the following
link.




vols.tar.gz



From: Ramesh Nachimuthu
Sent: Monday, December 12, 2016 09:32
To: Giuseppe Ragusa
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> Hi all,
>
> I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7 
> GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all on 
> CentOS 7.2):
>
>  From /var/log/messages:
>
> Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
>     res = method(**params)
>   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
>     result = fn(*methodArgs)
>   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
>     return self._gluster.volumeStatus(volumeName, brick, statusOption)
>   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
>     statusOption)
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in glusterVolumeStatus
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> KeyError: 'device'
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting Engine VM 
> OVF from the OVF_STORE
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume path: 
> /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found an 
> OVF for HE VM, trying to convert
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got 
> vm.conf from OVF_STORE
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current state 
> EngineUp (score: 3400)
> Dec  9 15:27:47 shockley ovirt-ha-agent: 
> INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best remote host 
> read.mgmt.private (id: 2, score: 3400)
> Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR Internal server error
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in _serveRequest
>     res = method(**params)
>   File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
>     result = fn(*methodArgs)
>   File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
>     return self._gluster.volumeStatus(volumeName, brick, statusOption)
>   File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
>     rv = func(*args, **kwargs)
>   File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
>     statusOption)
>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>     return callMethod()
>   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
>     **kwargs)
>   File "<string>", line 2, in glusterVolumeStatus
>   File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
>     raise convert_to_error(kind, result)
> KeyError: 'device'
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> established
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> closed
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> established
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> closed
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> established
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection 
> closed
> Dec  9 15:27:48 shockley ovirt-ha-broker: 
> 
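For reference, the KeyError above is raised while vdsm parses the output of the gluster CLI: the volumeStatus path indexes each brick's status details by a 'device' field, so a brick entry whose XML lacks a <device> element would trigger exactly this. A hedged way to inspect the raw output on one host; the volume name is an example taken from the log paths, and the helper is illustrative:

```shell
# Count <device> elements in a `gluster volume status ... detail --xml`
# dump read from stdin; if the count is lower than the number of bricks,
# vdsm's unconditional 'device' lookup will fail with this KeyError.
count_device_tags() {
    grep -c '<device>'
}
# On a gluster host (volume name is an example):
#   gluster volume status enginedomain detail --xml | count_device_tags
```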

[ovirt-users] oVirt multiips hook

2016-12-15 Thread Bill Bill
Hello,

Following up on the users list as opposed to Bugzilla.

Thanks for helping out with this, much appreciated. I was able to get the
custom property added in the engine, and I can select the property and enter
the IPs.

I’m not sure if I created the hook correctly, as it doesn’t appear to have
made any changes so far; only one IP communicates.

I created a file called “multiips” in the 
/usr/libexec/vdsm/hooks/before_vm_start/ directory containing the info from the 
Bugzilla thread.

Is there another step I should take or perhaps I’m missing something?
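For what it's worth, before_vm_start VDSM hooks are plain executables: they receive custom properties as environment variables and the path to the libvirt domain XML in $_hook_domxml, and the file must be executable or it may be skipped. A minimal sketch under those assumptions (variable names follow the usual VDSM hook conventions; the real multiips hook from the Bugzilla thread would edit the XML rather than just log):

```shell
#!/bin/bash
# Minimal before_vm_start hook sketch, e.g. saved as
# /usr/libexec/vdsm/hooks/before_vm_start/multiips and made executable.
multiips_hook() {
    # $multiips carries the custom property value; $_hook_domxml points at
    # the libvirt domain XML a real hook would modify in place.
    if [ -n "$multiips" ] && [ -n "$_hook_domxml" ]; then
        echo "multiips hook invoked with: $multiips" >&2
    fi
    return 0
}
multiips_hook
```

If nothing like this shows up in /var/log/vdsm/vdsm.log when the VM starts, a first thing to check is that the hook file is executable (chmod +x).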
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cumulus Switch

2016-12-15 Thread Matt Wells
I've seen some of the cool stuff coming with OVN, and a co-worker has even
done some great things with it. However, I was wondering if anyone has
experience with Cumulus as the external provider for networks.
It's just a "weekend project" I'm picking up, and I thought I'd ask on the
list. I've not found other posts on it yet, but will continue to look.
I've just made a fresh lab with the latest and greatest oVirt on CentOS 7.
Thanks to all, and a happy holiday season (if you're into the holiday thing).
:-)
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Christophe TREFOIS
Hi Yaniv,

Thanks!

Just to be 100% sure: on the hosts, do we upgrade to 7.3 first and then to
4.0? Or can I add the 4.0 repo and update everything together?

For the engine it is quite clear now and makes sense: the new CentOS requires
oVirt 4, so we should upgrade to oVirt 4 first.

Thanks for your help,
Christophe

Sent from my iPhone

On 15 Dec 2016, at 18:51, Yaniv Dary wrote:

Backup, upgrade oVirt to 4, upgrade engine OS to 7.3, start upgrading 
hypervisors to 4 on 7.3 as well.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

On Dec 15, 2016 7:33 PM, "Christophe TREFOIS" wrote:
Hi Yaniv,

Would you recommend first to upgrade to CentOS 7.3, and then to oVirt 4 or ?

We are currently running CentOS 7.2 (sometimes 7.1) with oVirt 3.6.

Thanks,
Christophe

--

Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb




This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original 
message and any copies.




On 15 Dec 2016, at 18:16, Yaniv Dary wrote:

It may work, but we don't support 3.6 for a while and I'm not sure what issues 
you may encounter.

It is safer to upgrade the engine first and the hosts later. 4.0 supports 3.6 
compatibility, just make sure all cluster are 3.6 with el7 prior to updating.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

On Dec 15, 2016 6:49 PM, "Richard Chan" wrote:
Planning and upgrade from 3.6.8 to 4.0.6:

4.0.6 mentions supporting CentOS 7.3 and qemu-kvm-ev 2.6

Is it possible in a 3.6.7 environment to upgrade all the hosts first to CentOS 
7.3, vdsm 4.18, and qemu-kvm-ev 2.6 before tackling the engine? In other words, 
is the 4.0.6 host stack compatible with a 3.6.7 engine?



--
Richard Chan


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vdsm 4.17

2016-12-15 Thread Pavel Gashev
Hello,

I’ve found that vdsm 4.17.32 doesn’t work well on CentOS 7.3, due to bugs like
https://bugzilla.redhat.com/1364339 or https://bugzilla.redhat.com/1368258.
However, it seems that vdsm 4.17.35 has all the related fixes.

Is it safe to use a non-released version of vdsm?
Is there a plan to release oVirt 3.6.8?

Thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Yaniv Dary
Backup, upgrade oVirt to 4, upgrade engine OS to 7.3, start upgrading
hypervisors to 4 on 7.3 as well.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

On Dec 15, 2016 7:33 PM, "Christophe TREFOIS" 
wrote:

> Hi Yaniv,
>
> Would you recommend first to upgrade to CentOS 7.3, and then to oVirt 4 or
> ?
>
> We are currently running CentOS 7.2 (sometimes 7.1) with oVirt 3.6.
>
> Thanks,
> Christophe
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
> 
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the
> original message and any copies.
> 
>
>
> On 15 Dec 2016, at 18:16, Yaniv Dary  wrote:
>
> It may work, but we don't support 3.6 for a while and I'm not sure what
> issues you may encounter.
>
> It is safer to upgrade the engine first and the hosts later. 4.0 supports
> 3.6 compatibility, just make sure all cluster are 3.6 with el7 prior to
> updating.
>
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com
> IRC : ydary
>
> On Dec 15, 2016 6:49 PM, "Richard Chan" 
> wrote:
>
>> Planning and upgrade from 3.6.8 to 4.0.6:
>>
>> 4.0.6 mentions supporting CentOS 7.3 and qemu-kvm-ev 2.6
>>
>> Is it possible in a 3.6.7 environment to upgrade all the hosts first to
>> CentOS 7.3, vdsm 4.18, and qemu-kvm-ev 2.6 before tackling the engine? In
>> other words, is the 4.0.6 host stack compatible with a 3.6.7 engine?
>>
>>
>>
>> --
>> Richard Chan
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Christophe TREFOIS
Hi Yaniv,

Would you recommend first to upgrade to CentOS 7.3, and then to oVirt 4 or ?

We are currently running CentOS 7.2 (sometimes 7.1) with oVirt 3.6.

Thanks,
Christophe
-- 

Dr Christophe Trefois, Dipl.-Ing.  
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine  
6, avenue du Swing 
L-4367 Belvaux  
T: +352 46 66 44 6124 
F: +352 46 66 44 6949  
http://www.uni.lu/lcsb 

This message is confidential and may contain privileged information. 
It is intended for the named recipient only. 
If you receive it in error please notify me and permanently delete the original 
message and any copies. 


  

> On 15 Dec 2016, at 18:16, Yaniv Dary  wrote:
> 
> It may work, but we don't support 3.6 for a while and I'm not sure what 
> issues you may encounter. 
> 
> It is safer to upgrade the engine first and the hosts later. 4.0 supports 3.6 
> compatibility, just make sure all cluster are 3.6 with el7 prior to updating. 
>  
> 
> Yaniv Dary
> Technical Product Manager
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
> 
> Tel : +972 (9) 7692306
> 8272306
> Email: yd...@redhat.com 
> IRC : ydary
> 
> On Dec 15, 2016 6:49 PM, "Richard Chan" wrote:
> Planning and upgrade from 3.6.8 to 4.0.6:
> 
> 4.0.6 mentions supporting CentOS 7.3 and qemu-kvm-ev 2.6
> 
> Is it possible in a 3.6.7 environment to upgrade all the hosts first to 
> CentOS 7.3, vdsm 4.18, and qemu-kvm-ev 2.6 before tackling the engine? In 
> other words, is the 4.0.6 host stack compatible with a 3.6.7 engine?
> 
> 
> 
> -- 
> Richard Chan
> 
> 
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Yaniv Dary
It may work, but we don't support 3.6 for a while and I'm not sure what
issues you may encounter.

It is safer to upgrade the engine first and the hosts later. 4.0 supports
3.6 compatibility, just make sure all cluster are 3.6 with el7 prior to
updating.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary

On Dec 15, 2016 6:49 PM, "Richard Chan" 
wrote:

> Planning and upgrade from 3.6.8 to 4.0.6:
>
> 4.0.6 mentions supporting CentOS 7.3 and qemu-kvm-ev 2.6
>
> Is it possible in a 3.6.7 environment to upgrade all the hosts first to
> CentOS 7.3, vdsm 4.18, and qemu-kvm-ev 2.6 before tackling the engine? In
> other words, is the 4.0.6 host stack compatible with a 3.6.7 engine?
>
>
>
> --
> Richard Chan
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 3.6.7 engine: compatible with host centos 7.3/vdsm 4.18/qemu 2.6?

2016-12-15 Thread Richard Chan
Planning and upgrade from 3.6.8 to 4.0.6:

4.0.6 mentions supporting CentOS 7.3 and qemu-kvm-ev 2.6

Is it possible in a 3.6.7 environment to upgrade all the hosts first to
CentOS 7.3, vdsm 4.18, and qemu-kvm-ev 2.6 before tackling the engine? In
other words, is the 4.0.6 host stack compatible with a 3.6.7 engine?



-- 
Richard Chan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Paolo Bonzini


On 15/12/2016 17:07, InterNetX - Juergen Gotteswinter wrote:
> On 15.12.2016 16:46, Sandro Bonazzola wrote:
>>
>>
>> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter" wrote:
>>
>> On 15.12.2016 15:51, Sandro Bonazzola wrote:
>> >
>> >
>> > On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter wrote:
>> >
>> > i can confirm that it will break ...
>> >
>> > Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly
>> closed
>> > the monitor: Unexpected error in object_property_find() at
>> > qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm:
>> can't apply
>> > global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
>> not found
>> >
>> >
>> > Just a heads up that qemu-kvm-ev 2.6 is now
>> > in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
>>
>>
>> [16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
>> qemu-kvm-ev-2.6.0-27.1.el7.x86_64
>> [16:16:52][root@vm1:/var/log]$
>>
>> this message is from 2.6
>>
>>
>> Adding Paolo and Michal.
> 
> Sorry, there's a little bit more in the startup log which might be helpful:
> 
> Unexpected error in object_property_find() at qom/object.c:1003:
> 2016-12-15T13:58:43.140073Z qemu-kvm: can't apply global
> Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found

This is now bug 1405123.

Paolo

> 
> the complete startup parameters in that case are
> 
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
> guest=jg123_vm1_loadtest,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/master-key.aes
> -machine rhel6.5.0,accel=kvm,usb=off -cpu Opteron_G4 -m 65536 -realtime
> mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -uuid
> 20047459-7e48-4160-ac77-0e26a4f99472 -smbios
> 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=4C4C4544-0039-3310-8043-B2C04F463032,uuid=20047459-7e48-4160-ac77-0e26a4f99472'
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2016-12-15T13:58:41,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0002-0002-0002-0002-02f7/d5b56ea4-782e-4002-bb9a-478b337b5c9f/images/f022eca0-1af3-43ad-acad-4731ceceed3e/94b35a95-c80b-434c-afe7-e8ab4391395c,format=qcow2,if=none,id=drive-scsi0-0-0-0,serial=f022eca0-1af3-43ad-acad-4731ceceed3e,cache=none,werror=stop,rerror=stop,aio=native
> -device
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:5e:43:04,bus=pci.0,addr=0x3,bootindex=2
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5900,addr=192.168.210.80,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
> -msg timestamp=on
> 
> 
> 
>>
>>
>>
>>
>>
>> >
>> >
>> >
>> >
>> > cheers,
>> >
>> > Juergen
>> >
>> > On 13.12.2016 10:30, Ralf Schenk wrote:
>> > > Hello
>> 

Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Paolo Bonzini


On 15/12/2016 16:46, Sandro Bonazzola wrote:
> 
> 
> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter" wrote:
> 
> On 15.12.2016 15:51, Sandro Bonazzola wrote:
> >
> >
> > On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter wrote:
> >
> > i can confirm that it will break ...
> >
> > Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly
> closed
> > the monitor: Unexpected error in object_property_find() at
> > qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm:
> can't apply
> > global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
> not found
> >
> >
> > Just a heads up that qemu-kvm-ev 2.6 is now
> > in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
> 
> [16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
> qemu-kvm-ev-2.6.0-27.1.el7.x86_64
> [16:16:52][root@vm1:/var/log]$
> 
> this message is from 2.6
> 
> 
> Adding Paolo and Michal.

The message is ugly, but that "x1apic" should have read "x2apic".

Paolo
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread InterNetX - Juergen Gotteswinter
On 15.12.2016 16:46, Sandro Bonazzola wrote:
> 
> 
> On 15/Dec/2016 16:17, "InterNetX - Juergen Gotteswinter" wrote:
> 
> On 15.12.2016 15:51, Sandro Bonazzola wrote:
> >
> >
> > On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter wrote:
> >
> > i can confirm that it will break ...
> >
> > Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly
> closed
> > the monitor: Unexpected error in object_property_find() at
> > qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm:
> can't apply
> > global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic'
> not found
> >
> >
> > Just an heads up that qemu-kvm-ev 2.6 is now
> > in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
> 
> >  >
> 
> [16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
> qemu-kvm-ev-2.6.0-27.1.el7.x86_64
> [16:16:52][root@vm1:/var/log]$
> 
> this message is from 2.6
> 
> 
> Adding Paolo and Michal.

sorry, there's a little bit more in the startup log which might be helpful

Unexpected error in object_property_find() at qom/object.c:1003:
2016-12-15T13:58:43.140073Z qemu-kvm: can't apply global
Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found


the complete startup parameters in that case are

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=jg123_vm1_loadtest,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/master-key.aes
-machine rhel6.5.0,accel=kvm,usb=off -cpu Opteron_G4 -m 65536 -realtime
mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1 -uuid
20047459-7e48-4160-ac77-0e26a4f99472 -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=4C4C4544-0039-3310-8043-B2C04F463032,uuid=20047459-7e48-4160-ac77-0e26a4f99472'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-jg123_vm1_loadtest/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-12-15T13:58:41,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/0002-0002-0002-0002-02f7/d5b56ea4-782e-4002-bb9a-478b337b5c9f/images/f022eca0-1af3-43ad-acad-4731ceceed3e/94b35a95-c80b-434c-afe7-e8ab4391395c,format=qcow2,if=none,id=drive-scsi0-0-0-0,serial=f022eca0-1af3-43ad-acad-4731ceceed3e,cache=none,werror=stop,rerror=stop,aio=native
-device
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1
-netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:5e:43:04,bus=pci.0,addr=0x3,bootindex=2
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/20047459-7e48-4160-ac77-0e26a4f99472.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=192.168.210.80,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -device
qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
-msg timestamp=on



> 
> 
> 
> 
> 
> >
> >
> >
> >
> > cheers,
> >
> > Juergen
> >
> > Am 13.12.2016 um 10:30 schrieb Ralf Schenk:
> > > Hello
> > >
> > > by browsing the repository on
> > > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
> 
> > 

Re: [ovirt-users] Ovirt DB

2016-12-15 Thread Yaniv Dary
Be sure to use the engine-backup tool. After restoring, add new hosts to the
clusters and remove the old ones.
Storage should be replicated and available on the new setup for any new
hosts that are added to the cluster.
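A hedged sketch of a daily backup with the engine-backup tool mentioned above; the directory, file names, and the --scope/--provision-db flags are illustrative assumptions, so verify them against `engine-backup --help` on your engine version.

```shell
# Hedged sketch of a daily engine backup; paths and flags are
# illustrative assumptions, not taken from this thread.
BACKUP_DIR=/var/lib/ovirt-engine-backup
STAMP=$(date +%Y%m%d)
engine_backup_cmd="engine-backup --mode=backup --scope=all \
--file=${BACKUP_DIR}/engine-${STAMP}.tar.bz2 \
--log=${BACKUP_DIR}/engine-${STAMP}.log"
echo "$engine_backup_cmd"
# Restore on a freshly installed engine host would then look like:
#   engine-backup --mode=restore --file=<backup> --log=<log> \
#     --provision-db --restore-permissions
```

Running the backup on a schedule (e.g. from cron) keeps the file current for the disaster-recovery case discussed below.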

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Thu, Dec 15, 2016 at 10:43 AM, Koen Vanoppen 
wrote:

> Dear All,
>
> I'm working on a disaster recovery procedure for ovirt. My question is the
> following:
> In worst case we completely lost our ovirt environment.
> So we setup a new ovirt management host and restore the db. (I do a daily
> backup of the ovirtdb (we are at 4.0.4.4-1.el7.centos). ).
>
> What will I restore from this? All my hypervisors (which will be down of
> course) and settings from the hypervisors? VM's (settings)?
>
> What other things do I need to add to the DR to be completely safe?
> There was this project about ovirt DR, but it seems that the repo isn't
> working...
> https://github.com/xandradx/ovirt-engine-disaster-recovery
>
> Kind regards,
>
> Koen
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Sandro Bonazzola
Il 15/Dic/2016 16:17, "InterNetX - Juergen Gotteswinter" 
ha scritto:

Am 15.12.2016 um 15:51 schrieb Sandro Bonazzola:
>
>
> On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter
> > wrote:
>
> i can confirm that it will break ...
>
> Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly closed
> the monitor: Unexpected error in object_property_find() at
> qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm: can't
apply
> global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found
>
>
> Just an heads up that qemu-kvm-ev 2.6 is now
> in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
> 

[16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
qemu-kvm-ev-2.6.0-27.1.el7.x86_64
[16:16:52][root@vm1:/var/log]$

this message is from 2.6


Adding Paolo and Michal.





>
>
>
>
> cheers,
>
> Juergen
>
> Am 13.12.2016 um 10:30 schrieb Ralf Schenk:
> > Hello
> >
> > by browsing the repository on
> > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
>  I can't
see
> > any qemu-kvm-ev-2.6.* RPM.
> >
> > I think this will break if I update the Ovirt-Hosts...
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep libvirt
> > libvirt.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-client.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-interface.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-lxc.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nodedev.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-secret.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-storage.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-kvm.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-lock-sanlock.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-python.x86_64   2.0.0-2.el7
> > base
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep qemu*
> > ipxe-roms-qemu.noarch   20160127-5.git6366fa7a.el7
> > base
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> >
> >
> > Am 13.12.2016 um 08:43 schrieb Sandro Bonazzola:
> >>
> >>
> >> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams  
> >> >> wrote:
> >>
> >> Once upon a time, Sandro Bonazzola 
> >> >>
said:
> >> > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available
> >> right now in
> >> > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and
> >> ovirt-4.0-pre
> >> > (contains 4.0.6 RC4 rpms going to be announced in a few
minutes.)
> >>
> >> Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for
prior
> >> versions (such as 3.5 or 3.6)?
> >>
> >>
> >> You can enable CentOS Virt SIG repo by running "yum install
> >> centos-release-qemu-ev" on your CentOS 7 systems.
> >> and you'll have updated qemu-kvm-ev.
> >>
> >>
> >>
> >> --
> >> Chris Adams 
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org 
> >
> >> http://lists.phx.ovirt.org/mailman/listinfo/users
> 
> >>  >
> >>
> >>
> >>
> >>
> >> --
> >> Sandro Bonazzola
> >> Better technology. Faster innovation. Powered by community
collaboration.
> >> See how it works at redhat.com 
> 
> >>
> >>
> >> ___
> >> Users 

Re: [ovirt-users] Hosted Engine won't deploy

2016-12-15 Thread Martin Sivak
Hi,

Thanks for the info. The hosted engine domain should not be the master one
indeed. I will add some people that know the storage aspect better to the
thread to figure out how this can be solved.

Simone, Nir: Is there a way to force another domain to take over the master
role? I think the current situation is not how it should work, the hosted
engine storage should have never gotten the master storage duties.

Martin

On Thu, Dec 15, 2016 at 4:18 PM, Gervais de Montbrun  wrote:

> Hi Martin,
>
> I do see the hosted_engine storage domain. Should it be listed as
> (Master)? If not, how can I force my "proper" Data domain to take over as
> master?
>
> I also see my hosted engine showing up:
>
> At some point I renamed it to match the name I am using for it, but now
> when I try to change settings on it, I get an error:
>
>
> It's great that there will be GUI ability to setup a hosted engine. That's
> not great for me if they are not working -- which seems to be the case
> right now :-( and doubly so if the ability to make it work goes away on the
> command line.
>
> I truly appreciate the help and hope there are more good suggestions
> coming my way.
>
> Cheers,
> Gervais
>
>
>
> On Dec 15, 2016, at 4:30 AM, Martin Sivak  wrote:
>
> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
> Center say that they are running in 4.0 compatibility mode, so I don't
> understand this error.
>
>
> Do you see the hosted engine storage domain and the hosted engine VM
> in the webadmin? Both should be imported automatically on 3.6+
> compatibility level when a master storage domain is added to the
> system.
>
> Alarmingly, I was
> warned that this is deprecated and will not be possible in oVirt 4.1.
>
>
> We have a nice UI that allows to control the hosted engine deployment
> to additional hosts directly from the webadmin. You will be able to
> add a hosted engine capable host by just marking it as such in the Add
> host dialog.
>
> --
> Martin Sivak
> SLA / oVirt
>
> On Wed, Dec 14, 2016 at 11:05 PM, Gervais de Montbrun
>  wrote:
>
> Hi all,
>
> I had to reinstall one of my hosts today and I noticed an issue. The error
> message was:
>
> Ovirt2:
>
> Cannot edit Host. You are using an unmanaged hosted engine VM. Please
> upgrade the cluster level to 3.6 and wait for the hosted engine storage
> domain to be properly imported.
>
> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
> Center say that they are running in 4.0 compatibility mode, so I don't
> understand this error. I did get the host setup by running `hosted-engine
> --deploy` and walking through the command line options. Alarmingly, I was
> warned that this is deprecated and will not be possible in oVirt 4.1.
>
> Any suggestions as to what I should do to sort out my issue?
>
> Cheers,
> Gervais
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/users
>
>
>


Re: [ovirt-users] Hosted Engine won't deploy

2016-12-15 Thread Gervais de Montbrun
Hi Martin,

I do see the hosted_engine storage domain. Should it be listed as (Master)? If 
not, how can I force my "proper" Data domain to take over as master?


I also see my hosted engine showing up:


At some point I renamed it to match the name I am using for it, but now when I 
try to change settings on it, I get an error:



It's great that there will be GUI ability to setup a hosted engine. That's not 
great for me if they are not working -- which seems to be the case right now 
:-( and doubly so if the ability to make it work goes away on the command line.

I truly appreciate the help and hope there are more good suggestions coming my 
way.

Cheers,
Gervais



> On Dec 15, 2016, at 4:30 AM, Martin Sivak  wrote:
> 
>> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
>> Center say that they are running in 4.0 compatibility mode, so I don't
>> understand this error.
> 
> Do you see the hosted engine storage domain and the hosted engine VM
> in the webadmin? Both should be imported automatically on 3.6+
> compatibility level when a master storage domain is added to the
> system.
> 
>> Alarmingly, I was
>> warned that this is deprecated and will not be possible in oVirt 4.1.
> 
> We have a nice UI that allows to control the hosted engine deployment
> to additional hosts directly from the webadmin. You will be able to
> add a hosted engine capable host by just marking it as such in the Add
> host dialog.
> 
> --
> Martin Sivak
> SLA / oVirt
> 
> On Wed, Dec 14, 2016 at 11:05 PM, Gervais de Montbrun
>  wrote:
>> Hi all,
>> 
>> I had to reinstall one of my hosts today and I noticed an issue. The error
>> message was:
>> 
>> Ovirt2:
>> 
>> Cannot edit Host. You are using an unmanaged hosted engine VM. Please
>> upgrade the cluster level to 3.6 and wait for the hosted engine storage
>> domain to be properly imported.
>> 
>> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
>> Center say that they are running in 4.0 compatibility mode, so I don't
>> understand this error. I did get the host setup by running `hosted-engine
>> --deploy` and walking through the command line options. Alarmingly, I was
>> warned that this is deprecated and will not be possible in oVirt 4.1.
>> 
>> Any suggestions as to what I should do to sort out my issue?
>> 
>> Cheers,
>> Gervais
>> 
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.phx.ovirt.org/mailman/listinfo/users
>> 



Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread InterNetX - Juergen Gotteswinter
Am 15.12.2016 um 15:51 schrieb Sandro Bonazzola:
> 
> 
> On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter
> > wrote:
> 
> i can confirm that it will break ...
> 
> Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly closed
> the monitor: Unexpected error in object_property_find() at
> qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm: can't apply
> global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found
> 
> 
> Just an heads up that qemu-kvm-ev 2.6 is now
> in http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
>  

[16:16:47][root@vm1:/var/log]$rpm -aq |grep qemu-kvm-ev
qemu-kvm-ev-2.6.0-27.1.el7.x86_64
[16:16:52][root@vm1:/var/log]$

this message is from 2.6

> 
> 
>  
> 
> cheers,
> 
> Juergen
> 
> Am 13.12.2016 um 10:30 schrieb Ralf Schenk:
> > Hello
> >
> > by browsing the repository on
> > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
>  I can't see
> > any qemu-kvm-ev-2.6.* RPM.
> >
> > I think this will break if I update the Ovirt-Hosts...
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep libvirt
> > libvirt.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-client.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-interface.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-lxc.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nodedev.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-secret.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-storage.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-kvm.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-lock-sanlock.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-python.x86_64   2.0.0-2.el7
> > base
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep qemu*
> > ipxe-roms-qemu.noarch   20160127-5.git6366fa7a.el7
> > base
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> >
> >
> > Am 13.12.2016 um 08:43 schrieb Sandro Bonazzola:
> >>
> >>
> >> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams  
> >> >> wrote:
> >>
> >> Once upon a time, Sandro Bonazzola  
> >> >> said:
> >> > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available
> >> right now in
> >> > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and
> >> ovirt-4.0-pre
> >> > (contains 4.0.6 RC4 rpms going to be announced in a few minutes.)
> >>
> >> Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for prior
> >> versions (such as 3.5 or 3.6)?
> >>
> >>
> >> You can enable CentOS Virt SIG repo by running "yum install
> >> centos-release-qemu-ev" on your CentOS 7 systems.
> >> and you'll have updated qemu-kvm-ev.
> >>
> >>
> >>
> >> --
> >> Chris Adams 
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org 
> >
> >> http://lists.phx.ovirt.org/mailman/listinfo/users
> 
> >>  >
> >>
> >>
> >>
> >>
> >> --
> >> Sandro Bonazzola
> >> Better technology. Faster innovation. Powered by community 
> collaboration.
> >> See how it works at redhat.com 
> 
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org 
> >> 

Re: [ovirt-users] Hosted Engine won't deploy

2016-12-15 Thread Gervais de Montbrun
Hi Kasturi,

They were imported automatically. I see the hosted_engine domain and the 
hosted_engine vm in my list of vm's. 

Cheers,
Gervais



> On Dec 15, 2016, at 3:06 AM, knarra  wrote:
> 
> On 12/15/2016 03:35 AM, Gervais de Montbrun wrote:
>> Hi all,
>> 
>> I had to reinstall one of my hosts today and I noticed an issue. The error 
>> message was:
>> 
>> Ovirt2:
>> Cannot edit Host. You are using an unmanaged hosted engine VM. Please 
>> upgrade the cluster level to 3.6 and wait for the hosted engine storage 
>> domain to be properly imported.
>> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data 
>> Center say that they are running in 4.0 compatibility mode, so I don't 
>> understand this error. I did get the host setup by running `hosted-engine 
>> --deploy` and walking through the command line options. Alarmingly, I was 
>> warned that this is deprecated and will not be possible in oVirt 4.1. 
>> 
>> Any suggestions as to what I should do to sort out my issue?
>> 
>> Cheers,
>> Gervais
> Hi Gervais,
> 
> Have you imported hosted_storage into your environment. I have hit this 
> issue when i did not have hosted_storage domain and hosted_engine vm imported 
> into my setup. 
> 
> Thanks
> kasturi
>> 
>> 
>> 
>> 
>> 
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.phx.ovirt.org/mailman/listinfo/users 
>> 
> 



Re: [ovirt-users] Hosted-Engine Fails to Start

2016-12-15 Thread Simone Tiraboschi
On Wed, Dec 14, 2016 at 6:30 PM, Nate T. Llaneza 
wrote:

> Hey Simone,
>
>
>
> Here is the output you requested.
>
>
>
> [root@cits002 ~]# ls -l /rhev/data-center/mnt/backup.citsnc.com
> \:_export_Hosted-Engine/45aaa81e-4047-4209-ae4c-8504a243d16e/dom_md/ids
>
> -rwxr-xr-x 1 vdsm kvm 1048576 Dec 13 06:29 /rhev/data-center/mnt/backup.
> citsnc.com:_export_Hosted-Engine/45aaa81e-4047-4209-
> ae4c-8504a243d16e/dom_md/ids
>


Thanks, I think you just have to run:

sanlock client renewal -s 45aaa81e-4047-4209-ae4c-8504a243d16e:2:/rhev/data-center/mnt/backup.citsnc.com:_export_Hosted-Engine/45aaa81e-4047-4209-ae4c-8504a243d16e/dom_md/ids:0

to manually force a renewal of the sanlock lease.
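The lockspace string handed to `-s` has the shape `<sd_uuid>:<host_id>:<path-to-ids>:<offset>`. A small sketch that assembles it from its parts (all values copied from this thread; the host id 2 comes from the command above) makes each piece easier to double-check before running the real command:

```shell
# Assemble the sanlock lockspace string from its parts; values are
# taken verbatim from this thread.
SD_UUID=45aaa81e-4047-4209-ae4c-8504a243d16e
HOST_ID=2
IDS_PATH="/rhev/data-center/mnt/backup.citsnc.com:_export_Hosted-Engine/${SD_UUID}/dom_md/ids"
LOCKSPACE="${SD_UUID}:${HOST_ID}:${IDS_PATH}:0"
echo "sanlock client renewal -s ${LOCKSPACE}"
```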


>
>
>
> *From: *Simone Tiraboschi 
> *Sent: *Wednesday, December 14, 2016 11:57 AM
> *To: *Nate T. Llaneza 
> *Cc: *users 
> *Subject: *Re: Re[2]: [ovirt-users] Hosted-Engine Fails to Start
>
>
>
>
> On Wed, Dec 14, 2016 at 5:09 PM, Nate T. Llaneza 
> wrote:
>
>> Hey Simone,
>>
>> Thanks for looking at it! I am attaching the requested sanlock log file.
>> I see alot of errors in the past two days.
>>
>
> Could you please paste the output of:
> ls -l /rhev/data-center/mnt/backup.citsnc.com:_export_
> Hosted-Engine/45aaa81e-4047-4209-ae4c-8504a243d16e/dom_md/ids
>
>
>>
>> Regards,
>>
>> Nate
>>
>>
>> -- Original Message --
>> From: "Simone Tiraboschi" 
>> To: "Nate T. Llaneza" 
>> Cc: "users" 
>> Sent: 12/14/2016 10:54:20 AM
>> Subject: Re: [ovirt-users] Hosted-Engine Fails to Start
>>
>> hi Nate,
>> your issue is here:
>> Thread-1300::ERROR::2016-12-14 
>> 08:35:43,818::vm::773::virt.vm::(_startUnderlyingVm)
>> vmId=`ef7b601b-1ae6-4adf-bde6-1455bdb03f52`::The vm start process failed
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/virt/vm.py", line 714, in _startUnderlyingVm
>> self._run()
>>   File "/usr/share/vdsm/virt/vm.py", line 2026, in _run
>> self._connection.createXML(domxml, flags),
>>   File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
>> line 123, in wrapper
>> ret = f(*args, **kwargs)
>>   File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 917, in
>> wrapper
>> return func(inst, *args, **kwargs)
>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3782, in
>> createXML
>> if ret is None:raise libvirtError('virDomainCreateXML() failed',
>> conn=self)
>> libvirtError: Failed to acquire lock: No space left on device
>>
>> Could you please attach the output of
>>  sanlock client status
>>
>>
>>
>> On Wed, Dec 14, 2016 at 2:55 PM, Nate T. Llaneza 
>> wrote:
>>
>>> Hey Guys,
>>>
>>> I just performed the update to ovirt-4.0.6-pre on my hosted engine
>>> hypervisor (making sure to pull from the baseurl and not the mirrorlist).
>>> This cleaned up the Glib error message I sent earlier. Awesome! The
>>> downside is that the hosted-engine is still not starting. I'm not sure
>>> where to look, so I am attaching the vdsm log and Hosted-Engine log. Let me
>>> know if there is anything else you need. Thanks for y'alls help.
>>>
>>> Regards,
>>>
>>> Nathan
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.phx.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Sandro Bonazzola
On Thu, Dec 15, 2016 at 3:02 PM, InterNetX - Juergen Gotteswinter <
j...@internetx.com> wrote:

> i can confirm that it will break ...
>
> Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly closed
> the monitor: Unexpected error in object_property_find() at
> qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm: can't apply
> global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found
>
>
Just a heads up that qemu-kvm-ev 2.6 is now in
http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/




> cheers,
>
> Juergen
>
> Am 13.12.2016 um 10:30 schrieb Ralf Schenk:
> > Hello
> >
> > by browsing the repository on
> > http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/ I can't see
> > any qemu-kvm-ev-2.6.* RPM.
> >
> > I think this will break if I update the Ovirt-Hosts...
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep libvirt
> > libvirt.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-client.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-config-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-interface.x86_64  2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-lxc.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-network.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nodedev.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-nwfilter.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-secret.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-driver-storage.x86_642.0.0-10.el7_3.2
> > updates
> > libvirt-daemon-kvm.x86_64   2.0.0-10.el7_3.2
> > updates
> > libvirt-lock-sanlock.x86_64 2.0.0-10.el7_3.2
> > updates
> > libvirt-python.x86_64   2.0.0-2.el7
> > base
> >
> > [root@microcloud21 yum.repos.d]# yum check-update | grep qemu*
> > ipxe-roms-qemu.noarch   20160127-5.git6366fa7a.el7
> > base
> > libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> > updates
> >
> >
> > Am 13.12.2016 um 08:43 schrieb Sandro Bonazzola:
> >>
> >>
> >> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams  >> > wrote:
> >>
> >> Once upon a time, Sandro Bonazzola  >> > said:
> >> > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available
> >> right now in
> >> > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and
> >> ovirt-4.0-pre
> >> > (contains 4.0.6 RC4 rpms going to be announced in a few minutes.)
> >>
> >> Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for prior
> >> versions (such as 3.5 or 3.6)?
> >>
> >>
> >> You can enable CentOS Virt SIG repo by running "yum install
> >> centos-release-qemu-ev" on your CentOS 7 systems.
> >> and you'll have updated qemu-kvm-ev.
> >>
> >>
> >>
> >> --
> >> Chris Adams >
> >> ___
> >> Users mailing list
> >> Users@ovirt.org 
> >> http://lists.phx.ovirt.org/mailman/listinfo/users
> >> 
> >>
> >>
> >>
> >>
> >> --
> >> Sandro Bonazzola
> >> Better technology. Faster innovation. Powered by community
> collaboration.
> >> See how it works at redhat.com 
> >>
> >>
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.phx.ovirt.org/mailman/listinfo/users
> >
> > --
> >
> >
> > *Ralf Schenk*
> > fon +49 (0) 24 05 / 40 83 70
> > fax +49 (0) 24 05 / 40 83 759
> > mail *r...@databay.de* 
> >
> > *Databay AG*
> > Jens-Otto-Krag-Straße 11
> > D-52146 Würselen
> > *www.databay.de* 
> >
> > Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> > Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> > Philipp Hermanns
> > Aufsichtsratsvorsitzender: Wilhelm Dohmen
> >
> > 
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.phx.ovirt.org/mailman/listinfo/users
> >
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] Host to VM Affinity rules

2016-12-15 Thread Martin Sivak
Hi Luca,

we have couple of features (present and coming) that might be related
to what you need:

1) Preferred hosts and host pinning

This is controlled by the Start Running On list. A VM will start on
one of those hosts if possible, but it will still start somewhere else
if the selected hosts are not capable of running the VM (down, not
enough memory, ...)

You can combine this with disabled migrations (the same dialog) to get
what we call VM pinning. VMs configured like that will only start on
one of the listed hosts and will never migrate.

2) Affinity labels

There is a new feature we introduced in oVirt 4.0 that allows you to
specify a "sub-cluster". It is a stronger version of the Preferred
hosts setting, because it makes sure a labelled VM is only started or
migrated to hosts with the same label.

You can read more about it in my blog post:
https://www.ovirt.org/blog/2016/07/affinity-labels/

3) VM to Host affinity

We are working on an enhancement to the current VM to VM affinity
groups for oVirt 4.1. This update will add host support to the
affinity groups and allow you the same flexibility you currently have
with VM to VM affinity groups (soft/strong positive/negative
relationship).
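The difference between the soft preference of "Start Running On" and the hard guarantee of affinity labels can be sketched as a toy host filter. This is illustrative only: the real oVirt scheduler is far more involved, and candidate_hosts is a hypothetical helper, not an oVirt API.

```shell
# Toy model of soft vs. hard VM-to-host affinity (illustrative only).
candidate_hosts() {   # args: "<all hosts>" "<preferred hosts>" <hard|soft>
  all=$1; wanted=$2; mode=$3
  preferred=""
  for h in $all; do
    case " $wanted " in *" $h "*) preferred="$preferred $h" ;; esac
  done
  preferred=${preferred# }
  if [ "$mode" = hard ] || [ -n "$preferred" ]; then
    echo "$preferred"        # hard: only matching hosts, even if none
  else
    echo "$all"              # soft: fall back to any capable host
  fi
}
candidate_hosts "host1 host2 host3" "host2" hard   # -> host2
candidate_hosts "host1 host2 host3" "" soft        # -> host1 host2 host3
```

In the hard case an empty result means the VM simply does not start, which is the behaviour affinity labels enforce.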


Best regards

--
Martin Sivak
SLA / oVirt

On Thu, Dec 15, 2016 at 2:27 PM, Luca 'remix_tj' Lorenzetto
 wrote:
> Hi,
>
> I'm looking for more information about affinity rules in oVirt 4. The
> only rules I've been able to set are between VMs. I have not found a
> way to specify an affinity rule between hosts and guests.
> The only option available is "Start Running On" on the Host tab while
> editing the vm. Does this option refers to the possible host where the
> vm starts or is more restrictive and defines where the vm can be
> executed (and is somewhat like an affinity rule)?
>
> Thank you
>
> Luca
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were
> used"
> Gottfried Wilhelm von Leibnitz, philosopher and mathematician (1646-1716)
>
> "The Internet is the world's largest library.
> The problem is that all the books are scattered on the floor"
> John Allen Paulos, mathematician (1945-living)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread InterNetX - Juergen Gotteswinter
I can confirm that it will break ...

Dec 15 14:58:43 vm1 journal: internal error: qemu unexpectedly closed
the monitor: Unexpected error in object_property_find() at
qom/object.c:1003:#0122016-12-15T13:58:43.140073Z qemu-kvm: can't apply
global Opteron_G4-x86_64-cpu.x1apic=off: Property '.x1apic' not found

cheers,

Juergen

Am 13.12.2016 um 10:30 schrieb Ralf Schenk:
> Hello
> 
> by browsing the repository on
> http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/ I can't see
> any qemu-kvm-ev-2.6.* RPM.
> 
> I think this will break if I update the Ovirt-Hosts...
> 
> [root@microcloud21 yum.repos.d]# yum check-update | grep libvirt
> libvirt.x86_64  2.0.0-10.el7_3.2
> updates
> libvirt-client.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-daemon.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-daemon-config-network.x86_642.0.0-10.el7_3.2
> updates
> libvirt-daemon-config-nwfilter.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-interface.x86_64  2.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-lxc.x86_642.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-network.x86_642.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-nodedev.x86_642.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-nwfilter.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-secret.x86_64 2.0.0-10.el7_3.2
> updates
> libvirt-daemon-driver-storage.x86_642.0.0-10.el7_3.2
> updates
> libvirt-daemon-kvm.x86_64   2.0.0-10.el7_3.2
> updates
> libvirt-lock-sanlock.x86_64 2.0.0-10.el7_3.2
> updates
> libvirt-python.x86_64   2.0.0-2.el7 
> base
> 
> [root@microcloud21 yum.repos.d]# yum check-update | grep qemu*
> ipxe-roms-qemu.noarch   20160127-5.git6366fa7a.el7  
> base
> libvirt-daemon-driver-qemu.x86_64   2.0.0-10.el7_3.2
> updates
> 
> 
> Am 13.12.2016 um 08:43 schrieb Sandro Bonazzola:
>>
>>
>> On Mon, Dec 12, 2016 at 6:38 PM, Chris Adams wrote:
>>
>> Once upon a time, Sandro Bonazzola said:
>> > In terms of ovirt repositories, qemu-kvm-ev 2.6 is available
>> right now in
>> > ovirt-master-snapshot-static, ovirt-4.0-snapshot-static, and
>> ovirt-4.0-pre
>> > (contains 4.0.6 RC4 rpms going to be announced in a few minutes.)
>>
>> Will qemu-kvm-ev 2.6 be added to any of the oVirt repos for prior
>> versions (such as 3.5 or 3.6)?
>>
>>
>> You can enable CentOS Virt SIG repo by running "yum install
>> centos-release-qemu-ev" on your CentOS 7 systems.
>> and you'll have updated qemu-kvm-ev.
>>
>>  
>>
>> --
>> Chris Adams
>> ___
>> Users mailing list
>> Users@ovirt.org 
>> http://lists.phx.ovirt.org/mailman/listinfo/users
>> 
>>
>>
>>
>>
>> -- 
>> Sandro Bonazzola
>> Better technology. Faster innovation. Powered by community collaboration.
>> See how it works at redhat.com 
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.phx.ovirt.org/mailman/listinfo/users
> 
> -- 
> 
> 
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail *r...@databay.de* 
>   
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> *www.databay.de* 
> 
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Set grace period for host fencing

2016-12-15 Thread Luca 'remix_tj' Lorenzetto
Hello,

I was doing some tests on how oVirt reacts to host failures. I've seen that,
in case of a network failure, a host stays in a non-responding state for 60
seconds before being fenced. The event log reports:

Host XXX is not responding. It will stay in Connecting state for a
grace period of 61 seconds and after that an attempt to fence the host
will be issued.

Can this value be set to a lower/different value?

This means at least 1 minute passes before the HA-protected VMs of that
host are restarted. If that host additionally hosts the engine, a VM
(other than the engine) will not start for at least 6 minutes, which is a
very long time.
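If the grace period is driven by an engine configuration key, it can presumably be inspected and changed with `engine-config`; the key name below is an assumption to verify against `engine-config -l` on your engine version:

```shell
# List fencing/timeout-related keys, read the current value, then lower it.
engine-config -l | grep -i -e timeout -e fence
engine-config -g TimeoutToResetVdsInSeconds
engine-config -s TimeoutToResetVdsInSeconds=30
systemctl restart ovirt-engine   # engine-config changes need an engine restart
```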

Luca

-- 
"It is absurd to employ men of excellent intelligence to do calculations
that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibniz, Philosopher and Mathematician (1646-1716)

"The Internet is the largest library in the world.
The problem is that all the books are scattered on the floor"
John Allen Paulos, Mathematician (1945-present)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] How can I import VMWare OVA file to oVirt 4.0?

2016-12-15 Thread Tom Gamull
I'm not going to lie: I never had much luck. I end up using qemu-img convert on
the command line to convert a VMDK to a raw file. The OVA should just be a
compressed (tar) archive; plenty of blogs cover this.

I know that's not the answer you're looking for, but if you run out of options
you can do this. Look for articles on converting VMDK, OVA, or qcow2 images.
You will need to create the VM in advance, then check the image name on the
storage subtab, then do a find for it:
find / -name [long name here]

I've had virtual appliances built for KVM just not work (like Citrix
NetScaler). Using the ESX appliance and doing the steps above, it works on
oVirt/RHEV. I am not speaking for oVirt or Red Hat here, though, just as
someone who muddled through.
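A minimal sketch of the flow described above, assuming the OVA is a plain tar archive; the file names are hypothetical:

```shell
# Unpack the OVA and convert the contained VMDK to raw with qemu-img.
tar -xvf appliance.ova
qemu-img info appliance-disk1.vmdk        # sanity-check the source format
qemu-img convert -f vmdk -O raw appliance-disk1.vmdk appliance-disk1.raw
# Create the VM in oVirt first, note the disk image name on the storage
# subtab, then locate the backing file to overwrite:
find / -name '<image-name-from-storage-subtab>*' 2>/dev/null
```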


> On Dec 15, 2016, at 4:02 AM, Shahar Havivi  wrote:
> 
> Hi,
> There is a patch and it's working with your OVA;
> you can apply it to your environment if you want it to work now:
> https://gerrit.ovirt.org/#/c/68510/
> 
>  Shahar.
> 
>> On Wed, Dec 14, 2016 at 7:30 PM,  wrote:
>> Derek, thank you for participating.
>> 
>> In the end, I completed the task (OVA imported).
>> The root of the problem is that I tried to use the import with thin 
>> provision.
>> But there is a known issue 
>> https://bugzilla.redhat.com/show_bug.cgi?id=1382404
>> Workaround: Import VM from OVA as preallocated
>> It's not a very good solution, but it is better than nothing
>> 
>> 14.12.2016, 17:50, "Derek Atkins" :
>> > Hi Aleksey,
>> >
>> > One more question for debugging purposes: How long does the import go
>> > before it dies? Do you have enough time to run:
>> >
>> >   ps aux | grep virt-v2v
>> >
>> > On the import host while it's running? This might help us determine
>> > where it's trying to store the data. It should be storing it in the
>> > target storage, but it's possible that it's using temp space and then
>> > running out.
>> >
>> > -derek
>> >
>> > aleksey.maksi...@it-kb.ru writes:
>> >
>> >>  Hi Shahar.
>> >>
>> >>  Look at the attached screenshot
>> >>
>> >>  14.12.2016, 12:15, "Shahar Havivi" :
>> >>
>> >>  Hi,
>> >>  I was able to import your VMs with storage domain that had 350GB 
>> >> free.
>> >>  Your ova have a disk with 1GB actual size and 256 virtual size - 
>> >> when I
>> >>  try to import to a storage with 50G I got the error that you had.
>> >>
>> >>  if you will look at /var/log/vdsm/import/... you will see the logs 
>> >> of each
>> >>  import,
>> >>  In the one that fail I found this line:
>> >>  qemu-img: error while writing sector 423174528: No space left on 
>> >> device
>> >>
>> >>  virt-v2v try to convert via qemu-img the img from vmdk to qcow and
>> >>  encountered free space issue in the storage domain.
>> >>
>> >>  Please consider asking the mailing list  libgues...@redhat.com about 
>> >> this
>> >>  issue or try to increase your storage domain if you want a quick fix.
>> >>
>> >>  The reason that I think you were able to import via the virt-v2v 
>> >> command
>> >>  is the usage of 'virt-v2v -o null' which is not writing to the disk.
>> >>
>> >>  Shahar.
>> >>
>> >>  On Tue, Dec 13, 2016 at 12:54 PM,  wrote:
>> >>
>> >>  Engine - oVirt Engine Version: 4.0.5.5-1.el7.centos (CentOS 7.2)
>> >>
>> >>  All Hosts:
>> >>  OS Version:RHEL - 7 - 2.1511.el7.centos.2.10
>> >>  OS Description:CentOS Linux 7 (Core)
>> >>  Kernel Version:3.10.0 - 327.36.3.el7.x86_64
>> >>  KVM Version:2.3.0 - 31.el7.16.1
>> >>  LIBVIRT Version:libvirt-2.0.0-10.el7_3.2
>> >>  VDSM Version:vdsm-4.18.15.3-1.el7.centos
>> >>  SPICE Version:0.12.4 - 15.el7_2.2
>> >>  GlusterFS Version:[N/A]
>> >>  CEPH Version:librbd1-0.80.7-3.el7
>> >>
>> >>  [root@KOM-AD01-VM31 ~]# virt-v2v -V
>> >>  virt-v2v 1.28.1
>> >>
>> >>  13.12.2016, 13:20, "Shahar Havivi" :
>> >>
>> >>  version of Engine and Host and virt-v2v (which is running on 
>> >> your
>> >>  host)
>> >>  and are you running on Fedora, Centos ext?
>> >>
>> >>  On Tue, Dec 13, 2016 at 12:08 PM, 
>> >>  wrote:
>> >>
>> >>  1. No. This is an unacceptable option for me
>> >>  2. No. This is my first experience
>> >>  3. Versions where? On Engine or on Host ?
>> >>  4. Yes.
>> >>
>> >>  13.12.2016, 13:03, "Shahar Havivi" :
>> >>
>> >>  Thanks you,
>> >>  Several questions:
>> >>  1. did you try to import the disk target as 
>> >> preallocate?
>> >>  2. did you try to import other ova files?
>> >>  3. can you please send us version of virt-v2v, vdsm 
>> >> and
>> >> 

[ovirt-users] Host to VM Affinity rules

2016-12-15 Thread Luca 'remix_tj' Lorenzetto
Hi,

I'm looking for more information about affinity rules in oVirt 4. The only
rules I've been able to set are between VMs. I have not found a way to
specify an affinity rule between a host and its guests.
The only option available is "Start Running On" on the Host tab while
editing the VM. Does this option refer only to the possible host where the
VM starts, or is it more restrictive and define where the VM can be
executed (somewhat like an affinity rule)?

Thank you

Luca

-- 

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Renout Gerrits
Thanks, will give that a go

Karma++ :)



On Thu, Dec 15, 2016 at 12:13 PM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Dec 15, 2016 at 12:04 PM, Renout Gerrits  wrote:
>
>> Hi Simone,
>>
>> Do you mean the following?
>>
>> - create a new cluster with version 3.6.
>> - Migrate HE to new cluster
>> - Shutdown all VM's in old cluster
>> - change compatibility version of old cluster to 3.6
>> - migrate HE back to old old cluster
>>
>> In this case the old cluster still thinks that the HE is running in it
>> due to the ChangeVmCluster action that fails. From what I see this is fixed
>> in 4.02: https://bugzilla.redhat.com/show_bug.cgi?id=1351533
>> But I can't to upgrade 4 yet. Do you know if this fix has been back
>> ported to 3.6?
>>
>
> Yes, it has been backported to 3.6.9:
> https://gerrit.ovirt.org/#/c/63377/2
>
> Another option is just to add a new hosted-engine host to the new 3.6
> cluster and restart the engine VM there from the hosted-engine CLI.
>
> For me it's hard to just try as I will need a maintenance window to
>> shutdown all vm's.
>>
>> Or do you mean something completely different?
>>
>> Thanks,
>> Renout
>>
>> On Thu, Dec 15, 2016 at 11:01 AM, Simone Tiraboschi 
>> wrote:
>>
>>>
>>>
>>> On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits  wrote:
>>>
 Hi All,

 We have an environment which we want to upgrade to ovirt 4.0. This was
 initially installed at 3.5, then upgraded to 3.6.
 Problem we're facing is that for an upgrade to 4.0 a compatibility
 version of 3.6 is required.
 When changing the cluster compatibility version of the 'Default'
 cluster from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
 compatibility version when a VM is active. please shutdown all VMs in the
 cluster."
 Even when we shutdown all vm's, except for the Hosted Engine we get
 this error.
 On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
 In the engine logs we have the following error: "2016-12-08
 13:00:18,139 WARN  [org.ovirt.engine.core.bll.st
 orage.UpdateStoragePoolCommand] (default task-25) [77a50037]
 CanDoAction of action 'UpdateStoragePool' failed for user admin@internal.
 Reasons: VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
 Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSI
 ON_BIGGER_THAN_CLUSTERS"

 So problem would be that the HE is in the Default cluster. But how does
 one change the compatibility version when the HE is down?
 I've tried shutting down the engine, changing the version in the DB:
 "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
 engine again.

 When I do that and try to start a VM:
 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in
 any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
 should be described in NUMA config
 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
 supported by: rhel6.5.0

 So that change was rolled back to compatibilty 3.5. After that we we're
 able to start vm's again.
 Please note that all hosts and HE are EL7.

 To me this doesn't seem like a strange set-up or upgrade path. Would it
 be possible to start the HE in another cluster than Default or is there a
 way to bypass the vdsClient list check?
 What is the recommended way of upgrading the HE in this case?

>>>
>>> Please take a look here:
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1364557
>>>
>>>

 Kind regards,
 Renout

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>
>


Re: [ovirt-users] [HEADS UP] CentOS 7.3 is rolling out, need qemu-kvm-ev 2.6

2016-12-15 Thread Gianluca Cecchi
On Wed, Dec 14, 2016 at 5:33 PM, Paolo Bonzini  wrote:

>
>
> On 13/12/2016 18:28, Gianluca Cecchi wrote:
> > - So I have to try the mix of 7.3 kernel and qemu 2.6, correct?
>
> Yes, please.  If it works, the problem is transient.
>
> Thanks,
>
> Paolo
>
> > Perhaps it was a problem only during install and not happening now that
> > the VM has been deployed?
> > Gianluca
>


OK. So I reconfigured the kernel used during "hosted-engine --deploy", that
is 3.10.0-514.el7.x86_64.
I also kept the qemu-kvm-ev-2.6.0-27.1.el7.x86_64 version.

After rebooting and exiting maintenance, the engine VM was able to start
with this configuration as well.
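A quick way to confirm the host is really running the intended kernel/qemu mix (a generic sketch, nothing oVirt-specific):

```shell
uname -r                                    # expect 3.10.0-514.el7.x86_64 here
rpm -q qemu-kvm-ev libvirt-daemon-kvm vdsm  # confirm the package versions in use
```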
This is what I see in the qemu log file on the host:

2016-12-15 09:43:41.365+: starting up libvirt version: 2.0.0, package:
10.el7 (CentOS BuildSystem <
http://bugs.centos.org>, 2016-11-12-02:15:12, c1bm.rdu2.centos.org), qemu
version: 2.6.0 (qemu-kvm-ev-2
.6.0-27.1.el7), hostname: ovirt41.localdomain.local
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
guest=HostedEngine,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngine/master-key.aes
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Broadwell,+rtm,+hle -m
6184 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid 2a262cdc-9102-4061-841f-ec64333cdad2 -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=564D3726-E55D-5C11-DC45-CA1A50480E83,uuid=2a262cdc-9102-4061-841f-ec64333cdad2'
-nographic -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-HostedEngine/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-12-15T09:43:41,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive
file=/var/run/vdsm/storage/3e7d4336-c2e1-4fdc-99e7-81a0e69cf3a3/286a8fda-b77d-48b8-80a9-15b63e5321a2/63bfeca6-dc92-4145-845d-e785a18de949,format=raw,if=none,id=drive-virtio-disk0,serial=286a8fda-b77d-48b8-80a9-15b63e5321a2,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:08:cc:5a,bus=pci.0,addr=0x2
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/2a262cdc-9102-4061-841f-ec64333cdad2.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/2a262cdc-9102-4061-841f-ec64333cdad2.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev
socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/2a262cdc-9102-4061-841f-ec64333cdad2.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
-msg timestamp=on

Perhaps during host-deploy the engine VM is launched with a different set of
options that could have caused problems?

I was also able to install a "normal" VM (CentOS 6.8 netinstall) into the
infra.
During the initial "Run Once" phase, the command executed was:

qemu  12862  1 27 11:19 ?00:00:08 /usr/libexec/qemu-kvm
-name guest=centos68,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-centos68/master-key.aes
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Broadwell,+rtm,+hle -m
size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
node,nodeid=0,cpus=0,mem=2048 -uuid 5c178328-2114-49bc-9acf-e7d93e06c0a7
-smbios type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=564D3726-E55D-5C11-DC45-CA1A50480E83,uuid=5c178328-2114-49bc-9acf-e7d93e06c0a7
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-centos68/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2016-12-15T10:19:40,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
menu=on,splash-time=1,strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive

Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Simone Tiraboschi
On Thu, Dec 15, 2016 at 12:04 PM, Renout Gerrits  wrote:

> Hi Simone,
>
> Do you mean the following?
>
> - create a new cluster with version 3.6.
> - Migrate HE to new cluster
> - Shutdown all VM's in old cluster
> - change compatibility version of old cluster to 3.6
> - migrate HE back to old old cluster
>
> In this case the old cluster still thinks that the HE is running in it due
> to the ChangeVmCluster action that fails. From what I see this is fixed in
> 4.02: https://bugzilla.redhat.com/show_bug.cgi?id=1351533
> But I can't to upgrade 4 yet. Do you know if this fix has been back ported
> to 3.6?
>

Yes, it has been backported to 3.6.9:
https://gerrit.ovirt.org/#/c/63377/2

Another option is just to add a new hosted-engine host to the new 3.6
cluster and restart the engine VM there from the hosted-engine CLI.
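A hedged sketch of that second option, run from the hosts' shells (global maintenance keeps the HA agents from restarting the VM behind your back):

```shell
hosted-engine --set-maintenance --mode=global  # on any hosted-engine host
hosted-engine --vm-shutdown     # on the host currently running the engine VM
hosted-engine --vm-start        # on the new host in the 3.6 cluster
hosted-engine --set-maintenance --mode=none    # re-enable HA once the engine is up
```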

For me it's hard to just try as I will need a maintenance window to
> shutdown all vm's.
>
> Or do you mean something completely different?
>
> Thanks,
> Renout
>
> On Thu, Dec 15, 2016 at 11:01 AM, Simone Tiraboschi 
> wrote:
>
>>
>>
>> On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits  wrote:
>>
>>> Hi All,
>>>
>>> We have an environment which we want to upgrade to ovirt 4.0. This was
>>> initially installed at 3.5, then upgraded to 3.6.
>>> Problem we're facing is that for an upgrade to 4.0 a compatibility
>>> version of 3.6 is required.
>>> When changing the cluster compatibility version of the 'Default' cluster
>>> from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
>>> compatibility version when a VM is active. please shutdown all VMs in the
>>> cluster."
>>> Even when we shutdown all vm's, except for the Hosted Engine we get this
>>> error.
>>> On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
>>> In the engine logs we have the following error: "2016-12-08 13:00:18,139
>>> WARN  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
>>> (default task-25) [77a50037] CanDoAction of action 'UpdateStoragePool'
>>> failed for user admin@internal. Reasons: 
>>> VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
>>> Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSI
>>> ON_BIGGER_THAN_CLUSTERS"
>>>
>>> So problem would be that the HE is in the Default cluster. But how does
>>> one change the compatibility version when the HE is down?
>>> I've tried shutting down the engine, changing the version in the DB:
>>> "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
>>> engine again.
>>>
>>> When I do that and try to start a VM:
>>> 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in any
>>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>>> 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
>>> should be described in NUMA config
>>> 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
>>> supported by: rhel6.5.0
>>>
>>> So that change was rolled back to compatibilty 3.5. After that we we're
>>> able to start vm's again.
>>> Please note that all hosts and HE are EL7.
>>>
>>> To me this doesn't seem like a strange set-up or upgrade path. Would it
>>> be possible to start the HE in another cluster than Default or is there a
>>> way to bypass the vdsClient list check?
>>> What is the recommended way of upgrading the HE in this case?
>>>
>>
>> Please take a look here:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1364557
>>
>>
>>>
>>> Kind regards,
>>> Renout
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>


Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Renout Gerrits
Hi Simone,

Do you mean the following?

- create a new cluster with version 3.6.
- Migrate HE to new cluster
- Shutdown all VM's in old cluster
- change compatibility version of old cluster to 3.6
- migrate HE back to the old cluster

In this case the old cluster still thinks that the HE is running in it due
to the ChangeVmCluster action that fails. From what I see this is fixed in
4.02: https://bugzilla.redhat.com/show_bug.cgi?id=1351533
But I can't upgrade to 4 yet. Do you know if this fix has been backported
to 3.6? For me it's hard to just try, as I will need a maintenance window to
shut down all VMs.

Or do you mean something completely different?

Thanks,
Renout

On Thu, Dec 15, 2016 at 11:01 AM, Simone Tiraboschi 
wrote:

>
>
> On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits  wrote:
>
>> Hi All,
>>
>> We have an environment which we want to upgrade to ovirt 4.0. This was
>> initially installed at 3.5, then upgraded to 3.6.
>> Problem we're facing is that for an upgrade to 4.0 a compatibility
>> version of 3.6 is required.
>> When changing the cluster compatibility version of the 'Default' cluster
>> from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
>> compatibility version when a VM is active. please shutdown all VMs in the
>> cluster."
>> Even when we shutdown all vm's, except for the Hosted Engine we get this
>> error.
>> On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
>> In the engine logs we have the following error: "2016-12-08 13:00:18,139
>> WARN  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
>> (default task-25) [77a50037] CanDoAction of action 'UpdateStoragePool'
>> failed for user admin@internal. Reasons: 
>> VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
>> Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSI
>> ON_BIGGER_THAN_CLUSTERS"
>>
>> So problem would be that the HE is in the Default cluster. But how does
>> one change the compatibility version when the HE is down?
>> I've tried shutting down the engine, changing the version in the DB:
>> "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
>> engine again.
>>
>> When I do that and try to start a VM:
>> 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in any
>> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
>> 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
>> should be described in NUMA config
>> 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
>> supported by: rhel6.5.0
>>
>> So that change was rolled back to compatibilty 3.5. After that we we're
>> able to start vm's again.
>> Please note that all hosts and HE are EL7.
>>
>> To me this doesn't seem like a strange set-up or upgrade path. Would it
>> be possible to start the HE in another cluster than Default or is there a
>> way to bypass the vdsClient list check?
>> What is the recommended way of upgrading the HE in this case?
>>
>
> Please take a look here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1364557
>
>
>>
>> Kind regards,
>> Renout
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


Re: [ovirt-users] dedicated gluster storage network

2016-12-15 Thread Sahina Bose
On Wed, Dec 14, 2016 at 9:48 PM, Nathanaël Blanchet 
wrote:

> Hi,
>
> I changed the previous all in one network to a dedicated gluster storage
> at the network level. But when doing netstat (or gluster peer status), I
> can see that listening connection still use the previous vlan and gluster
> bricks are probed on this vlan.
>
> I detached them with ovirt or manually to probe them an other time, but
> they are still probed on the initial ovirtmgmt vlan.
>
> What is this gluster network supposed to do in reality?
>

Gluster does not have full support for network segregation. What we achieve
with the gluster network in oVirt is to have the gluster cluster peers probed
via the management (ovirtmgmt) network, while, when creating volumes, the
bricks are addressed using the IP addresses assigned to the gluster network.
This ensures that the brick data traffic is on a separate network.
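As an illustration of that split (hostnames and addresses below are invented), peers are probed via their management-network names while bricks are defined on the gluster-network addresses:

```shell
gluster peer probe host2.mgmt.example.com   # peer handshake over ovirtmgmt
gluster peer probe host3.mgmt.example.com
# Bricks use the gluster-network IPs, so data traffic stays off ovirtmgmt:
gluster volume create data replica 3 \
  10.10.10.1:/gluster/brick1 10.10.10.2:/gluster/brick1 10.10.10.3:/gluster/brick1
gluster volume start data
```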


>
>
> --
> Nathanaël Blanchet
>
> Supervision réseau
> Pôle Infrastrutures Informatiques
> 227 avenue Professeur-Jean-Louis-Viala
> 34193 MONTPELLIER CEDEX 5
> Tél. 33 (0)4 67 54 84 55
> Fax  33 (0)4 67 54 84 14
> blanc...@abes.fr
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Unable to change compatibility version due to hosted engine

2016-12-15 Thread Simone Tiraboschi
On Thu, Dec 15, 2016 at 10:33 AM, Renout Gerrits  wrote:

> Hi All,
>
> We have an environment which we want to upgrade to ovirt 4.0. This was
> initially installed at 3.5, then upgraded to 3.6.
> Problem we're facing is that for an upgrade to 4.0 a compatibility version
> of 3.6 is required.
> When changing the cluster compatibility version of the 'Default' cluster
> from 3.5 to 3.6 we get the error in the gui: "Cannot change cluster
> compatibility version when a VM is active. please shutdown all VMs in the
> cluster."
> Even when we shutdown all vm's, except for the Hosted Engine we get this
> error.
> On the hosts a 'vdsClient -s 0 list' is done which will return the HE.
> In the engine logs we have the following error: "2016-12-08 13:00:18,139
> WARN  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
> (default task-25) [77a50037] CanDoAction of action 'UpdateStoragePool'
> failed for user admin@internal. Reasons: 
> VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
> Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_
> VERSION_BIGGER_THAN_CLUSTERS"
>
> So problem would be that the HE is in the Default cluster. But how does
> one change the compatibility version when the HE is down?
> I've tried shutting down the engine, changing the version in the DB:
> "UPDATE vds_groups SET compatibility_version='3.6';" and starting the
> engine again.
>
> When I do that and try to start a VM:
> 2016-12-09T13:30:21.346740Z qemu-kvm: warning: CPU(s) not present in any
> NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
> 2016-12-09T13:30:21.346883Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config
> 2016-12-09T13:30:21.355699Z qemu-kvm: "-memory 'slots|maxmem'" is not
> supported by: rhel6.5.0
>
> So that change was rolled back to compatibilty 3.5. After that we we're
> able to start vm's again.
> Please note that all hosts and HE are EL7.
>
> To me this doesn't seem like a strange set-up or upgrade path. Would it be
> possible to start the HE in another cluster than Default or is there a way
> to bypass the vdsClient list check?
> What is the recommended way of upgrading the HE in this case?
>

Please take a look here:
https://bugzilla.redhat.com/show_bug.cgi?id=1364557


>
> Kind regards,
> Renout
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Ovirt uploader

2016-12-15 Thread Luca 'remix_tj' Lorenzetto
On Thu, Dec 15, 2016 at 9:41 AM, Gomez Asier  wrote:
> If I run centos~$showmount -e 172.16.8.169
>
> Output:
> rpc mount export: RPC: Unable to receive; errno = No route to host
>
> What does it mean?
>

It means you can't reach that IP from your manager VM. Check the network
configuration of the manager. I had the same issue and solved it by adding a
secondary NIC to the manager VM in the same network as the NFS storage
(172.16.8.x in your setup).
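Some basic reachability checks to run from the manager VM, using the IP from this thread:

```shell
ip route get 172.16.8.169   # which interface/gateway would be used, if any
ping -c 3 172.16.8.169      # is the storage network reachable at all?
showmount -e 172.16.8.169   # should list exports once routing/firewall allow it
```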

Luca

-- 

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 


[ovirt-users] Ovirt DB

2016-12-15 Thread Koen Vanoppen
Dear All,

I'm working on a disaster recovery procedure for oVirt. My question is the
following:
In the worst case we completely lose our oVirt environment,
so we set up a new oVirt management host and restore the DB (I do a daily
backup of the oVirt DB; we are at 4.0.4.4-1.el7.centos).

What will this restore? All my hypervisors (which will be down, of
course) and their settings? The VMs (settings)?

What other things do I need to add to the DR plan to be completely safe?
There was this project about oVirt DR, but it seems that the repo isn't
working:
https://github.com/xandradx/ovirt-engine-disaster-recovery
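For the engine side specifically, `engine-backup` covers the database plus engine configuration; here is a hedged sketch of a daily backup and a DR restore (paths are examples, and the flags should be checked against `engine-backup --help` on your version):

```shell
# Daily backup on the running engine:
engine-backup --mode=backup --scope=all \
  --file=/backup/engine-$(date +%F).tar.bz2 --log=/backup/engine-backup.log

# On a freshly installed management host of the same engine version:
engine-backup --mode=restore --file=/backup/engine-2016-12-15.tar.bz2 \
  --log=/backup/engine-restore.log --provision-db --restore-permissions
engine-setup    # re-run setup to finish wiring up the restored configuration
```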

Kind regards,

Koen


Re: [ovirt-users] Hosted Engine won't deploy

2016-12-15 Thread Martin Sivak
> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
> Center say that they are running in 4.0 compatibility mode, so I don't
> understand this error.

Do you see the hosted engine storage domain and the hosted engine VM
in the webadmin? Both should be imported automatically on 3.6+
compatibility level when a master storage domain is added to the
system.

> Alarmingly, I was
> warned that this is deprecated and will not be possible in oVirt 4.1.

We have a nice UI that allows you to control the hosted engine deployment
to additional hosts directly from the webadmin. You will be able to
add a hosted-engine-capable host by just marking it as such in the Add
Host dialog.

--
Martin Sivak
SLA / oVirt

On Wed, Dec 14, 2016 at 11:05 PM, Gervais de Montbrun
 wrote:
> Hi all,
>
> I had to reinstall one of my hosts today and I noticed an issue. The error
> message was:
>
> Ovirt2:
>
> Cannot edit Host. You are using an unmanaged hosted engine VM. Please
> upgrade the cluster level to 3.6 and wait for the hosted engine storage
> domain to be properly imported.
>
> I am running oVirt 4.0.5 and have a hosted engine and Cluster and Data
> Center say that they are running in 4.0 compatibility mode, so I don't
> understand this error. I did get the host setup by running `hosted-engine
> --deploy` and walking through the command line options. Alarmingly, I was
> warned that this is deprecated and will not be possible in oVirt 4.1.
>
> Any suggestions as to what I should do to sort out my issue?
>
> Cheers,
> Gervais
>
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.phx.ovirt.org/mailman/listinfo/users
>