[ovirt-users] Live Migration FAIL

2022-07-05 Thread m . rohweder
Hi,

on my setup the live migration looks great, but:

all systems finish their migration tasks and run on the other host, but most of
them (random instances, not the same ones every time) go into a hung state
(100% CPU inside the VM and no response) that only a reset can fix.

And I cannot find anything.

Greetings 
Michael


[ovirt-users] Live Migration Support with oVirt 4.0.4

2019-05-14 Thread Anantha Raghava

Hi,

In version 4.0.2, Live Migration was not supported with Open Virtual Switch,
and we were informed that migration support with OVS would be included in
version 4.0.4. Is live migration supported with OVS in the current version,
that is 4.0.4?


--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services








[ovirt-users] Live migration failed

2019-03-12 Thread Bong Shau Fui
Hi:
   I deployed 2 oVirt hosts and an oVirt engine in a nested KVM server.  I have a
Windows VM set up and tried to perform a live migration, but it failed.  I checked
the hosts and found them meeting the live migration requirements, or at least
that's what I thought.  I took the requirements from the document below.
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.5/html/Administration_Guide/sect-Migrating_Virtual_Machines_Between_Hosts.html
The hosts, both source and destination, are quite empty, with only the hosted
engine, one CentOS VM and the Windows VM in the cluster.  I can live migrate the
CentOS VM successfully.  But when I tried live migration on the hosted-engine VM,
it failed immediately with the message "No available host to migrate VMs to".
When I tried to migrate the Windows VM, the dialog that lets me choose the
destination host popped up, but the migration failed after a while.
   I'd like to ask where I can get more information about live migration apart
from /var/log/ovirt-engine/engine.log.  I also checked the oVirt hosts'
/var/log/vdsm/vdsm.log but found nothing pointing to the reason why it failed.
   Below is the extract from /var/log/ovirt-engine/engine.log from when the
live migration took place:

2019-03-12 14:37:58,159+08 INFO  
[org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-131) [] 
User admin@internal successfully logged in with scopes: ovirt-app-api 
ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search 
ovirt-ext=token-info:validate ovirt-ext=token:password-access
2019-03-12 14:37:58,450+08 INFO  
[org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) [7730] Lock freed to 
object 
'EngineLock:{exclusiveLocks='[d113be83-2740-4246-a1f2-b9344889c3cf=PROVIDER]', 
sharedLocks=''}'
2019-03-12 14:38:02,544+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-50) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:38:12,677+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-16) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:38:21,650+08 INFO  
[org.ovirt.engine.core.bll.aaa.SessionDataContainer] 
(EE-ManagedThreadFactory-engineScheduled-Thread-51) [] Not removing session 
'xDiHqqa6l+g8cngM26TTCfW7NeLN3WgWChsx28wUM391vAngSxwtyCkLbQxZR1AbJ5I+2bkPZNQijMUk0jLZcA==',
 session has running commands for user 'admin@internal-authz'.
2019-03-12 14:38:22,782+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-49) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:38:33,018+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-74) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:38:43,261+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-59) [7730] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:38:53,528+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-13) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:39:03,759+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-43) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.
2019-03-12 14:39:14,011+08 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] 
(EE-ManagedThreadFactory-engineScheduled-Thread-60) [] 
BaseAsyncTask::onTaskEndSuccess: Task '67631cf6-4c75-4681-88ef-fd4af56c0363' 
(Parent Command 'RemoveDisk', Parameters Type 
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended 
successfully.

[ovirt-users] Live Migration broken in 4.2.6 under OVS/OVN networking

2018-09-24 Thread Davide Butti
Hello; despite many tests and tentative adjustments, I'm currently unable to 
live migrate VMs on oVirt 4.2.6.

The vdsm.log contains a "failed to migrate" error that points to an attempt to
access a non-existent network port "TestOne". This is in fact the name of the
(externally defined) network, and is nowhere to be seen as an OVS port.

2018-09-24 14:32:57,059+0000 ERROR (migsrc/4c0255b5) [virt.vm] (vmId='4c0255b5-0f52-4da7-ac97-d54d815cd6ab') Cannot get interface MTU on 'TestOne': No such device (migration:290)
2018-09-24 14:32:57,793+0000 ERROR (migsrc/4c0255b5) [virt.vm] (vmId='4c0255b5-0f52-4da7-ac97-d54d815cd6ab') Failed to migrate (migration:455)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 437, in _regular_run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 509, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 587, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529, in _perform_migration
    self._migration_flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 98, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 130, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1746, in migrateToURI3
    if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
libvirtError: Cannot get interface MTU on 'TestOne': No such device

The setup was previously working under OVS, and I have 
"migration_ovs_hook_enabled = true" under /etc/vdsm/vdsm.conf

Do I need to change anything? Is this supposed to work in the first place?

Many thanks for your help, have a nice day


[ovirt-users] live migration of hosted engine between two hosts

2018-08-21 Thread Douglas Duckworth
Hi

I am trying to live migrate my hosted engine between two hosts.

Both hosts are now up.

The hosted engine exists on shared NFS storage mounted on both hypervisors.

But when I tried to migrate the VM, I was told that's not possible.

Could this be because I never defined a migration network?  I tried setting
one up in the oVirt UI as described at
https://ovirt.org/documentation/admin-guide/chap-Logical_Networks/ though
many of these options have changed.

Thanks,

Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit
Weill Cornell Medicine
1300 York - LC-502
E: d...@med.cornell.edu
O: 212-746-6305
F: 212-746-8690


[ovirt-users] Live Migration via NFS

2018-08-07 Thread Douglas Duckworth
Hi

I haven't used oVirt in several years, so I wanted to ask how live migration
may have changed.

Can NFS facilitate live migration in a cluster of two hosts, each of which has
the NFS share mounted?

The hosts would have locally attached storage, which would be the original
location of the VMs.


Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt

2018-02-11 Thread Luca 'remix_tj' Lorenzetto
What you're looking for is called fault tolerance in other hypervisors.

As far as I know, oVirt doesn't implement such a solution.

But if your system can't cope with the failure recovery done by the high
availability options, you should consider revising your application
architecture if you want to keep running on oVirt.

Luca

On 10 Feb 2018 at 8:31 AM, "Ranjith P" wrote:

Hi,

>>Who's shutting down the hypervisor? (Or perhaps it is shutdown
externally, due to overheating or otherwise?)

We need continuous availability of VMs in our production setup. If a
hypervisor goes down due to a hardware failure or load, the VMs on that
hypervisor reboot and are started on the available hypervisors. This works as
expected, but it disrupts the VMs. Can you suggest a solution in this case?
Can we achieve this using glusterfs?

Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android


On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul wrote:


On Fri, Feb 9, 2018 at 9:25 PM, ranjithsp...@yahoo.com wrote:

Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM)
when a hypervisor goes down in oVirt?


I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy
to write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down - and fast enough
not to be terminated by systemd.
This is a bit more challenging.

Who's shutting down the hypervisor? (Or perhaps it is shutdown externally,
due to overheating or otherwise?)
Y.


Is it possible using glusterfs? Then how?

Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android




Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt

2018-02-09 Thread Ranjith P
Hi,
>>Who's shutting down the hypervisor? (Or perhaps it is shutdown externally,
>>due to overheating or otherwise?)
We need continuous availability of VMs in our production setup. If a
hypervisor goes down due to a hardware failure or load, the VMs on that
hypervisor reboot and are started on the available hypervisors. This works as
expected, but it disrupts the VMs. Can you suggest a solution in this case?
Can we achieve this using glusterfs?
Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android

On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul wrote:

On Fri, Feb 9, 2018 at 9:25 PM, ranjithsp...@yahoo.com wrote:

Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM)
when a hypervisor goes down in oVirt?

I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy to
write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down - and fast enough not to be
terminated by systemd.
This is a bit more challenging.

Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due
to overheating or otherwise?)
Y.

Is it possible using glusterfs? Then how?

Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android


Re: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt

2018-02-09 Thread Yaniv Kaul
On Fri, Feb 9, 2018 at 9:25 PM, ranjithsp...@yahoo.com wrote:

> Hi,
> Can anyone suggest how to set up VM live migration (without restarting the
> VM) when a hypervisor goes down in oVirt?
>

I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy
to write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right
order - well before libvirt and VDSM shut down - and fast enough not to be
terminated by systemd.
This is a bit more challenging.

Who's shutting down the hypervisor? (Or perhaps it is shutdown externally,
due to overheating or otherwise?)
Y.
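
A minimal sketch of part 1, using the oVirt Python SDK (ovirtsdk4) - the
engine URL, credentials, and host name below are hypothetical placeholders,
and the engine scheduler is left to pick the destination host:

# Evacuate all running VMs from one host via the oVirt Python SDK (sketch).
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',                                  # placeholder
    insecure=True,
)
try:
    vms_service = connection.system_service().vms_service()
    # Every VM currently running on the host we want to drain.
    for vm in vms_service.list(search='host=host1 and status=up'):
        # No destination given: the engine scheduler picks a target host.
        vms_service.vm_service(vm.id).migrate()
finally:
    connection.close()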


> Is it possible using glusterfs? Then how?
>
> Thanks & Regards
> Ranjith
>
> Sent from Yahoo Mail on Android
> 
>


[ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt

2018-02-09 Thread ranjithsp...@yahoo.com
Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM)
when a hypervisor goes down in oVirt?
Is it possible using glusterfs? Then how?
Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android


Re: [ovirt-users] Live migration without Shared Storage

2017-12-28 Thread Michal Skrivanek


> On 28 Dec 2017, at 20:37, FERNANDO FREDIANI  wrote:
> 
> Are you talking about all kinds of Storage (iSCSI, FC, NFS and 
> Localstorage/POSIX) ?
> 
yes

> Because I believe you may be able to specify the destination path on the 
> destination Host and when working with Localstorage/POSIX that may be simpler.
> 
yes, it is indeed simpler, but it's still not going to work out of the box
right now. It's a non-trivial feature to do properly.
> Fernando
> 
> On 28/12/2017 17:32, Michal Skrivanek wrote:
>> 
>> 
>>> On 28 Dec 2017, at 19:56, FERNANDO FREDIANI wrote:
>>> 
>>> Has anyone tried the command below under the hood between two oVirt Node 
>>> (in the same Datacenter or between two different (local) ones) ? Does it 
>>> work ?
>> 
>> No, it does not with oVirt; oVirt manages storage differently from plain
>> libvirt.
>> 
>>> virsh migrate --live --persistent --undefinesource --copy-storage-all \
>>> --verbose --desturi  
>>> This is such a fantastic feature for certain scenarios; it may help a lot
>>> with maintenance, or even migration between hosts with Local Storage, to
>>> minimize downtime and above all the hassle of having to power off a VM,
>>> export it to an Export Datastore, unmount that, mount it on the other
>>> Host/Datacenter, import it and power it on.
>>> 
>>> Thanks
>>> Regards
>>> 
>>> Fernando
>>> 
>>> [1] Ref:
>>> https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/



Re: [ovirt-users] Live migration without Shared Storage

2017-12-28 Thread FERNANDO FREDIANI
Are you talking about all kinds of Storage (iSCSI, FC, NFS and 
Localstorage/POSIX) ?


Because I believe you may be able to specify the destination path on the 
destination Host and when working with Localstorage/POSIX that may be 
simpler.


Fernando


On 28/12/2017 17:32, Michal Skrivanek wrote:



On 28 Dec 2017, at 19:56, FERNANDO FREDIANI wrote:


Has anyone tried the command below, under the hood, between two oVirt
Nodes (in the same Datacenter or between two different (local) ones)?
Does it work?


No, it does not with oVirt; oVirt manages storage differently from
plain libvirt.



virsh migrate --live --persistent --undefinesource --copy-storage-all \
     --verbose --desturi  
This is such a fantastic feature for certain scenarios; it may help
a lot with maintenance, or even migration between hosts with Local Storage,
to minimize downtime and above all the hassle of having to power off
a VM, export it to an Export Datastore, unmount that, mount it on the other
Host/Datacenter, import it and power it on.


Thanks
Regards

Fernando

[1] Ref: 
https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/



Re: [ovirt-users] Live migration without Shared Storage

2017-12-28 Thread Michal Skrivanek


> On 28 Dec 2017, at 19:56, FERNANDO FREDIANI  wrote:
> 
> Has anyone tried the command below, under the hood, between two oVirt Nodes
> (in the same Datacenter or between two different (local) ones)? Does it work?

No, it does not with oVirt; oVirt manages storage differently from plain libvirt.

> virsh migrate --live --persistent --undefinesource --copy-storage-all \
> --verbose --desturi  
> This is such a fantastic feature for certain scenarios; it may help a lot
> with maintenance, or even migration between hosts with Local Storage, to
> minimize downtime and above all the hassle of having to power off a VM,
> export it to an Export Datastore, unmount that, mount it on the other
> Host/Datacenter, import it and power it on.
> 
> Thanks
> Regards
> 
> Fernando
> 
> [1] Ref: 
> https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/


[ovirt-users] Live migration without Shared Storage

2017-12-28 Thread FERNANDO FREDIANI
Has anyone tried the command below, under the hood, between two oVirt Nodes
(in the same Datacenter or between two different (local) ones)? Does it
work?


virsh migrate --live --persistent --undefinesource --copy-storage-all \
    --verbose --desturi  

This is such a fantastic feature for certain scenarios; it may help a
lot with maintenance, or even migration between hosts with Local Storage, to
minimize downtime and above all the hassle of having to power off a VM,
export it to an Export Datastore, unmount that, mount it on the other
Host/Datacenter, import it and power it on.


Thanks
Regards

Fernando

[1] Ref: 
https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/
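
For reference, a rough equivalent of that virsh invocation through the libvirt
Python bindings - a sketch only, with hypothetical host and domain names, and
(as Michal notes in the replies above) plain libvirt only, not something oVirt
supports on managed VMs:

import libvirt

# Flags mirroring --live --persistent --undefinesource --copy-storage-all.
flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PERSIST_DEST      # --persistent
         | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE   # --undefinesource
         | libvirt.VIR_MIGRATE_NON_SHARED_DISK)  # --copy-storage-all

src = libvirt.open('qemu:///system')
dst = libvirt.open('qemu+ssh://desthost.example.com/system')  # placeholder
dom = src.lookupByName('myvm')                                # placeholder
# Copies the disks while the guest keeps running, then switches over.
# Depending on versions, the destination may need pre-created disk images
# of the same size for the non-shared-disk copy to succeed.
dom.migrate(dst, flags)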


Re: [ovirt-users] Live migration without shared storage

2017-12-22 Thread Michal Skrivanek

> On 21 Dec 2017, at 17:14, FERNANDO FREDIANI  wrote:
> 
> That is certainly going to be a very welcome feature and, if it isn't yet,
> it should be at the top of the roadmap. For planned maintenance it solves
> most downtime problems.
> 
> 

It is on the roadmap [1], but there's no active work on it yet; oVirt is
primarily built around shared storage.

Thanks,
michal

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1326857
> Fernando
> 
> On 21/12/2017 12:19, Pujan Shah wrote:
>> We have a somewhat odd setup where some of our clients have dedicated hosts
>> and we also have some shared hosts. We can migrate client VMs from their
>> dedicated host to a shared host if we need to do some maintenance. We don't
>> have shared storage, and currently we are using XenServer, which supports
>> live migration without shared storage. We recently started looking into KVM
>> as an alternative and decided to try oVirt. To our surprise, KVM supports
>> live migration without shared storage but oVirt does not.
>> (https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/)
>>
>> I wanted to know if anyone has dealt with such a situation, and is this
>> something others are also looking for?
>>
>> Regards,
>> Pujan Shah
>> Systemadministration
>>
>> --
>> tel.: +49 (0) 221 / 95 168 - 74
>> mail: p...@dom.de
>> DOM Digital Online Media GmbH,
>> Bismarck Str. 60
>> 50672 Köln
>>
>> http://www.dom.de/
>>
>> Geschäftsführer: Markus Schulte
>> Handelsregister-Nr.: Amtsgericht Köln HRB 55347
>> UST.-Ident.Nr. DE 814 416 951


Re: [ovirt-users] Live migration without shared storage

2017-12-21 Thread FERNANDO FREDIANI
That is certainly going to be a very welcome feature and, if it isn't yet,
it should be at the top of the roadmap. For planned maintenance it solves
most downtime problems.


Fernando


On 21/12/2017 12:19, Pujan Shah wrote:
We have a somewhat odd setup where some of our clients have dedicated hosts
and we also have some shared hosts. We can migrate client VMs from their
dedicated host to a shared host if we need to do some maintenance. We don't
have shared storage, and currently we are using XenServer, which supports
live migration without shared storage. We recently started looking into KVM
as an alternative and decided to try oVirt. To our surprise, KVM supports
live migration without shared storage but oVirt does not.
(https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/)

I wanted to know if anyone has dealt with such a situation, and is this
something others are also looking for?

Regards,
Pujan Shah
Systemadministration

--
tel.: +49 (0) 221 / 95 168 - 74
mail: p...@dom.de
DOM Digital Online Media GmbH,
Bismarck Str. 60
50672 Köln

http://www.dom.de/

Geschäftsführer: Markus Schulte
Handelsregister-Nr.: Amtsgericht Köln HRB 55347
UST.-Ident.Nr. DE 814 416 951




[ovirt-users] Live migration without shared storage

2017-12-21 Thread Pujan Shah
We have a somewhat odd setup where some of our clients have dedicated hosts
and we also have some shared hosts. We can migrate client VMs from their
dedicated host to a shared host if we need to do some maintenance. We don't
have shared storage, and currently we are using XenServer, which supports
live migration without shared storage. We recently started looking into KVM
as an alternative and decided to try oVirt. To our surprise, KVM supports
live migration without shared storage but oVirt does not. (
https://hgj.hu/live-migrating-a-virtual-machine-with-libvirt-without-a-shared-storage/)

I wanted to know if anyone has dealt with such a situation, and is this
something others are also looking for?

Regards,
Pujan Shah
Systemadministration

--
tel.: +49 (0) 221 / 95 168 - 74
mail: p...@dom.de
DOM Digital Online Media GmbH,
Bismarck Str. 60
50672 Köln

http://www.dom.de/

Geschäftsführer: Markus Schulte
Handelsregister-Nr.: Amtsgericht Köln HRB 55347
UST.-Ident.Nr. DE 814 416 951


[ovirt-users] Live migration error in 4.1.2 (next attempt)

2017-06-06 Thread Vadim

Can anybody help me solve this?

I'm having trouble with live migration: it always finishes with an error. Maybe
it is relevant that on the dashboard the cluster status is always N/A. The VM
can run on both hosts. After turning on debug logging for libvirt I got these
errors:

2017-06-06 09:41:04.842+0000: 1302: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:42:04.847+0000: 1305: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:43:04.850+0000: 1304: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:04.841+0000: 1301: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:25.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:44:55.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:45:04.851+0000: 1303: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:45:25.373+0000: 10320: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:46:04.852+0000: 1302: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:47:04.858+0000: 1305: error : qemuDomainObjBeginJobInternal:3107 : Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainMigratePrepare3Params)
2017-06-06 09:47:19.950+0000: 1263: error : qemuMonitorIO:695 : internal error: End of file from monitor
2017-06-06 09:47:19.951+0000: 1263: error : qemuProcessReportLogError:1810 : internal error: qemu unexpectedly closed the monitor:
2017-06-06T09:40:26.681446Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15


oVirt 4.1.1, clean install, upgraded to 4.1.2.

I tried different migration policies, but all of them ended in an error.

The libvirt debug log is attached.

# rpm -qa | grep -e libvirt -e qemu | sort

centos-release-qemu-ev-1.0-1.el7.noarch
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
libvirt-client-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-interface-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-network-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-secret-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-driver-storage-2.0.0-10.el7_3.9.x86_64
libvirt-daemon-kvm-2.0.0-10.el7_3.9.x86_64
libvirt-lock-sanlock-2.0.0-10.el7_3.9.x86_64
libvirt-python-2.0.0-2.el7.x86_64
qemu-guest-agent-2.5.0-3.el7.x86_64
qemu-img-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-common-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-ev-2.6.0-28.el7_3.9.1.x86_64
qemu-kvm-tools-ev-2.6.0-28.el7_3.9.1.x86_64
 

--
Thanks,
Vadim





migration.tar.bz2
Description: application/bzip


Re: [ovirt-users] live migration between datacenters with shared storage

2017-06-01 Thread Charles Kozler
My only real concern with a detach and attach is the "what if": if the upgrade
of the storage domain does not go well, I will have to recover my entire
storage from backup.

On Thu, Jun 1, 2017 at 2:50 PM, Yaniv Kaul  wrote:

>
>
> On Thu, Jun 1, 2017 at 4:55 PM, Adam Litke  wrote:
>
>> You cannot migrate VMs between Datacenters.  I think an export domain
>> will be your easiest option, but there may be a way to upgrade in place (i.e.
>> upgrade the engine while VMs are running, then upgrade the cluster), though I
>> am not an expert in this area.
>>
>
> Why is an export domain better than detach and attach a storage domain?
> Y.
>
>
>>
>> On Wed, May 31, 2017 at 4:08 PM, Charles Kozler wrote:
>>
>>> I couldn't find a definitive answer on this, so I would like to inquire here.
>>>
>>> I have gluster on my storage backend exporting the volume from a single
>>> node via NFS
>>>
>>> I have a DC of 4.0 and I would like to upgrade to 4.1. I would ideally
>>> like to take one node out of the cluster and build a 4.1 datacenter. Then
>>> live migrate VMs from the 4.0 DC over to the 4.1 DC with zero downtime to
>>> the VMs
>>>
>>> Is this possible? Or would I be safer to export/import VMs?
>>>
>>> Thanks!
>>>
>>
>>
>> --
>> Adam Litke
>>


Re: [ovirt-users] live migration between datacenters with shared storage

2017-06-01 Thread Yaniv Kaul
On Thu, Jun 1, 2017 at 4:55 PM, Adam Litke  wrote:

> You cannot migrate VMs between Datacenters.  I think an export domain will
> be your easiest option, but there may be a way to upgrade in place (i.e.
> upgrade the engine while VMs are running, then upgrade the cluster), though I
> am not an expert in this area.
>

Why is an export domain better than detach and attach a storage domain?
Y.


>
> On Wed, May 31, 2017 at 4:08 PM, Charles Kozler wrote:
>
>> I couldn't find a definitive answer on this, so I would like to inquire here.
>>
>> I have gluster on my storage backend exporting the volume from a single
>> node via NFS
>>
>> I have a DC of 4.0 and I would like to upgrade to 4.1. I would ideally
>> like to take one node out of the cluster and build a 4.1 datacenter. Then
>> live migrate VMs from the 4.0 DC over to the 4.1 DC with zero downtime to
>> the VMs
>>
>> Is this possible? Or would I be safer to export/import VMs?
>>
>> Thanks!
>>
>
>
> --
> Adam Litke
>


Re: [ovirt-users] live migration between datacenters with shared storage

2017-06-01 Thread Adam Litke
You cannot migrate VMs between Datacenters.  I think an export domain will
be your easiest option, but there may be a way to upgrade in place (i.e.
upgrade the engine while VMs are running, then upgrade the cluster), though I
am not an expert in this area.

On Wed, May 31, 2017 at 4:08 PM, Charles Kozler wrote:

> I couldn't find a definitive answer on this, so I would like to inquire here.
>
> I have gluster on my storage backend exporting the volume from a single
> node via NFS
>
> I have a DC of 4.0 and I would like to upgrade to 4.1. I would ideally
> like to take one node out of the cluster and build a 4.1 datacenter. Then
> live migrate VMs from the 4.0 DC over to the 4.1 DC with zero downtime to
> the VMs
>
> Is this possible? Or would I be safer to export/import VMs?
>
> Thanks!
>


-- 
Adam Litke


[ovirt-users] live migration between datacenters with shared storage

2017-05-31 Thread Charles Kozler
I couldn't find a definitive answer on this, so I would like to inquire here.

I have gluster on my storage backend exporting the volume from a single
node via NFS

I have a 4.0 DC and I would like to upgrade to 4.1. I would ideally like
to take one node out of the cluster and build a 4.1 datacenter, then live
migrate VMs from the 4.0 DC over to the 4.1 DC with zero downtime for the VMs.

Is this possible? Or would I be safer to export/import VMs?

Thanks!


Re: [ovirt-users] Live migration error in 4.1.2

2017-05-30 Thread Francesco Romani
Hi,


On 05/30/2017 02:25 PM, Vadim wrote:
> Hi,
>
> oVirt 4.1.1 clean install upgraded to 4.1.2
>
>
> I'm having trouble migrating VMs in a 2-node cluster. The VM can run on both hosts.
>
> I tried different migration policies, but all of them ended in an error.
>
> vdsm and qemu logs with post-copy policy of source and destination attached.
>
> # rpm -qa | grep -e libvirt -e qemu | sort
>
> centos-release-qemu-ev-1.0-1.el7.noarch
> ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
> libvirt-client-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-config-nwfilter-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-interface-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-network-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-nodedev-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-nwfilter-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-qemu-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-secret-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-driver-storage-2.0.0-10.el7_3.9.x86_64
> libvirt-daemon-kvm-2.0.0-10.el7_3.9.x86_64
> libvirt-lock-sanlock-2.0.0-10.el7_3.9.x86_64
> libvirt-python-2.0.0-2.el7.x86_64
> qemu-guest-agent-2.5.0-3.el7.x86_64
> qemu-img-ev-2.6.0-28.el7_3.9.1.x86_64
> qemu-kvm-common-ev-2.6.0-28.el7_3.9.1.x86_64
> qemu-kvm-ev-2.6.0-28.el7_3.9.1.x86_64
> qemu-kvm-tools-ev-2.6.0-28.el7_3.9.1.x86_64

This:

2017-05-30T10:14:34.426783Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
2017-05-30 10:40:06.805+0000: initiating migration
qemu-kvm: hw/display/qxl.c:2133: qxl_pre_save: Assertion
`d->last_release_offset < d->vga.vram_size' failed.
2017-05-30 10:46:44.664+0000: shutting down

is a QEMU issue. Please file a bug [1] against qemu.

[1] As usual, it is advised to check on Bugzilla first whether the same issue
was already reported.

Bests,

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh



Re: [ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Markus Stockhausen
Hi Yaniv,

for better tracking I opened BZ 1413847.

Best regards.

Markus




Re: [ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Yaniv Kaul
On Jan 16, 2017 9:33 PM, "Markus Stockhausen" wrote:

Hi there,

maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt
4.0.6 but the cluster compatibility level is still 3.6.

We can migrate a VM from qemu 2.3 to 2.6
We cannot migrate a VM from qemu 2.6 to 2.3

What happens:

- qemu is started on the target host (centos 7.2)
- source qemu says: "initiating migration"
- dominfo on target gives:
Id:             21
Name:           testvm
UUID:           d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
OS Type:        hvm
State:          paused
CPU(s):         2
CPU time:       48.5s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     no
Autostart:      disabled
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c344,c836 (enforcing)

Has anyone experienced this behaviour? Or is it maybe desired?


It's not desired.
VDSM logs from both sides may help.
Y.


Current software versions:

centos 7.2 host:
- libvirt 1.2.17-13.el7_2.6
- qemu 2.3.0-31.el7.21.1

centos 7.3 host:
- libvirt 2.0.0-10.el7_3.2
- qemu 2.6.0-27.1.el7

Ovirt engine
- ovirt 4.0.6

Thanks in advance.

Markus


[ovirt-users] Live migration Centos 7.3 -> Centos 7.2

2017-01-16 Thread Markus Stockhausen
Hi there,

maybe I missed the discussion on the mailing list. Today we installed
a new CentOS host. Of course it has 7.3 and qemu 2.6 after a yum update.
It can be attached to our cluster without problems. We are running oVirt
4.0.6 but the cluster compatibility level is still 3.6.

We can migrate a VM from qemu 2.3 to 2.6 
We cannot migrate a VM from qemu 2.6 to 2.3

What happens:

- qemu is started on the target host (centos 7.2)
- source qemu says: "initiating migration"
- dominfo on target gives:
Id:             21
Name:           testvm
UUID:           d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b
OS Type:        hvm
State:          paused
CPU(s):         2
CPU time:       48.5s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     no
Autostart:      disabled
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c344,c836 (enforcing)

Has anyone experienced this behaviour? Or is it maybe desired?

Current software versions:

centos 7.2 host:
- libvirt 1.2.17-13.el7_2.6
- qemu 2.3.0-31.el7.21.1

centos 7.3 host:
- libvirt 2.0.0-10.el7_3.2
- qemu 2.6.0-27.1.el7

Ovirt engine
- ovirt 4.0.6

Thanks in advance.

Markus


[ovirt-users] Live Migration Support with oVirt 4.0.4

2016-10-01 Thread Anantha Raghava

Hi,

In version 4.0.2, Live Migration was not supported with Open Virtual Switch,
and we were informed that migration support with OVS would be included in
version 4.0.4. Is live migration supported with OVS in the current version,
that is 4.0.4?


--

Thanks & Regards,


Anantha Raghava

eXza Technology Consulting & Services





Re: [ovirt-users] live migration with openvswitch

2016-09-19 Thread Milan Zamazal
Michal Skrivanek  writes:

>> > I'm afraid that we are not yet ready to backport it to 4.0 - we found
>> > out that, as it is, it breaks migration for vmfex and external network
>> > providers; it also breaks when a buggy Engine db does not send a
>> > displayNetwork. But we plan to fix these issues quite soon.
>
> which “buggy” engine? There were changes in parameters, most of these issues
> are not relevant anymore since we ditched <3.6 though.
> Again it’s ok as long as it is clearly mentioned like "3.6 engine sends it in
> such and such parameter, we can drop it once we support 4.0+"

I think Edward means the problem when there is no display (and
migration) network set for a cluster in Engine.  This may happen due to
a former bug in Engine db scripts.  Vdsm apparently falls back on
ovirtmgmt in most cases so the problem is typically unnoticed.  But when
you look for displayNetwork explicitly in Vdsm, it's not there.

The bug may affect 4.0 installations until a db upgrade fix is created
and backported.


Re: [ovirt-users] live migration with openvswitch

2016-09-16 Thread Michal Skrivanek

> On 16 Sep 2016, at 14:36, Michal Skrivanek wrote:
> 
> 
>> On 15 Sep 2016, at 21:46, Edward Haas wrote:
>> 
>> 
>> 
>> On Thu, Sep 15, 2016 at 1:30 PM, Michal Skrivanek wrote:
>> 
>> > On 15 Sep 2016, at 10:11, Dan Kenigsberg wrote:
>> >
>> > On Wed, Sep 14, 2016 at 03:04:14PM +0200, Michal Skrivanek wrote:
>> >>
>> >>> On 09 Sep 2016, at 13:09, Edward Haas wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
>> >>> Hi.
>> >>>
>> >>> I'm trying to learn Ovirt 4 and have a problem with it.
>> >>>
>> >>> My cluster consists of 3 nodes. I use Openvswitch for network 
>> >>> connectivity. I have a HostedEngine and one additional VM in the cluster.
>> >>>
>> >>> When I try to migrate the VM to another node, it fails. From vdsm and 
>> >>> libvirtd logs I see that proper network interface on destination node 
>> >>> cannot be found. Libvirt tries to find Openvswitch bridge with name like 
>> >>> "vdsmbr_AOYiPtcT". It exists on source node, but it is unique on every 
>> >>> node, because it contains random part. Additionally, it changes on every 
>> >>> reboot.
>> >>>
>> >>> How this is supposed to work?
>> >>>
>> >>> --
>> >>> Pavel Levshin
>> >>>
>> >>>
>> >>>
>> >>> Hi Pavel,
>> >>>
>> >>> VM migration is supported on the master branch, however it has not been 
>> >>> ported to 4.0 yet.
>> >>
>> >>> You can either build VDSM from source (from master branch) or try to 
>> >>> apply this patch on what you have:
>> >>> https://gerrit.ovirt.org/#/c/59645
>> >>
>> >> That’s quite a horrible solution right now. I certainly would not like to 
>> >> see it in 4.0 (given the hacks around display).
>> 
>> What is horrible exactly?
>> It's not too late to propose other solutions.
> 
> if OVS is the next great feature, it should fit into the code accordingly.
> I.e. using hooks only when it’s absolutely necessary, and as a temporary
> measure only until the respective proper RFEs are implemented and available.
> E.g. when there is libvirt support missing we can add a qemu command line
> parameter ourselves, bypassing libvirt, but we should always have a clear plan
> (i.e. a bug) to move away from there as soon as the support is
> there (requested back then when we went with the hack).
> 
> Such things should be reviewed as soon as we get to a similar area: while
> modifying libvirt-hook.sh we can see that the original reason for the hook is
> no longer valid, as everything has been addressed, and so the hacky code
> should have been removed.
> It was easy to see that because there is a clear comment about dependent bugs
> and issues (though missed by all the reviewers, unfortunately!)
> Your new code doesn’t have anything like that and I have no idea what kind of 
> API or behavior we actually need, or whether appropriate requests have been filed
> on e.g. libvirt. That makes it very hard to revisit in the future by the next 
> random person.
> 
>> 
>> Display uses libvirt to resolve a network name to an IP address for it to 
>> bound to. But that works only for linux bridges.
>> That is limiting, especially now that we do not have a Linux bridge, but 
>> something else.
> 
> that’s ok, whatever needs to be done. But then please make sure you’re not 
> breaking existing features, at least again not without a plan(==bug) to fix 
> it.
> 
>> 
>> >> Do we have a bug/plan to improve it?
>> >
>> > We have Bug 1362495 - [OVS] - Add support for live migration
>> > to track that.

oh, and yes, that’s exactly the tracking I wanted to make sure exists. There’s 
just no link in the gerrit commit itself so I didn’t find it (but I wasn’t 
really looking hard either;-)

Thanks,
michal

>> >
>> > I'm afraid that we are not yet ready to backport it to 4.0 - we found
>> > out that, as it is, it breaks migration for vmfex and external network
>> > providers; it also breaks when a buggy Engine db does not send a
>> > displayNetwork. But we plan to fix these issues quite soon.
> 
> which “buggy” engine? There were changes in parameters, most of these issues 
> are not relevant anymore since we ditched <3.6 though.
> Again it’s ok as long as it is clearly mentioned like "3.6 engine sends it in 
> such and such parameter, we can drop it once we support 4.0+"
> 
>> >
>> > The hacks around display are an actual improvement. For "legacy"
>> > switchType, we maintain an on-host libvirt-side database of all networks
>> > only to keep libvirt happy. Having a database copy has all the known
>> > troubles of mismatches and being out of sync.

Re: [ovirt-users] live migration with openvswitch

2016-09-16 Thread Michal Skrivanek

> On 15 Sep 2016, at 21:46, Edward Haas  wrote:
> 
> 
> 
> On Thu, Sep 15, 2016 at 1:30 PM, Michal Skrivanek wrote:
> 
> > On 15 Sep 2016, at 10:11, Dan Kenigsberg wrote:
> >
> > On Wed, Sep 14, 2016 at 03:04:14PM +0200, Michal Skrivanek wrote:
> >>
> >>> On 09 Sep 2016, at 13:09, Edward Haas wrote:
> >>>
> >>>
> >>>
> >>> On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
> >>> Hi.
> >>>
> >>> I'm trying to learn Ovirt 4 and have a problem with it.
> >>>
> >>> My cluster consists of 3 nodes. I use Openvswitch for network 
> >>> connectivity. I have a HostedEngine and one additional VM in the cluster.
> >>>
> >>> When I try to migrate the VM to another node, it fails. From vdsm and 
> >>> libvirtd logs I see that proper network interface on destination node 
> >>> cannot be found. Libvirt tries to find Openvswitch bridge with name like 
> >>> "vdsmbr_AOYiPtcT". It exists on source node, but it is unique on every 
> >>> node, because it contains random part. Additionally, it changes on every 
> >>> reboot.
> >>>
> >>> How this is supposed to work?
> >>>
> >>> --
> >>> Pavel Levshin
> >>>
> >>>
> >>>
> >>> Hi Pavel,
> >>>
> >>> VM migration is supported on the master branch, however it has not been 
> >>> ported to 4.0 yet.
> >>
> >>> You can either build VDSM from source (from master branch) or try to 
> >>> apply this patch on what you have:
> >>> https://gerrit.ovirt.org/#/c/59645
> >>
> >> That’s quite a horrible solution right now. I certainly would not like to 
> >> see it in 4.0 (given the hacks around display).
> 
> What is horrible exactly?
> It's not too late to propose other solutions.

if OVS is the next great feature, it should fit into the code accordingly. I.e.
using hooks only when it’s absolutely necessary, and as a temporary measure only
until the respective proper RFEs are implemented and available. E.g. when there
is libvirt support missing we can add a qemu command line parameter ourselves,
bypassing libvirt, but we should always have a clear plan (i.e. a bug) to move
away from there as soon as the support is there (requested back then when we
went with the hack).

Such things should be reviewed as soon as we get to a similar area: while
modifying libvirt-hook.sh we can see that the original reason for the hook is
no longer valid, as everything has been addressed, and so the hacky code
should have been removed.
It was easy to see that because there is a clear comment about dependent bugs
and issues (though missed by all the reviewers, unfortunately!)
Your new code doesn’t have anything like that and I have no idea what kind of 
API or behavior we actually need, or whether appropriate requests have been filed
on e.g. libvirt. That makes it very hard to revisit in the future by the next 
random person.

> 
> Display uses libvirt to resolve a network name to an IP address for it to 
> bound to. But that works only for linux bridges.
> That is limiting, especially now that we do not have a Linux bridge, but 
> something else.

that’s ok, whatever needs to be done. But then please make sure you’re not 
breaking existing features, at least again not without a plan(==bug) to fix it.

> 
> >> Do we have a bug/plan to improve it?
> >
> > We have Bug 1362495 - [OVS] - Add support for live migration
> > to track that.
> >
> > I'm afraid that we are not yet ready to backport it to 4.0 - we found
> > out that, as it is, it breaks migration for vmfex and external network
> > providers; it also breaks when a buggy Engine db does not send a
> > displayNetwork. But we plan to fix these issues quite soon.

which “buggy” engine? There were changes in parameters, most of these issues 
are not relevant anymore since we ditched <3.6 though.
Again it’s ok as long as it is clearly mentioned like "3.6 engine sends it in 
such and such parameter, we can drop it once we support 4.0+"

> >
> > The hacks around display are an actual improvement. For "legacy"
> > switchType, we maintain an on-host libvirt-side database of all networks
> > only to keep libvirt happy. Having a database copy has all the known
> > troubles of mismatches and being out of sync. For "ovs" switchType, we
> > do not (we don't use a bridge, but a port group so there's no natural
> > way to define our network in libvirt). Modifying the listening address
> > on destination is the flexible and quick way to do it - I wish we had
> > the libvirt migrate hook years ago.
> 
> doesn’t it prevent seamless virti-viewer console connection?
> 
> The end result is the same, we listen on the address of a specific network.
> Previously it contained a 

Re: [ovirt-users] live migration with openvswitch

2016-09-15 Thread Edward Haas
On Thu, Sep 15, 2016 at 1:30 PM, Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> > On 15 Sep 2016, at 10:11, Dan Kenigsberg  wrote:
> >
> > On Wed, Sep 14, 2016 at 03:04:14PM +0200, Michal Skrivanek wrote:
> >>
> >>> On 09 Sep 2016, at 13:09, Edward Haas  wrote:
> >>>
> >>>
> >>>
> >>> On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
> >>> Hi.
> >>>
> >>> I'm trying to learn Ovirt 4 and have a problem with it.
> >>>
> >>> My cluster consists of 3 nodes. I use Openvswitch for network
> connectivity. I have a HostedEngine and one additional VM in the cluster.
> >>>
> >>> When I try to migrate the VM to another node, it fails. From vdsm and
> libvirtd logs I see that proper network interface on destination node
> cannot be found. Libvirt tries to find Openvswitch bridge with name like
> "vdsmbr_AOYiPtcT". It exists on source node, but it is unique on every
> node, because it contains random part. Additionally, it changes on every
> reboot.
> >>>
> >>> How this is supposed to work?
> >>>
> >>> --
> >>> Pavel Levshin
> >>>
> >>>
> >>>
> >>> Hi Pavel,
> >>>
> >>> VM migration is supported on the master branch, however it has not
> been ported to 4.0 yet.
> >>
> >>> You can either build VDSM from source (from master branch) or try to
> apply this patch on what you have:
> >>> https://gerrit.ovirt.org/#/c/59645
> >>
> >> That’s quite a horrible solution right now. I certainly would not like
> to see it in 4.0 (given the hacks around display).
>

What is horrible exactly?
It's not too late to propose other solutions.

Display uses libvirt to resolve a network name to an IP address for it to
bind to. But that works only for Linux bridges.
That is limiting, especially now that we do not have a Linux bridge, but
something else.

>> Do we have a bug/plan to improve it?
> >
> > We have Bug 1362495 - [OVS] - Add support for live migration
> > to track that.
> >
> > I'm afraid that we are not yet ready to backport it to 4.0 - we found
> > out that, as it is, it breaks migration for vmfex and external network
> > providers; it also breaks when a buggy Engine db does not send a
> > displayNetwork. But we plan to fix these issues quite soon.
> >
> > The hacks around display are an actual improvement. For "legacy"
> > switchType, we maintain an on-host libvirt-side database of all networks
> > only to keep libvirt happy. Having a database copy has all the known
> > troubles of mismatches and being out of sync. For "ovs" switchType, we
> > do not (we don't use a bridge, but a port group so there's no natural
> > way to define our network in libvirt). Modifying the listening address
> > on destination is the flexible and quick way to do it - I wish we had
> > the libvirt migrate hook years ago.
>
> doesn’t it prevent seamless virt-viewer console connection?
>

The end result is the same, we listen on the address of a specific network.
Previously it contained a network name and libvirt converted it to the
correct IP it should bind to, now vdsm resolves it.
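
To make that concrete, a rough host-side illustration (these are generic
libvirt commands, not vdsm internals, and "myvm" is just an example name):

  # legacy/linux-bridge world: the domxml carried a network *name* and
  # libvirt resolved it; the resolved listen URI is visible with:
  virsh domdisplay myvm

  # ovs world: the bridge name is random per host (vdsmbr_XXXX), so vdsm
  # resolves the IP itself and writes a literal listen *address* into the
  # migration XML; you can see it on the destination with:
  virsh dumpxml myvm | grep -A2 '<graphics'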

also the “TODO” in the code about multiple graphics is worrying (we fully
> support it and are considering making it a default)
>

Supported where? The virt networking code in VDSM which creates an interface
for the domxml does not support it at the moment.
Or am I missing something?

If we have an idea of what API would work well, we should raise or
> contribute that to libvirt. Surely it takes time, but it is the only way
> to improve the code eventually.
>

If using libvirt can allow us to drop some persisted data and logic from
vdsm, then it makes sense, but I do not think this is the case.
As it stands today, depending on libvirt persisted data is limiting us, at
least in the networking area. I also do not see the advantage of using it
as a DB.

Thanks,
Edy.


>
> Thanks,
> michal
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration with openvswitch

2016-09-15 Thread Michal Skrivanek

> On 15 Sep 2016, at 10:11, Dan Kenigsberg  wrote:
> 
> On Wed, Sep 14, 2016 at 03:04:14PM +0200, Michal Skrivanek wrote:
>> 
>>> On 09 Sep 2016, at 13:09, Edward Haas  wrote:
>>> 
>>> 
>>> 
>>> On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
>>> Hi.
>>> 
>>> I'm trying to learn Ovirt 4 and have a problem with it.
>>> 
>>> My cluster consists of 3 nodes. I use Openvswitch for network connectivity. 
>>> I have a HostedEngine and one additional VM in the cluster.
>>> 
>>> When I try to migrate the VM to another node, it fails. From the vdsm and 
>>> libvirtd logs I see that the proper network interface on the destination node 
>>> cannot be found. Libvirt tries to find an Openvswitch bridge with a name like 
>>> "vdsmbr_AOYiPtcT". It exists on the source node, but it is unique on every 
>>> node, because it contains a random part. Additionally, it changes on every 
>>> reboot.
>>> 
>>> How is this supposed to work?
>>> 
>>> --
>>> Pavel Levshin
>>> 
>>> 
>>> 
>>> Hi Pavel,
>>> 
>>> VM migration is supported on the master branch, however it has not been 
>>> ported to 4.0 yet.
>> 
>>> You can either build VDSM from source (from master branch) or try to apply 
>>> this patch on what you have:
>>> https://gerrit.ovirt.org/#/c/59645 
>> 
>> That’s quite a horrible solution right now. I certainly would not like to 
>> see it in 4.0 (given the hacks around display). 
>> Do we have a bug/plan to improve it?
> 
> We have Bug 1362495 - [OVS] - Add support for live migration
> to track that.
> 
> I'm afraid that we are not yet ready to backport it to 4.0 - we found
> out that as it is, it breaks migration for vmfex and external network
> providers; it also breaks when a buggy Engine db does not send a
> displayNetwork. But we plan to fix these issues quite soon.
> 
> The hacks around display are an actual improvement. For "legacy"
> switchType, we maintain an on-host libvirt-side database of all networks
> only to keep libvirt happy. Having a database copy has all the known
> troubles of mismatches and being out of sync. For "ovs" switchType, we
> do not (we don't use a bridge, but a port group so there's no natural
> way to define our network in libvirt). Modifying the listening address
> on destination is the flexible and quick way to do it - I wish we had
> the libvirt migrate hook years ago.

doesn’t it prevent seamless virt-viewer console connection?
also the “TODO” in the code about multiple graphics is worrying (we fully 
support it and are considering making it a default)
If we have an idea of what API would work well, we should raise or contribute 
that to libvirt. Surely it takes time, but it is the only way to improve the 
code eventually.

Thanks,
michal

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration with openvswitch

2016-09-15 Thread Dan Kenigsberg
On Wed, Sep 14, 2016 at 03:04:14PM +0200, Michal Skrivanek wrote:
> 
> > On 09 Sep 2016, at 13:09, Edward Haas  wrote:
> > 
> > 
> > 
> > On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
> > Hi.
> > 
> > I'm trying to learn Ovirt 4 and have a problem with it.
> > 
> > My cluster consists of 3 nodes. I use Openvswitch for network connectivity. 
> > I have a HostedEngine and one additional VM in the cluster.
> > 
> > When I try to migrate the VM to another node, it fails. From the vdsm and 
> > libvirtd logs I see that the proper network interface on the destination node 
> > cannot be found. Libvirt tries to find an Openvswitch bridge with a name like 
> > "vdsmbr_AOYiPtcT". It exists on the source node, but it is unique on every 
> > node, because it contains a random part. Additionally, it changes on every 
> > reboot.
> > 
> > How is this supposed to work?
> > 
> > --
> > Pavel Levshin
> > 
> > 
> > 
> > Hi Pavel,
> > 
> > VM migration is supported on the master branch, however it has not been 
> > ported to 4.0 yet.
> 
> > You can either build VDSM from source (from master branch) or try to apply 
> > this patch on what you have:
> > https://gerrit.ovirt.org/#/c/59645 
> 
> That’s quite a horrible solution right now. I certainly would not like to see 
> it in 4.0 (given the hacks around display). 
> Do we have a bug/plan to improve it?

We have Bug 1362495 - [OVS] - Add support for live migration
to track that.

I'm afraid that we are not yet ready to backport it to 4.0 - we found
out that as it is, it breaks migration for vmfex and external network
providers; it also breaks when a buggy Engine db does not send a
displayNetwork. But we plan to fix these issues quite soon.

The hacks around display are an actual improvement. For "legacy"
switchType, we maintain an on-host libvirt-side database of all networks
only to keep libvirt happy. Having a database copy has all the known
troubles of mismatches and being out of sync. For "ovs" switchType, we
do not (we don't use a bridge, but a port group so there's no natural
way to define our network in libvirt). Modifying the listening address
on destination is the flexible and quick way to do it - I wish we had
the libvirt migrate hook years ago.
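
(For reference, that on-host "database" is just libvirt's persistent network
definitions. On a "legacy"-switch host you can inspect vdsm's copies with
stock virsh tooling - the vdsm-* naming below is from memory, so treat it as
an example:)

  virsh net-list --all             # one vdsm-<network> entry per host network
  virsh net-dumpxml vdsm-ovirtmgmt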


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration with openvswitch

2016-09-14 Thread Michal Skrivanek

> On 09 Sep 2016, at 13:09, Edward Haas  wrote:
> 
> 
> 
> On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin wrote:
> Hi.
> 
> I'm trying to learn Ovirt 4 and have a problem with it.
> 
> My cluster consists of 3 nodes. I use Openvswitch for network connectivity. I 
> have a HostedEngine and one additional VM in the cluster.
> 
> When I try to migrate the VM to another node, it fails. From the vdsm and 
> libvirtd logs I see that the proper network interface on the destination node 
> cannot be found. Libvirt tries to find an Openvswitch bridge with a name like 
> "vdsmbr_AOYiPtcT". It exists on the source node, but it is unique on every 
> node, because it contains a random part. Additionally, it changes on every reboot.
> 
> How is this supposed to work?
> 
> --
> Pavel Levshin
> 
> 
> 
> Hi Pavel,
> 
> VM migration is supported on the master branch, however it has not been 
> ported to 4.0 yet.

> You can either build VDSM from source (from master branch) or try to apply 
> this patch on what you have:
> https://gerrit.ovirt.org/#/c/59645 

That’s quite a horrible solution right now. I certainly would not like to see 
it in 4.0 (given the hacks around display). 
Do we have a bug/plan to improve it?

Thanks,
michal

> 
> (note that you'll need to restart the vdsm service for this to take effect)
> 
> Thanks,
> Edy.
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration with openvswitch

2016-09-09 Thread Edward Haas
On Thu, Sep 8, 2016 at 11:27 AM, Pavel Levshin  wrote:

> Hi.
>
> I'm trying to learn Ovirt 4 and have a problem with it.
>
> My cluster consists of 3 nodes. I use Openvswitch for network
> connectivity. I have a HostedEngine and one additional VM in the cluster.
>
> When I try to migrate the VM to another node, it fails. From the vdsm and
> libvirtd logs I see that the proper network interface on the destination node
> cannot be found. Libvirt tries to find an Openvswitch bridge with a name like
> "vdsmbr_AOYiPtcT". It exists on the source node, but it is unique on every
> node, because it contains a random part. Additionally, it changes on every
> reboot.
>
> How is this supposed to work?
>
> --
> Pavel Levshin
>
>
>
Hi Pavel,

VM migration is supported on the master branch, however it has not been
ported to 4.0 yet.
You can either build VDSM from source (from master branch) or try to apply
this patch on what you have:
https://gerrit.ovirt.org/#/c/59645

(note that you'll need to restart the vdsm service for this to take effect)

Thanks,
Edy.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration fails

2016-04-14 Thread Charles Tassell

Hi Nick,

  I had this problem myself a while ago and it turned out the issue was 
DNS related (one of the hosts couldn't do a DNS lookup on the name 
registered to the other host, so it failed with a strange error). The 
best way to diagnose a migration failure is probably with the 
/var/log/vdsm/vdsm.log file (might be vdsmd instead of vdsm). I'd 
recommend ssh'ing into both hosts and running the following command:


 tail -f /var/log/vdsm/vdsm.log | egrep -v 'DEBUG|INFO' | tee /tmp/migrate.log


Then attempt the migration.  When the GUI says the migration has failed 
hit Control-C in both windows to stop capturing the log. You can then go 
through the logfiles (stored in /tmp/migrate.log) to find the actual 
error message and post it to the list.  If you can't find the error you 
might want to upload the logfiles somewhere and post the URLs to the 
list so some of the devs or power users can better diagnose the problem.
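
Since in my case the root cause was DNS, a quick sanity check on both hosts
is also worth doing before digging through the logs (replace the name with
the other host's FQDN as it is registered in the engine; these are stock
tools, nothing oVirt-specific):

  getent hosts otherhost.example.com                            # forward lookup
  host $(getent hosts otherhost.example.com | awk '{print $1}') # reverse lookup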


On 16-04-14 01:00 PM, users-requ...@ovirt.org wrote:

Date: Thu, 14 Apr 2016 16:35:34 +0200
From: Sandro Bonazzola <sbona...@redhat.com>
To: Nick Vercampt <nick.verca...@gmail.com>
Cc: users <users@ovirt.org>
Subject: Re: [ovirt-users] Live migration fails
Message-ID:

Re: [ovirt-users] Live migration fails

2016-04-14 Thread Sandro Bonazzola
On Thu, Apr 14, 2016 at 2:14 PM, Nick Vercampt 
wrote:

> Dear Sirs
>
> I'm writing to ask a question about the live migration on my oVirt setup.
>
> I'm currently running oVirt 3.6 on a virtual test environment with 1
> default cluster (2 hosts, CentOS 7)  and 1 Gluster enabled cluster (with 2
> virtual storage nodes, also CentOS7).
>
> My datacenter has a shared data and iso volume for the two hosts (both
> GlusterFS)
>
> Problem:
> When I try to migrate my VM (Tiny Linux) from host1 to host2 the operation
> fails.
>
> Question:
> What log should I check to find a more detailed error message or do you
> have an idea what the problem might be?
>
>
Googling around, I found:
- http://vaunaspada.babel.it/blog/?p=613
- http://comments.gmane.org/gmane.comp.emulators.ovirt.user/32963

I suggest starting from there. Maybe someone can write a page on the ovirt
website about how to diagnose live migration issues.


>
> Kind Regards
>
> Nick Vercampt
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live migration fails

2016-04-14 Thread Nick Vercampt
Dear Sirs

I'm writing to ask a question about the live migration on my oVirt setup.

I'm currently running oVirt 3.6 on a virtual test environment with 1 default
cluster (2 hosts, CentOS 7)  and 1 Gluster enabled cluster (with 2 virtual
storage nodes, also CentOS7).

My datacenter has a shared data and iso volume for the two hosts (both
GlusterFS)

Problem:
When I try to migrate my VM (Tiny Linux) from host1 to host2 the operation
fails.

Question:
What log should I check to find a more detailed error message or do you
have an idea what the problem might be?


Kind Regards

Nick Vercampt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration

2015-08-18 Thread Yaniv Dary
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Tue, Aug 18, 2015 at 7:48 AM, Demeter Tibor tdeme...@itsmart.hu wrote:

 Hi,

 Every host has a different hostname; that has not changed since the reinstall.

 Maybe node1 got a different uuid than before.

 Does oVirt have an out-of-the-box live migration feature yet?



Yes. Should work.



 Thanks.

 Tibor

 - On 17 Aug 2015, at 10:07, Matthew Lagoe matthew.la...@subrigo.net
 wrote:

 Are all the hostnames of the machines different? I've had it before where
 migrations fail because they have the same hostname, or uuid for that matter.



 *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
 Behalf Of *Omer Frenkel
 *Sent:* Monday, August 17, 2015 01:02 AM
 *To:* Demeter Tibor
 *Cc:* users
 *Subject:* Re: [ovirt-users] live migration







 On Sun, Aug 16, 2015 at 10:31 PM, Demeter Tibor tdeme...@itsmart.hu
 wrote:

 Hi,

 I reinstalled one of my nodes (node1) because I had to replace my hdds.
 I installed centos 6.6 minimal, but during the node re-adding procedure it
 installed newer qemu-kvm-rhev packages.
 Since the reinstall I can run VMs on this node and I can live migrate from
 this node to the other, but not backwards.
 I remember, maybe one year ago it was required to install redhat's
 version of the qemu-kvm-rhev package for this feature.
 Is it still necessary?
 my versions:

 node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1
 node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6
 node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1

 Can I upgrade these hosts manually?
 I haven't restarted my hosts because node0 and node2 are a gluster replica.


 Thanks in advance,
 Tibor
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



  not sure about the versions,

  but what is the error you see in the source host's vdsm.log when the migration
  fails?





 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration

2015-08-18 Thread Demeter Tibor
Hi, 

what is the latest recommended version of the qemu-kvm-rhev, libvirt and vdsm 
packages? 
Can I upgrade them by hand? 

Thanks, 

Tibor 

- On 18 Aug 2015, at 13:55, Yaniv Dary yd...@redhat.com wrote: 

 Yaniv Dary
 Technical Product Manager
 Red Hat Israel Ltd.
 34 Jerusalem Road
 Building A, 4th floor
 Ra'anana, Israel 4350109

 Tel : +972 (9) 7692306
 8272306
 Email: yd...@redhat.com IRC : ydary

 On Tue, Aug 18, 2015 at 7:48 AM, Demeter Tibor  tdeme...@itsmart.hu  wrote:

 Hi,

 Every host has a different hostname; that has not changed since the reinstall.

 Maybe node1 got a different uuid than before.

 Does oVirt have an out-of-the-box live migration feature yet?

 Yes. Should work.

 Thanks.

 Tibor

 - On 17 Aug 2015, at 10:07, Matthew Lagoe matthew.la...@subrigo.net
 wrote:

 Are all the hostnames of the machines different? I've had it before where
 migrations fail because they have the same hostname, or uuid for that matter.

 From: users-boun...@ovirt.org [mailto: users-boun...@ovirt.org ] On Behalf 
 Of
 Omer Frenkel
 Sent: Monday, August 17, 2015 01:02 AM
 To: Demeter Tibor
 Cc: users
 Subject: Re: [ovirt-users] live migration

 On Sun, Aug 16, 2015 at 10:31 PM, Demeter Tibor  tdeme...@itsmart.hu  
 wrote:

 Hi,

 I reinstalled one of my nodes (node1) because I had to replace my hdds.
 I installed centos 6.6 minimal, but during the node re-adding procedure it
 installed newer qemu-kvm-rhev packages.
 Since the reinstall I can run VMs on this node and I can live migrate from
 this node to the other, but not backwards.
 I remember, maybe one year ago it was required to install redhat's version
 of the qemu-kvm-rhev package for this feature.
 Is it still necessary?
 my versions:

 node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1
 node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6
 node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1

 Can I upgrade these hosts manually?
 I haven't restarted my hosts because node0 and node2 are a gluster replica.

 Thanks in advance,
 Tibor
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

  not sure about the versions,

  but what is the error you see in the source host's vdsm.log when the migration fails?

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration

2015-08-17 Thread Demeter Tibor
Hi, 

Every host has a different hostname; that has not changed since the reinstall. 

Maybe node1 got a different uuid than before. 

Does oVirt have an out-of-the-box live migration feature yet? 

Thanks. 

Tibor 

- On 17 Aug 2015, at 10:07, Matthew Lagoe matthew.la...@subrigo.net wrote: 





Are all the hostnames of the machines different? I've had it before where 
migrations fail because they have the same hostname, or uuid for that matter. 



From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Omer Frenkel 
Sent: Monday, August 17, 2015 01:02 AM 
To: Demeter Tibor 
Cc: users 
Subject: Re: [ovirt-users] live migration 










On Sun, Aug 16, 2015 at 10:31 PM, Demeter Tibor  tdeme...@itsmart.hu  wrote: 

Hi, 

I reinstalled one of my nodes (node1) because I had to replace my hdds. 
I installed centos 6.6 minimal, but during the node re-adding procedure it installed 
newer qemu-kvm-rhev packages. 
Since the reinstall I can run VMs on this node and I can live migrate from this 
node to the other, but not backwards. 
I remember, maybe one year ago it was required to install redhat's version of 
the qemu-kvm-rhev package for this feature. 
Is it still necessary? 
my versions: 

node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1 
node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6 
node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1 

Can I upgrade these hosts manually? 
I haven't restarted my hosts because node0 and node2 are a gluster replica. 


Thanks in advance, 
Tibor 
___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 





not sure about the versions, 


but what is the error you see in the source host's vdsm.log when the migration fails? 







___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration

2015-08-17 Thread Omer Frenkel
On Sun, Aug 16, 2015 at 10:31 PM, Demeter Tibor tdeme...@itsmart.hu wrote:

 Hi,

 I reinstalled one of my nodes (node1) because I had to replace my hdds.
 I installed centos 6.6 minimal, but during the node re-adding procedure it
 installed newer qemu-kvm-rhev packages.
 Since the reinstall I can run VMs on this node and I can live migrate from
 this node to the other, but not backwards.
 I remember, maybe one year ago it was required to install redhat's
 version of the qemu-kvm-rhev package for this feature.
 Is it still necessary?
 my versions:

 node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1
 node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6
 node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1

 Can I upgrade these hosts manually?
 I haven't restarted my hosts because node0 and node2 are a gluster replica.


 Thanks in advance,
 Tibor
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



not sure about the versions,
but what is the error you see in the source host's vdsm.log when the migration fails?
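
e.g. something along these lines on the source host (that is the default
vdsm log location; the grep pattern is just a starting point):

  grep -iE 'migrat|Traceback' /var/log/vdsm/vdsm.log | tail -n 50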
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration

2015-08-17 Thread Matthew Lagoe
Are all the hostnames of the machines different? I've had it before where 
migrations fail because they have the same hostname, or uuid for that matter.
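
A quick way to check both is to run this on every node; each pair must be
unique cluster-wide (the vdsm.id path is how I remember it on 3.x hosts, so
adjust if yours differs):

  hostname -f
  cat /etc/vdsm/vdsm.id 2>/dev/null || dmidecode -s system-uuid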

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Omer Frenkel
Sent: Monday, August 17, 2015 01:02 AM
To: Demeter Tibor
Cc: users
Subject: Re: [ovirt-users] live migration

 

 

 

On Sun, Aug 16, 2015 at 10:31 PM, Demeter Tibor tdeme...@itsmart.hu wrote:

Hi,

I reinstalled one of my nodes (node1) because I had to replace my hdds.
I installed centos 6.6 minimal, but during the node re-adding procedure it installed 
newer qemu-kvm-rhev packages.
Since the reinstall I can run VMs on this node and I can live migrate from this 
node to the other, but not backwards.
I remember, maybe one year ago it was required to install redhat's version of 
the qemu-kvm-rhev package for this feature.
Is it still necessary?
my versions:

node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1
node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6
node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1

Can I upgrade these hosts manually?
I haven't restarted my hosts because node0 and node2 are a gluster replica.


Thanks in advance,
Tibor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

 

not sure about the versions,

but what is the error you see in the source host's vdsm.log when the migration fails?

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] live migration

2015-08-16 Thread Demeter Tibor
Hi, 

I reinstalled one of my nodes (node1) because I had to replace my hdds. 
I installed centos 6.6 minimal, but during the node re-adding procedure it installed 
newer qemu-kvm-rhev packages.
Since the reinstall I can run VMs on this node and I can live migrate from this 
node to the other, but not backwards.
I remember, maybe one year ago it was required to install redhat's version of 
the qemu-kvm-rhev package for this feature.
Is it still necessary?
my versions:

node0 KVM: 0.12.1.2 - 2.415.el6_5.14, LIBVIRT: libvirt-0.10.2-46.el6_6.1
node1 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-54.el6
node2 KVM: 0.12.1.2 - 2.448.el6_6.4, LIBVIRT: libvirt-0.10.2-46.el6_6.1

Can I upgrade these hosts manually? 
I haven't restarted my hosts because node0 and node2 are a gluster replica.


Thanks in advance,
Tibor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration qemu 2.1.2 - 2.1.3: Unknown savevm section

2015-05-15 Thread Markus Stockhausen
 From: Markus Stockhausen
 Sent: Friday, 10 April 2015 20:51
 To: users@ovirt.org
 Subject: Live migration qemu 2.1.2 - 2.1.3: Unknown savevm section
 
 Hi,
 
 I don't know what the best place for the following question will be, 
 so I am starting with the OVirt mailing list. 
 
 We are using OVirt with FC20 nodes with virt-preview enabled.
 Thus we are running qemu 2.1.2. Everything is working smoothly 
 including live merge.  
 
 For testing purposes we compiled qemu 2.1.3 from Fedora koji
 and updated one of the hosts. Trying to migrate a running VM to
 the new host fails with the message
 
 Unknown savevm section or instance 'kvm-tpr-opt' 0
 
 I guess some incompatibility between the versions. But qemu git
 history between 2.1.2 and 2.1.3 gives no hints about the reason.
 
 Any ideas - or is that migration scenario not supported at all?

The qemu VENOM vulnerability brought me back to the topic. Just in case
someone is interested. The following patch breaks live migration
between 2.1.2 and 2.1.3.

pc: Fix disabling of vapic for compat PC models
http://git.qemu.org/?p=qemu.git;a=commit;h=8100812711ea480119f9796bd6c0895e6ac85d0f

I dropped this one during rebuild and now everything works fine again.
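
For anyone wanting to reproduce the rebuild, the rough recipe was (file and
spec names are approximate - check your src.rpm):

  rpm -ivh qemu-2.1.3-*.src.rpm        # unpacks into ~/rpmbuild
  # comment out the Patch/%patch lines for the vapic compat fix in
  # ~/rpmbuild/SPECS/qemu.spec, then:
  rpmbuild -ba ~/rpmbuild/SPECS/qemu.spec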

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Markus Stockhausen
Hi,


although we have already upgraded several hypervisor nodes to Ovirt 3.5.1, 
the newest upgrade has left the host in a very strange state. We did:

- Host was removed from the cluster
- Ovirt 3.5 repo was activated on the host
- Host was reinstalled from the engine

And we got:
- A host that is active and looks nice in the engine
- We can start/stop VMs on the host
- But we cannot live migrate machines to (or even away from) the host

Attached vdsm/libvirt/engine logs. Timestamps do not match as we
created them individually during different runs.

Somehow lost ...

Markus

*
libvirt on target host:

2015-03-18 16:18:48.691+: 2093: debug : qemuMonitorJSONCommandWithFd:286 : 
Send command '{execute:qmp_capabilities,id:libvirt-1}' for write with 
FD -1
2015-03-18 16:18:48.691+: 2092: debug : qemuMonitorJSONIOProcessLine:179 : 
Line [{QMP: {version: {qemu: {micro: 2, minor: 1, major: 2}, 
package: }, capabilities: []}}]
2015-03-18 16:18:48.691+: 2092: debug : qemuMonitorJSONIOProcess:248 : 
Total used 105 bytes out of 105 available in buffer
2015-03-18 16:18:48.692+: 2092: debug : qemuMonitorJSONIOProcessLine:179 : 
Line [{return: {}, id: libvirt-1}]
2015-03-18 16:18:48.692+: 2092: debug : qemuMonitorJSONIOProcessLine:199 : 
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={return: {}, id: 
libvirt-1}
2015-03-18 16:18:48.692+: 2092: debug : qemuMonitorJSONIOProcess:248 : 
Total used 35 bytes out of 35 available in buffer
2015-03-18 16:18:48.692+: 2093: debug : qemuMonitorJSONCommandWithFd:291 : 
Receive command reply ret=0 rxObject=0x7fb445fbdb10
2015-03-18 16:18:48.692+: 2093: debug : qemuMonitorJSONCommandWithFd:286 : 
Send command '{execute:query-chardev,id:libvirt-2}' for write with FD -1
2015-03-18 16:18:48.693+: 2092: debug : qemuMonitorJSONIOProcessLine:179 : 
Line [{return: [{frontend-open: false, filename: spicevmc, label: 
charchannel2}, {frontend-open: false, filename: 
unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server,
 label: charchannel1}, {frontend-open: false, filename: 
unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server,
 label: charchannel0}, {frontend-open: true, filename: 
unix:/var/lib/libvirt/qemu/colvm60.monitor,server, label: charmonitor}], 
id: libvirt-2}]
2015-03-18 16:18:48.693+: 2092: debug : qemuMonitorJSONIOProcessLine:199 : 
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={return: [{frontend-open: 
false, filename: spicevmc, label: charchannel2}, {frontend-open: 
false, filename: 
unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.org.qemu.guest_agent.0,server,
 label: charchannel1}, {frontend-open: false, filename: 
unix:/var/lib/libvirt/qemu/channels/d2d8bdfd-99a6-41c0-84e7-26e1d6a6057b.com.redhat.rhevm.vdsm,server,
 label: charchannel0}, {frontend-open: true, filename: 
unix:/var/lib/libvirt/qemu/colvm60.monitor,server, label: charmonitor}], 
id: libvirt-2}
2015-03-18 16:18:48.693+: 2092: debug : qemuMonitorJSONIOProcess:248 : 
Total used 559 bytes out of 559 available in buffer
2015-03-18 16:18:48.693+: 2093: debug : qemuMonitorJSONCommandWithFd:291 : 
Receive command reply ret=0 rxObject=0x7fb445ffe110
2015-03-18 16:18:48.694+: 2093: debug : qemuMonitorJSONCommandWithFd:286 : 
Send command 
'{execute:qom-list,arguments:{path:/machine/unattached/device[0]},id:libvirt-3}'
 for write with FD -1
2015-03-18 16:18:48.694+: 2092: debug : qemuMonitorJSONIOProcess:248 : 
Total used 0 bytes out of 1023 available in buffer
2015-03-18 16:18:48.695+: 2092: debug : qemuMonitorJSONIOProcessLine:179 : 
Line [{return: [{name: apic, type: childkvm-apic}, {name: 
filtered-features, type: X86CPUFeatureWordInfo}, {name: 
feature-words, type: X86CPUFeatureWordInfo}, {name: apic-id, type: 
int}, {name: tsc-frequency, type: int}, {name: model-id, type: 
string}, {name: vendor, type: string}, {name: xlevel, type: 
int}, {name: level, type: int}, {name: stepping, type: int}, 
{name: model, type: int}, {name: family, type: int}, {name: 
parent_bus, type: linkbus}, {name: kvm, type: bool}, {name: 
enforce, type: bool}, {name: check, type: bool}, {name: 
hv-time, type: bool}, {name: hv-vapic, type: bool}, {name: 
hv-relaxed, type: bool}, {name: hv-spinlocks, type: int}, 
{name: pmu, type: bool}, {name: hotplugged, type: bool}, 
{name: hotpluggable, type: bool}, {name: realized, type: bool}, 
{name: type, type: string}], id: libvirt-3}]
2015-03-18 16:18:48.695+: 2092: debug : qemuMonitorJSONIOProcessLine:199 : 
QEMU_MONITOR_RECV_REPLY: mon=0x7fb40c017670 reply={return: [{name: apic, 
type: childkvm-apic}, {name: filtered-features, type: 
X86CPUFeatureWordInfo}, {name: feature-words, type: 
X86CPUFeatureWordInfo}, {name: apic-id, type: int}, {name: 
tsc-frequency, type: int}, {name: model-id, type: string}, 
{name: vendor, type: string}, {name: xlevel, type: int}, 
{name: level, type: int}, {name: 

Re: [ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Markus Stockhausen
 From: Paul Heinlein [heinl...@madboa.com]
 Sent: Wednesday, 18 March 2015 18:43
 To: Markus Stockhausen
 Cc: Users@ovirt.org
 Subject: Re: [ovirt-users] Live migration fails - domain not found -
 
 On Wed, 18 Mar 2015, Markus Stockhausen wrote:
 
  although we have already upgraded several hypervisor nodes to Ovirt 3.5.1,
  the newest upgrade has left the host in a very strange state. We did:
 
  - Host was removed from the cluster
  - Ovirt 3.5 repo was activated on the host
  - Host was reinstalled from the engine
 
  And we got:
  - A host that is active and looks nice in the engine
  - We can start/stop VMs on the host
  - But we cannot live migrate machines to (or even away from) the host
 
 Are the source and destination hypervisor hosts running the same OS
 revision (e.g., both running CentOS 6.6)?

Yes, both are FC20 (+virt-preview). In the meantime we found the error. It was
a network issue on the migration network that became clear after we
analyzed the vdsm logs on the migration source host. I opened an RFE 
to identify the issue better next time.

https://bugzilla.redhat.com/show_bug.cgi?id=1203417

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration fails - domain not found -

2015-03-18 Thread Paul Heinlein

On Wed, 18 Mar 2015, Markus Stockhausen wrote:

although we have already upgraded several hypervisor nodes to Ovirt 3.5.1, 
the newest upgrade has left the host in a very strange state. We did:


- Host was removed from the cluster
- Ovirt 3.5 repo was activated on the host
- Host was reinstalled from the engine

And we got:
- A host that is active and looks nice in the engine
- We can start/stop VMs on the host
- But we cannot live migrate machines to (or even away from) the host


Are the source and destination hypervisor hosts running the same OS 
revision (e.g., both running CentOS 6.6)?


--
Paul Heinlein
heinl...@madboa.com
45°38' N, 122°6' W
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live migration on el6?

2014-11-07 Thread Demeter Tibor
Hi, 

I have a question. 

Do I need to rebuild and install redhat's qemu-kvm for ovirt 3.5 if I want a 
working live migration function? 

At this moment it is not working for me. 

Thanks 

Tibor 



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration on el6?

2014-11-07 Thread Gianluca Cecchi
On Fri, Nov 7, 2014 at 10:23 AM, Demeter Tibor tdeme...@itsmart.hu wrote:

 Hi,

 I have a question.

 Do I need to rebuild and install redhat's qemu-kvm for ovirt 3.5 if I want a
 working live migration function?

 At this moment it is not working for me.

 Thanks

 Tibor




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users



If you have 6.6 check here:
http://lists.ovirt.org/pipermail/users/2014-October/028679.html

HTH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration on el6?

2014-11-07 Thread Francesco Romani
- Original Message -
 From: Demeter Tibor tdeme...@itsmart.hu
 To: users users@ovirt.org
 Sent: Friday, November 7, 2014 10:23:26 AM
 Subject: [ovirt-users] Live migration on el6?
 
 Hi,
 
 I have a question.
 
 Do I need to rebuild and install redhat's qemu-kvm for ovirt 3.5 if I want a
 working live migration function?
 
 At this moment it is not working for me.

Live VM migration should work out of the box.

Please add more details if it doesn't.

Bests,

-- 
Francesco Romani
RedHat Engineering Virtualization R&D
Phone: 8261328
IRC: fromani
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration on el6?

2014-11-07 Thread Demeter Tibor




Demeter Tibor 


Email: tdemeter@itsmart.hu 
Skype: candyman_78 
Phone: +36 30 462 0500 
Web : www.itsmart.hu 

Hi,

I have centos 6.6.
There is a two-node cluster based on gluster storage.

node0 is an all-in-one install, node1 is a single host.

It is strange, because live migration works from node1 to node0 but not 
from node0 to node1.


When I click migrate, I get this message:


Migration failed, No available host found (VM: aaa, Source: node1).

2014-Nov-07, 15:15
Migration failed due to Error: Fatal error during migration. Trying to migrate 
to another Host (VM: aaa, Source: node1, Destination: UNKNOWN).

2014-Nov-07, 15:15
Migration started (VM: aaa, Source: node1, Destination: node0, User: admin).


Thanks in advance.

Tibor


- Original message -
 - Original Message -
  From: Demeter Tibor tdeme...@itsmart.hu
  To: users users@ovirt.org
  Sent: Friday, November 7, 2014 10:23:26 AM
  Subject: [ovirt-users] Live migration on el6?
  
  Hi,
  
  I have a question.
  
  Do I need to rebuild and install redhat's qemu-kvm for ovirt 3.5 if I want a
  working live migration function?
  
  At this moment it is not working for me.
 
 Live VM migration should work out of the box.
 
 Please add more details if it doesn't.
 
 Bests,
 
 --
 Francesco Romani
 RedHat Engineering Virtualization R&D
 Phone: 8261328
 IRC: fromani

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration on el6?

2014-11-07 Thread Demeter Tibor
Hi,

After I did a restart on node0, live migration works fine.

I don't know what the reason for this was.

Thank you for responses.

Tibor

- Original message -
 
 
 
 
 Demeter Tibor
 
 
 Email: tdemeter@itsmart.hu
 Skype: candyman_78
 Phone: +36 30 462 0500
 Web : www.itsmart.hu
 
 Hi,
 
 I have centos 6.6.
 There is a two-node cluster based on gluster storage.
 
 node0 is an all-in-one install, node1 is a single host.
 
 It is strange, because live migration works from node1 to node0 but not
 from node0 to node1.
 
 
 When I click migrate, I get this message:
 
 
 Migration failed, No available host found (VM: aaa, Source: node1).
 
 2014-Nov-07, 15:15
 Migration failed due to Error: Fatal error during migration. Trying to
 migrate to another Host (VM: aaa, Source: node1, Destination: UNKNOWN).
 
 2014-Nov-07, 15:15
 Migration started (VM: aaa, Source: node1, Destination: node0, User: admin).
 
 
 Thanks in advance.
 
 Tibor
 
 
 - Original message -
  - Original Message -
   From: Demeter Tibor tdeme...@itsmart.hu
   To: users users@ovirt.org
   Sent: Friday, November 7, 2014 10:23:26 AM
   Subject: [ovirt-users] Live migration on el6?
   
   Hi,
   
   I have a question.
   
   Do I need to rebuild and install redhat's qemu-kvm for ovirt 3.5 if I want a
   working live migration function?
   
   At this moment it is not working for me.
  
  Live VM migration should work out of the box.
  
  Please add more details if it doesn't.
  
  Bests,
  
  --
  Francesco Romani
  RedHat Engineering Virtualization R&D
  Phone: 8261328
  IRC: fromani
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-09 Thread Brad Bendy
Thought I was smooth sailing, guess not.

When I did this clone, would I need to reset the Sanlock UUIDs? I'm
getting a bunch of weird errors when going to activate a gluster
storage domain in the datacenter.

2014-07-09 07:24:40,211 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(org.ovirt.thread.pool-6-thread-6) [5338d906] Command
CreateStoragePoolVDSCommand(HostName = phx-kvm-01, HostId =
dfcb3d3d-32c0-40ec-8c96-6abb6f0589b1,
storagePoolId=0002-0002-0002-0002-03e7,
storagePoolName=Default,
masterDomainId=cd39b203-e95a-4580-8070-eb283cce0e16,
domainsIdList=[cd39b203-e95a-4580-8070-eb283cce0e16], masterVersion=5)
execution failed. Exception: VDSErrorException: VDSGenericException:
VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot
acquire host id: ('cd39b203-e95a-4580-8070-eb283cce0e16',
SanlockException(22, 'Sanlock lockspace add failure', 'Invalid
argument')), code = 661

2014-07-09 07:24:40,215 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand]
(org.ovirt.thread.pool-6-thread-6) [5338d906] FINISH,
CreateStoragePoolVDSCommand, log id: 30f5b525
2014-07-09 07:24:40,216 ERROR
[org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand]
(org.ovirt.thread.pool-6-thread-6) [5338d906] Command
org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand
throw Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
CreateStoragePoolVDS, error = Cannot acquire host id:
('cd39b203-e95a-4580-8070-eb283cce0e16', SanlockException(22, 'Sanlock
lockspace add failure', 'Invalid argument')), code = 661 (Failed with
error AcquireHostIdFailure and code 661)


Any insight into this?
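
In case it helps the next person: what sanlock currently holds (lockspaces
and the host id it tried to acquire them with) can be inspected directly on
the host with the stock sanlock tools, and add_lockspace failures also land
in the sanlock log:

  sanlock client status
  tail -n 50 /var/log/sanlock.log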

On Mon, Jul 7, 2014 at 12:14 AM, Jorick Astrego j.astr...@netbulae.eu wrote:

 On 07/05/2014 04:39 PM, Karli Sjöberg wrote:


 On 5 Jul 2014 at 16:22, Brad Bendy brad.be...@gmail.com wrote:

  Haha, yeah, never have been a Fedora fan, and nothing has changed. Are
  snapshots the only big feature I'm missing out on? From what I can
  tell, and in my testing, everything else seems to work. Was deploying
  GlusterFS, but without live migration to another host that is
  somewhat defeated.

 VM live migration works, live _disk_ migration does not.

 Only way to get that is with RHEL really then?

 No, as I earlier pointed out, there is a place you can get the packages you
 need for CentOS:
 http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

 You'll have to download- and force install them over the already installed
 versions of those packages on all Hosts and then it'll work.

 Though, next time there are updates, yum will update from the standard repos
 and it just stops working again until you repeat the procedure.

 /K


  Just add exclude=qemu-kvm* in /etc/yum.conf so yum will leave them alone

 Kind regards,
 Jorick Astrego
 Netbulae

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-09 Thread Jason Brooks


- Original Message -
 From: Brad Bendy brad.be...@gmail.com
 To: Karli Sjöberg karli.sjob...@slu.se
 Cc: users@ovirt.org
 Sent: Saturday, July 5, 2014 6:57:45 AM
 Subject: Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5
 
 If I use Fedora, will everything work? I had numerous issues; IIRC I
 could not even get the ovirtmgmt switch to install, and a few other
 things. What version of Fedora do you recommend then? I'll do another
 install and give that a whirl again.

My current setup is on F20, with a hosted engine running on CentOS 6.5.

I mainly use Fedora because nested kvm works better there.

My storage is gluster, hosted on the F20 machines. I'm seeing some
SELinux issues right now, so I'm in permissive mode until I can 
file bugs for them.

Jason


 
 Thanks!
 
 On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se wrote:
 
   On 5 Jul 2014 at 07:04, Brad Bendy brad.be...@gmail.com wrote:
 
  Hi,
 
  I've been seeing conflicting info about which version of the qemu rpms is
  needed to do live migration under CentOS. It appears the stock ones will not
  work and the RHEV ones are required. All the mailing list posts I see
  are from 3-4 months ago, so I'm not sure.
 
  Im getting VDSGenericException: VDSErrorException: Failed to
  SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
  SNAPSHOT_FAILED and code 48)
 
  I also saw this thread:
  http://comments.gmane.org/gmane.linux.centos.general/138593
 
  I've been having issues getting those to install, but before I spend too
  much more time I wanted to really see if I was on the right track.
 
  Is there a better OS choice? I first started trying with Fedora 19 and
  20 and had major issues; went to CentOS 6.5 and this is the first and
  only issue so far I've run into.
 
  Thanks!
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
  Well, going with Fedora would at least get you the snapshots working, if I
  remember correctly, but that's not something you run in production. As you
  said, major issues.
 
  For CentOS, you need special versions of certain packages; since RedHat
  wants you to pay for RHEV, they have chosen to cripple the standard
  packages so those features won't work:
  http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
 
  And here you can find the packages you need:
  http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
 
  /K
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-07 Thread Sven Kieske
While I have never tested or used this feature myself, I'm pretty
sure this is not normal behaviour; this shouldn't take that long
on an idle VM, unless something else puts your hardware under heavy
load?

Am 06.07.2014 08:17, schrieb Brad Bendy:
 Is an outage normal behavior when a snapshot occurs? I've got the rhev
 builds installed, but when I take a snap I'm getting well over two
 minutes of downtime, on a VM with zero usage.

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH  Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-07 Thread Jorick Astrego


On 07/05/2014 04:39 PM, Karli Sjöberg wrote:



On 5 Jul 2014 at 16:22, Brad Bendy brad.be...@gmail.com wrote:

 Haha, yeah, never have been a Fedora fan, and nothing has changed. Are
 snapshots the only big feature I'm missing out on? From what I can
 tell, and in my testing, everything else seems to work. Was deploying
 GlusterFS, but without live migration to another host that is
 somewhat defeated.

VM live migration works, live _disk_ migration does not.

 Only way to get that is with RHEL really then?

No, as I earlier pointed out, there is a place you can get the 
packages you need for CentOS:

http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

You'll have to download- and force install them over the already 
installed versions of those packages on all Hosts and then it'll work.


Though, next time there are updates, yum will update from the standard 
repos and it just stops working again until you repeat the procedure.


/K



Just add exclude=qemu-kvm* in /etc/yum.conf so yum will leave them alone
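
i.e. something like this in /etc/yum.conf (the [main] section already
exists; just append the exclude line):

  [main]
  ...
  exclude=qemu-kvm*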

Kind regards,
Jorick Astrego
Netbulae
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-06 Thread Brad Bendy
Is an outage normal behavior when a snapshot occurs? I've got the rhev
builds installed, but when I take a snap I'm getting well over two
minutes of downtime, on a VM with zero usage. The VM goes into a paused
state then; I guess the snapshot is a copy of the entire VM? I have
not got a second host up with those rhev builds to see the behavior
when disk migrating.

Thanks

On Sat, Jul 5, 2014 at 7:51 AM, Brad Bendy brad.be...@gmail.com wrote:
 There we go, sorry about that! I'll give these a test then. Thanks for the help

 On Sat, Jul 5, 2014 at 7:39 AM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On 5 Jul 2014 at 16:22, Brad Bendy brad.be...@gmail.com wrote:



  Haha, yeah, never have been a Fedora fan, and nothing has changed. Are
  snapshots the only big feature I'm missing out on? From what I can
  tell, and in my testing, everything else seems to work. Was deploying
  GlusterFS, but without live migration to another host that is
  somewhat defeated.

 VM live migration works, live _disk_ migration does not.

 Only way to get that is with RHEL really then?

 No, as I earlier pointed out, there is a place you can get the packages you
 need for CentOS:
 http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

 You'll have to download- and force install them over the already installed
 versions of those packages on all Hosts and then it'll work.

 Though, next time there are updates, yum will update from the standard repos
 and it just stops working again until you repeat the procedure.

 /K


 On Sat, Jul 5, 2014 at 7:05 AM, Karli Sjöberg karli.sjob...@slu.se
 wrote:
 
   On 5 Jul 2014 at 15:57, Brad Bendy brad.be...@gmail.com wrote:
 
 
 
   If I use Fedora, will everything work? I had numerous issues; IIRC I
  could not even get the ovirtmgmt switch to install and a few other
  things. What version of Fedora do you recommend then?
 
  None:) We switched long ago to CentOS and have never looked back, even
  with
  these issues. Not worth the headache that is Fedora.
 
  /K
 
   I'll do another
  install and give that a whirl again.
 
  Thanks!
 
  On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se
  wrote:
  
    On 5 Jul 2014 at 07:04, Brad Bendy brad.be...@gmail.com wrote:
  
   Hi,
  
    I've been seeing conflicting info about which version of the qemu rpms
    is needed to do live migration under CentOS. It appears the stock ones
    will not work and the RHEV ones are required. All the mailing list
    posts I see are from 3-4 months ago, so I'm not sure.
  
   Im getting VDSGenericException: VDSErrorException: Failed to
   SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
   SNAPSHOT_FAILED and code 48)
  
   I also saw this thread:
   http://comments.gmane.org/gmane.linux.centos.general/138593
  
    I've been having issues getting those to install, but before I spend
    too much more time I wanted to really see if I was on the right track.
  
    Is there a better OS choice? I first started trying with Fedora 19
    and 20 and had major issues; went to CentOS 6.5 and this is the first
    and only issue so far I've run into.
  
   Thanks!
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
   Well, going with Fedora would at least get you the snapshots working,
   if
   I remember correctly, but that's not something you run in production.
   As you
   said, major issues.
  
   For CentOS, you need special versions of certain packages, since
   RedHat wants you to pay for RHEV, they have chosen to cripple the
   standard
   packages so those features won't work:
   http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
  
   And here you can find the packages you need:
  
  
   http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
  
   /K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-06 Thread Karli Sjöberg

On 6 Jul 2014 at 08:17, Brad Bendy brad.be...@gmail.com wrote:

 Is an outage normal behavior when a snapshot occurs? I've got the rhev
 builds installed, but when I take a snap I'm getting well over two
 minutes of downtime, on a VM with zero usage.

That behavior is expected, the engine pauses the VM during the snapshot to ensure 
consistency. Not as sure about the timeframe though; two mins for an idle VM? 
When I snap, it's a matter of seconds, but that might be your Gluster spooking. 
We use a traditional NFS server. Why not set that up and benchmark the 
difference?

/K

 The VM goes into pause
 state then, I guess the snapshot is a copy of the entire VM? I have
 not got a second host up with those rhev build to see the behavior
 when disk migrating

 Thanks

 On Sat, Jul 5, 2014 at 7:51 AM, Brad Bendy brad.be...@gmail.com wrote:
  There we go, sorry about that! I'll give these a test then. Thanks for the 
  help
 
  On Sat, Jul 5, 2014 at 7:39 AM, Karli Sjöberg karli.sjob...@slu.se wrote:
 
  On 5 Jul 2014 at 16:22, Brad Bendy brad.be...@gmail.com wrote:
 
 
 
   Haha, yeah, never have been a Fedora fan, and nothing has changed. Are
   snapshots the only big feature I'm missing out on? From what I can
   tell, and in my testing, everything else seems to work. Was deploying
   GlusterFS, but without live migration to another host that is
   somewhat defeated.
 
  VM live migration works, live _disk_ migration does not.
 
  Only way to get that is with RHEL really then?
 
  No, as I earlier pointed out, there is a place you can get the packages you
  need for CentOS:
  http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
 
  You'll have to download- and force install them over the already installed
  versions of those packages on all Hosts and then it'll work.
 
  Though, next time there are updates, yum will update from the standard 
  repos
  and it just stops working again until you repeat the procedure.
 
  /K
 
 
  On Sat, Jul 5, 2014 at 7:05 AM, Karli Sjöberg karli.sjob...@slu.se
  wrote:
  
    On 5 Jul 2014 at 15:57, Brad Bendy brad.be...@gmail.com wrote:
  
  
  
    If I use Fedora, will everything work? I had numerous issues; IIRC I
   could not even get the ovirtmgmt switch to install and a few other
   things. What version of Fedora do you recommend then?
  
   None:) We switched long ago to CentOS and have never looked back, even
   with
   these issues. Not worth the headache that is Fedora.
  
   /K
  
    I'll do another
   install and give that a whirl again.
  
   Thanks!
  
   On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se
   wrote:
   
On 5 Jul 2014 at 07:04, Brad Bendy brad.be...@gmail.com wrote:
   
Hi,
   
I've been seeing conflicting info about which version of the qemu rpms is
needed to do live migration under CentOS. It appears the stock ones will
not work and the RHEV ones are required. All the mailing list posts I see
are from 3-4 months ago, so I'm not sure.
   
Im getting VDSGenericException: VDSErrorException: Failed to
SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
SNAPSHOT_FAILED and code 48)
   
I also saw this thread:
http://comments.gmane.org/gmane.linux.centos.general/138593
   
I've been having issues getting those to install, but before I spend
too much more time I wanted to really see if I was on the right track.
   
Is there a better OS choice? I first started trying with Fedora 19
and 20 and had major issues; went to CentOS 6.5 and this is the first
and only issue so far I've run into.
   
Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
   
Well, going with Fedora would at least get you the snapshots working,
if
I remember correctly, but that's not something you run in production.
As you
said, major issues.
   
For CentOS, you need special versions of certain packages, since
RedHat wants you to pay for RHEV, they have chosen to cripple the
standard
packages so those features won't work:
http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
   
And here you can find the packages you need:
   
   
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
   
/K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-05 Thread Brad Bendy
If I use Fedora will everything work? I had numerous issues, IIRC I
could not even get the ovirtmgmt switch to install and a few other
things. What version of Fedora do you recommend then? I'll do another
install and give that a whirl again.

Thanks!

On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On 5 Jul 2014 07:04, Brad Bendy brad.be...@gmail.com wrote:

 Hi,

 I've been seeing conflicting info about which version of the qemu RPMs is
 needed to do live migration under CentOS. It appears the stock ones will not
 work and the RHEV ones are required. All the mailing list posts I see
 are from 3-4 months ago, so I'm not sure.

 I'm getting VDSGenericException: VDSErrorException: Failed to
 SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
 SNAPSHOT_FAILED and code 48)

 I also saw this thread:
 http://comments.gmane.org/gmane.linux.centos.general/138593

 I've been having issues getting those to install, but before I spent too
 much more time I wanted to really see if I was on the right track.

 Is there a better OS choice? I first started trying with Fedora 19 and
 20 and had major issues, went to CentOS 6.5 and this is the first and
 only issue I've run into so far.

 Thanks!
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 Well, going with Fedora would at least get you the snapshots working, if I 
 remember correctly, but that's not something you run in production. As you 
 said, major issues.

 For CentOS, you need special versions of certain packages, since RedHat 
 wants you to pay for RHEV, they have chosen to cripple the standard packages 
 so those features won't work:
 http://lists.ovirt.org/pipermail/devel/2014-June/007735.html

 And here you can find the packages you need:
 http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

 /K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-05 Thread Karli Sjöberg

On 5 Jul 2014 15:57, Brad Bendy brad.be...@gmail.com wrote:

 If I use Fedora will everything work? I had numerous issues, IIRC I
 could not even get the ovirtmgmt switch to install and a few other
 things. What version of Fedora do you recommend then?

None:) We switched long ago to CentOS and have never looked back, even with 
these issues. Not worth the headache that is Fedora.

/K

 I'll do another
 install and give that a whirl again.

 Thanks!

 On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se wrote:
 
  On 5 Jul 2014 07:04, Brad Bendy brad.be...@gmail.com wrote:
 
  Hi,
 
  I've been seeing conflicting info about which version of the qemu RPMs is
  needed to do live migration under CentOS. It appears the stock ones will not
  work and the RHEV ones are required. All the mailing list posts I see
  are from 3-4 months ago, so I'm not sure.

  I'm getting VDSGenericException: VDSErrorException: Failed to
  SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
  SNAPSHOT_FAILED and code 48)

  I also saw this thread:
  http://comments.gmane.org/gmane.linux.centos.general/138593

  I've been having issues getting those to install, but before I spent too
  much more time I wanted to really see if I was on the right track.

  Is there a better OS choice? I first started trying with Fedora 19 and
  20 and had major issues, went to CentOS 6.5 and this is the first and
  only issue I've run into so far.
 
  Thanks!
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
  Well, going with Fedora would at least get you the snapshots working, if I 
  remember correctly, but that's not something you run in production. As you 
  said, major issues.
 
  For CentOS, you need special versions of certain packages, since RedHat 
  wants you to pay for RHEV, they have chosen to cripple the standard 
  packages so those features won't work:
  http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
 
  And here you can find the packages you need:
  http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
 
  /K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-05 Thread Brad Bendy
Haha, yeah, never have been a Fedora fan, and nothing has changed. Is
the only big feature I'm missing out on snapshots? From what I can
tell, and in my testing, everything else seems to work. I was deploying
GlusterFS, but without live migration to another host that is
somewhat defeated. Only way to get that is with RHEL really then?

On Sat, Jul 5, 2014 at 7:05 AM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On 5 Jul 2014 15:57, Brad Bendy brad.be...@gmail.com wrote:



 If I use Fedora will everything work? I had numerous issues, IIRC I
 could not even get the ovirtmgmt switch to install and a few other
 things. What version of Fedora do you recommend then?

 None:) We switched long ago to CentOS and have never looked back, even with
 these issues. Not worth the headache that is Fedora.

 /K

 I'll do another
 install and give that a whirl again.

 Thanks!

 On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se
 wrote:
 
  On 5 Jul 2014 07:04, Brad Bendy brad.be...@gmail.com wrote:
 
  Hi,
 
  I've been seeing conflicting info about which version of the qemu RPMs is
  needed to do live migration under CentOS. It appears the stock ones will not
  work and the RHEV ones are required. All the mailing list posts I see
  are from 3-4 months ago, so I'm not sure.

  I'm getting VDSGenericException: VDSErrorException: Failed to
  SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
  SNAPSHOT_FAILED and code 48)

  I also saw this thread:
  http://comments.gmane.org/gmane.linux.centos.general/138593

  I've been having issues getting those to install, but before I spent too
  much more time I wanted to really see if I was on the right track.

  Is there a better OS choice? I first started trying with Fedora 19 and
  20 and had major issues, went to CentOS 6.5 and this is the first and
  only issue I've run into so far.
 
  Thanks!
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
  Well, going with Fedora would at least get you the snapshots working, if
  I remember correctly, but that's not something you run in production. As 
  you
  said, major issues.
 
  For CentOS, you need special versions of certain packages, since
  RedHat wants you to pay for RHEV, they have chosen to cripple the standard
  packages so those features won't work:
  http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
 
  And here you can find the packages you need:
 
  http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
 
  /K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-05 Thread Brad Bendy
There we go, sorry about that! I'll give these a test then. Thanks for the help

On Sat, Jul 5, 2014 at 7:39 AM, Karli Sjöberg karli.sjob...@slu.se wrote:

 On 5 Jul 2014 16:22, Brad Bendy brad.be...@gmail.com wrote:



 Haha, yeah, never have been a Fedora fan, and nothing has changed. Is
 the only big feature I'm missing out on snapshots? From what I can
 tell, and in my testing, everything else seems to work. I was deploying
 GlusterFS, but without live migration to another host that is
 somewhat defeated.

 VM live migration works, live _disk_ migration does not.

 Only way to get that is with RHEL really then?

 No, as I earlier pointed out, there is a place you can get the packages you
 need for CentOS:
 http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

 You'll have to download- and force install them over the already installed
 versions of those packages on all Hosts and then it'll work.

 Though, next time there are updates, yum will update from the standard repos
 and it just stops working again until you repeat the procedure.

 /K


 On Sat, Jul 5, 2014 at 7:05 AM, Karli Sjöberg karli.sjob...@slu.se
 wrote:
 
  On 5 Jul 2014 15:57, Brad Bendy brad.be...@gmail.com wrote:
 
 
 
  If I use Fedora will everything work? I had numerous issues, IIRC I
  could not even get the ovirtmgmt switch to install and a few other
  things. What version of Fedora do you recommend then?
 
  None:) We switched long ago to CentOS and have never looked back, even
  with
  these issues. Not worth the headache that is Fedora.
 
  /K
 
  I'll do another
  install and give that a whirl again.
 
  Thanks!
 
  On Fri, Jul 4, 2014 at 10:33 PM, Karli Sjöberg karli.sjob...@slu.se
  wrote:
  
   On 5 Jul 2014 07:04, Brad Bendy brad.be...@gmail.com wrote:
  
   Hi,
  
   I've been seeing conflicting info about which version of the qemu RPMs is
   needed to do live migration under CentOS. It appears the stock ones will
   not work and the RHEV ones are required. All the mailing list posts I see
   are from 3-4 months ago, so I'm not sure.

   I'm getting VDSGenericException: VDSErrorException: Failed to
   SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
   SNAPSHOT_FAILED and code 48)

   I also saw this thread:
   http://comments.gmane.org/gmane.linux.centos.general/138593

   I've been having issues getting those to install, but before I spent
   too much more time I wanted to really see if I was on the right track.

   Is there a better OS choice? I first started trying with Fedora 19
   and 20 and had major issues, went to CentOS 6.5 and this is the first
   and only issue I've run into so far.
  
   Thanks!
   ___
   Users mailing list
   Users@ovirt.org
   http://lists.ovirt.org/mailman/listinfo/users
  
   Well, going with Fedora would at least get you the snapshots working,
   if
   I remember correctly, but that's not something you run in production.
   As you
   said, major issues.
  
   For CentOS, you need special versions of certain packages, since
   RedHat wants you to pay for RHEV, they have chosen to cripple the
   standard
   packages so those features won't work:
   http://lists.ovirt.org/pipermail/devel/2014-June/007735.html
  
   And here you can find the packages you need:
  
  
   http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
  
   /K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-04 Thread Brad Bendy
Hi,

I've been seeing conflicting info about which version of the qemu RPMs is
needed to do live migration under CentOS. It appears the stock ones will not
work and the RHEV ones are required. All the mailing list posts I see are
from 3-4 months ago, so I'm not sure.

I'm getting VDSGenericException: VDSErrorException: Failed to
SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
SNAPSHOT_FAILED and code 48)

I also saw this thread:
http://comments.gmane.org/gmane.linux.centos.general/138593

I've been having issues getting those to install, but before I spent too
much more time I wanted to really see if I was on the right track.

Is there a better OS choice? I first started trying with Fedora 19 and 20
and had major issues, went to CentOS 6.5 and this is the first and only
issue I've run into so far.

Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration / Snapshots- CentOS 6.5

2014-07-04 Thread Karli Sjöberg

On 5 Jul 2014 07:04, Brad Bendy brad.be...@gmail.com wrote:

 Hi,

 I've been seeing conflicting info about which version of the qemu RPMs is
 needed to do live migration under CentOS. It appears the stock ones will not
 work and the RHEV ones are required. All the mailing list posts I see
 are from 3-4 months ago, so I'm not sure.

 I'm getting VDSGenericException: VDSErrorException: Failed to
 SnapshotVDS, error = Snapshot failed, code = 48 (Failed with error
 SNAPSHOT_FAILED and code 48)

 I also saw this thread:
 http://comments.gmane.org/gmane.linux.centos.general/138593

 I've been having issues getting those to install, but before I spent too
 much more time I wanted to really see if I was on the right track.

 Is there a better OS choice? I first started trying with Fedora 19 and
 20 and had major issues, went to CentOS 6.5 and this is the first and
 only issue I've run into so far.

 Thanks!
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

Well, going with Fedora would at least get you the snapshots working, if I 
remember correctly, but that's not something you run in production. As you 
said, major issues.

For CentOS, you need special versions of certain packages, since RedHat wants 
you to pay for RHEV, they have chosen to cripple the standard packages so those 
features won't work:
http://lists.ovirt.org/pipermail/devel/2014-June/007735.html

And here you can find the packages you need:
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/

/K
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration - guest VM stall

2014-06-13 Thread Michal Skrivanek

On 9 Jun 2014, at 21:05, Markus Stockhausen wrote:

 Hello,
 
 at the moment we are investigating stalls of Windows XP VMs during
 live migration. Our environment consists of:
 
 - FC20 hypervisor nodes 
 - qemu 1.6.2
 - OVirt 3.4.1
 - Guest: Windows XP SP2
 - VM Disks: Virtio & IDE tested
 - SPICE / VNC: both tested
 - Balloon: With & without tested
 - Cluster compatibility: 3.4 - CPU Nehalem
 
 After 2-10 live migrations the Windows XP guest is no longer responsive.
 
 First of all we thought that it might be related to SPICE, because we were
 no longer able to log on to the console. So we installed the XP telnet server
 in the VM, but that showed a similar behaviour:

 - The telnet welcome dialogue is always available (network seems OK)
 - Sometimes after a live migration, if you enter the password, the telnet
   session gives no response.
 In parallel, the SPICE console allows one to move open windows. But as soon
 as one clicks on the Start menu, the system gives no response.
 
 Even after updating to qemu 2.0 from the virt-preview repositories, the
 behaviour stays the same. Looks like the system cannot access

This really seems more SPICE- or QEMU-related…
Can you isolate the behavior to that single VM? Or to a single OS type (others
work OK)? Or is it happening for any other VM randomly?

Thanks,
michal

 
 Any ideas?
 
 Markus
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
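
When chasing a post-migration hang like this, the per-VM qemu log on both
the source and destination hosts is a reasonable first stop; a minimal
sketch (the VM name is a placeholder for whatever appears under
/var/log/libvirt/qemu/ on your hosts):

    # Watch the guest's qemu log on each host while reproducing the
    # migration (replace WinXP with the VM's name as libvirt knows it):
    tail -f /var/log/libvirt/qemu/WinXP.log

    # vdsm's view of the same migration:
    grep -i migrat /var/log/vdsm/vdsm.log | tail -n 50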


[ovirt-users] Live migration - guest VM stall

2014-06-09 Thread Markus Stockhausen
Hello,

at the moment we are investigating stalls of Windows XP VMs during
live migration. Our environment consists of:

- FC20 hypervisor nodes
- qemu 1.6.2
- OVirt 3.4.1
- Guest: Windows XP SP2
- VM Disks: Virtio & IDE tested
- SPICE / VNC: both tested
- Balloon: With & without tested
- Cluster compatibility: 3.4 - CPU Nehalem

After 2-10 live migrations the Windows XP guest is no longer responsive.

First of all we thought that it might be related to SPICE, because we were
no longer able to log on to the console. So we installed the XP telnet server
in the VM, but that showed a similar behaviour:

- The telnet welcome dialogue is always available (network seems OK)
- Sometimes after a live migration, if you enter the password, the telnet
  session gives no response.
In parallel, the SPICE console allows one to move open windows. But as soon
as one clicks on the Start menu, the system gives no response.

Even after updating to qemu 2.0 from the virt-preview repositories, the
behaviour stays the same. Looks like the system cannot access

Any ideas?

Markus


This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-06-02 Thread Maurice James
I will give it a try. Thanks 

Maurice James 
Media-Node 
www.media-node.com 

- Original Message -

From: Christian Rebel christian.re...@gmx.at 
To: users@ovirt.org 
Sent: Saturday, May 31, 2014 12:23:08 PM 
Subject: Re: [ovirt-users] Live Migration Fail 



Hi Maurice & Maor,



I had exactly the same issue, and in order to fix it I implemented the below
workaround; hope I will not have any further problems with it…

But you have to keep in mind that the snapshots are not getting deleted
afterwards; I think this issue is targeted for Release 3.5 with the
“Live Merge” feature.
Workaround:
I force-installed all qemu*5.8.x86*.rpm packages from
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
on the physical hosts (BTW, my hosts are running CentOS 6.5 with the latest
yum updates).



see also: 

https://bugzilla.redhat.com/show_bug.cgi?id=1009100 

http://lists.ovirt.org/pipermail/users/2014-April/023132.html 



br, 

Christian 




From: Maurice James [mailto:mja...@media-node.com] 
Sent: Samstag, 10. Mai 2014 02:33 
To: users 
Subject: [ovirt-users] Live Migration Fail 





Live disk migrations are still failing even after upgrade to 3.4.1 from 3.4.0. 
Is this still an open issue? 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-06-02 Thread Maurice James
That didn't work for me 

- Original Message -

From: Christian Rebel christian.re...@gmx.at 
To: users@ovirt.org 
Sent: Saturday, May 31, 2014 12:23:08 PM 
Subject: Re: [ovirt-users] Live Migration Fail 



Hi Maurice & Maor,



I had exactly the same issue, and in order to fix it I implemented the below
workaround; hope I will not have any further problems with it…

But you have to keep in mind that the snapshots are not getting deleted
afterwards; I think this issue is targeted for Release 3.5 with the
“Live Merge” feature.
Workaround:
I force-installed all qemu*5.8.x86*.rpm packages from
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
on the physical hosts (BTW, my hosts are running CentOS 6.5 with the latest
yum updates).



see also: 

https://bugzilla.redhat.com/show_bug.cgi?id=1009100 

http://lists.ovirt.org/pipermail/users/2014-April/023132.html 



br, 

Christian 




From: Maurice James [mailto:mja...@media-node.com] 
Sent: Samstag, 10. Mai 2014 02:33 
To: users 
Subject: [ovirt-users] Live Migration Fail 





Live disk migrations are still failing even after upgrade to 3.4.1 from 3.4.0. 
Is this still an open issue? 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-06-02 Thread Maurice James
It looks like my problem is tied to the out-of-the-box Blank template. I'm
looking for help on how to reassign the template once the VM is already created


- Original Message -

From: Maurice James mja...@media-node.com 
To: Christian Rebel christian.re...@gmx.at 
Cc: users@ovirt.org 
Sent: Monday, June 2, 2014 8:14:21 AM 
Subject: Re: [ovirt-users] Live Migration Fail 

That didn't work for me 

- Original Message -

From: Christian Rebel christian.re...@gmx.at 
To: users@ovirt.org 
Sent: Saturday, May 31, 2014 12:23:08 PM 
Subject: Re: [ovirt-users] Live Migration Fail 



Hi Maurice & Maor,



I had exactly the same issue, and in order to fix it I implemented the below
workaround; hope I will not have any further problems with it…

But you have to keep in mind that the snapshots are not getting deleted
afterwards; I think this issue is targeted for Release 3.5 with the
“Live Merge” feature.
Workaround:
I force-installed all qemu*5.8.x86*.rpm packages from
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
on the physical hosts (BTW, my hosts are running CentOS 6.5 with the latest
yum updates).



see also: 

https://bugzilla.redhat.com/show_bug.cgi?id=1009100 

http://lists.ovirt.org/pipermail/users/2014-April/023132.html 



br, 

Christian 




From: Maurice James [mailto:mja...@media-node.com] 
Sent: Samstag, 10. Mai 2014 02:33 
To: users 
Subject: [ovirt-users] Live Migration Fail 





Live disk migrations are still failing even after upgrade to 3.4.1 from 3.4.0. 
Is this still an open issue? 

___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-31 Thread Christian Rebel
Hi Maurice & Maor,

 

I had exactly the same issue, and in order to fix it I implemented the below
workaround; hope I will not have any further problems with it…

But you have to keep in mind that the snapshots are not getting deleted
afterwards; I think this issue is targeted for Release 3.5 with the
“Live Merge” feature.

Workaround:
I force-installed all qemu*5.8.x86*.rpm packages from
http://jenkins.ovirt.org/view/All/job/qemu-kvm-rhev_create-rpms_el6/lastStableBuild/
on the physical hosts (BTW, my hosts are running CentOS 6.5 with the latest
yum updates).


see also:

https://bugzilla.redhat.com/show_bug.cgi?id=1009100

http://lists.ovirt.org/pipermail/users/2014-April/023132.html

 

br,

Christian

 

From: Maurice James [mailto:mja...@media-node.com] 
Sent: Samstag, 10. Mai 2014 02:33
To: users
Subject: [ovirt-users] Live Migration Fail

 

Live disk migrations are still failing even after upgrade to 3.4.1 from 3.4.0. 
Is this still an open issue?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-11 Thread Maor Lipchuk
Is there an open bug on it?
If not, can you please file a bug on it and attach the VDSM logs (from both
hosts), the engine logs, and the output from the host of the tree command
under /rhev/data-center, and also ls -l to eliminate any permission issues
under the image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.

Thanks,
Maor

On 05/11/2014 02:37 AM, Maurice James wrote:
 This fails for all VM disks
 
 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com
 Cc: users users@ovirt.org
 Sent: Saturday, May 10, 2014 7:19:21 PM
 Subject: Re: [ovirt-users] Live Migration Fail
 
 Seems like VDSM has encountered a problem finding the drive:
 Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot)
 vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't
 exist: {'device': 'disk', 'domainID':
 'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID':
 'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID':
 '21484146-1a6c-4a31-896e-da1156888dfc'}
 
 Can you please run the tree command on /rhev/data-center/..
 Also can you please run ls -l to eliminate any permission issues under
 image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
 Does this fail only for a specific VM, or is it also failing for other VMs?
 
 regards,
 Maor
 
 On 05/11/2014 12:43 AM, Maurice James wrote:
 VDSM logs from the source and destination are attached





 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com, users users@ovirt.org
 Sent: Saturday, May 10, 2014 4:42:00 PM
 Subject: Re: [ovirt-users] Live Migration Fail

 Hi Maurice,

 I was looking at your engine and VDSM logs; it looks like the live storage
 migration operation was done on a host called Staurn, but the VDSM logs
 seem to be from the Beetlejuice host. Can you check this, please?

 regards,
 Maor

 On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
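
A minimal sketch of the checks Maor asks for, using the storage domain and
image IDs from the log excerpt above; the mnt-based path is an assumption
about the usual file-domain layout, so take the real path from the tree
output:

    # On the host, dump the storage layout vdsm has set up:
    tree /rhev/data-center/

    # Check ownership and permissions under the image directory:
    ls -l /rhev/data-center/mnt/*/e0e65e47-52c8-41bd-8499-a3e025831215/images/21484146-1a6c-4a31-896e-da1156888dfc/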


Re: [ovirt-users] Live Migration Fail

2014-05-11 Thread Maurice James
Bug opened
https://bugzilla.redhat.com/show_bug.cgi?id=1096529 

- Original Message -
From: Maor Lipchuk mlipc...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users users@ovirt.org
Sent: Sunday, May 11, 2014 5:19:27 AM
Subject: Re: [ovirt-users] Live Migration Fail

Is there an open bug on it?
If not, can you please file a bug on it and attach the VDSM logs (from both
hosts), the engine logs, and the output from the host of the tree command
under /rhev/data-center, and also ls -l to eliminate any permission issues
under the image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.

Thanks,
Maor

On 05/11/2014 02:37 AM, Maurice James wrote:
 This fails for all VM disks
 
 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com
 Cc: users users@ovirt.org
 Sent: Saturday, May 10, 2014 7:19:21 PM
 Subject: Re: [ovirt-users] Live Migration Fail
 
 Seems like VDSM has encountered a problem finding the drive:
 Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot)
 vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't
 exist: {'device': 'disk', 'domainID':
 'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID':
 'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID':
 '21484146-1a6c-4a31-896e-da1156888dfc'}
 
 Can you please run the tree command on /rhev/data-center/..
 Also can you please run ls -l to eliminate any permission issues under
 image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
 Does this fail only for a specific VM, or is it also failing for other VMs?
 
 regards,
 Maor
 
 On 05/11/2014 12:43 AM, Maurice James wrote:
 VDSM logs from the source and destination are attached





 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com, users users@ovirt.org
 Sent: Saturday, May 10, 2014 4:42:00 PM
 Subject: Re: [ovirt-users] Live Migration Fail

 Hi Maurice,

 I was looking at your engine and VDSM logs; it looks like the live storage
 migration operation was done on a host called Staurn, but the VDSM logs
 seem to be from the Beetlejuice host. Can you check this, please?

 regards,
 Maor

 On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-11 Thread Maor Lipchuk
Thanks

On 05/11/2014 09:24 PM, Maurice James wrote:
 Bug opened
 https://bugzilla.redhat.com/show_bug.cgi?id=1096529 
 
 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com
 Cc: users users@ovirt.org
 Sent: Sunday, May 11, 2014 5:19:27 AM
 Subject: Re: [ovirt-users] Live Migration Fail
 
 Is there an open bug on it?
 If not, can you please file a bug on it and attach the VDSM logs (from both
 hosts), the engine logs, and the output from the host of the tree command
 under /rhev/data-center, and also ls -l to eliminate any permission issues
 under the image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
 
 Thanks,
 Maor
 
 On 05/11/2014 02:37 AM, Maurice James wrote:
 This fails for all VM disks

 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com
 Cc: users users@ovirt.org
 Sent: Saturday, May 10, 2014 7:19:21 PM
 Subject: Re: [ovirt-users] Live Migration Fail

 Seems like VDSM has encountered a problem finding the drive:
 Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot)
 vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't
 exist: {'device': 'disk', 'domainID':
 'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID':
 'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID':
 '21484146-1a6c-4a31-896e-da1156888dfc'}

 Can you please run the tree command on /rhev/data-center/..
 Also can you please run ls -l to eliminate any permission issues under
 image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
 Does this fail only for a specific VM, or is it also failing for other VMs?

 regards,
 Maor

 On 05/11/2014 12:43 AM, Maurice James wrote:
 VDSM logs from the source and destination are attached





 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com, users users@ovirt.org
 Sent: Saturday, May 10, 2014 4:42:00 PM
 Subject: Re: [ovirt-users] Live Migration Fail

 Hi Maurice,

 I was looking at your engine and VDSM logs; it looks like the live storage
 migration operation was done on a host called Staurn, but the VDSM logs
 seem to be from the Beetlejuice host. Can you check this, please?

 regards,
 Maor

 On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-10 Thread Maor Lipchuk
Hi Maurice,

I was looking at your engine and VDSM logs; it looks like the live storage
migration operation was done on a host called Staurn, but the VDSM logs
seem to be from the Beetlejuice host. Can you check this, please?

regards,
Maor

On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-10 Thread Maurice James
Beetlejuice is the SPM and the engine host. I ran the migration again and got
the vdsm log from the destination host.

- Original Message -
From: Maor Lipchuk mlipc...@redhat.com
To: Maurice James mja...@media-node.com, users users@ovirt.org
Sent: Saturday, May 10, 2014 4:42:00 PM
Subject: Re: [ovirt-users] Live Migration Fail

Hi Maurice,

I was looking at your engine and VDSM logs; it looks like the live storage
migration operation was done on a host called Staurn, but the VDSM logs
seem to be from the Beetlejuice host. Can you check this, please?

regards,
Maor

On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-10 Thread Maor Lipchuk
Seems like VDSM has encountered a problem finding the drive:
Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot)
vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't
exist: {'device': 'disk', 'domainID':
'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID':
'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID':
'21484146-1a6c-4a31-896e-da1156888dfc'}

Can you please run the tree command on /rhev/data-center/..
Also can you please run ls -l to eliminate any permission issues under
image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
Does this fail only for a specific VM, or is it also failing for other VMs?

regards,
Maor

On 05/11/2014 12:43 AM, Maurice James wrote:
 VDSM logs from the source and destination are attached
 
 
 
 
 
 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com, users users@ovirt.org
 Sent: Saturday, May 10, 2014 4:42:00 PM
 Subject: Re: [ovirt-users] Live Migration Fail
 
 Hi Maurice,
 
 I was looking at your engine and VDSM logs; it looks like the live storage
 migration operation was done on a host called Staurn, but the VDSM logs
 seem to be from the Beetlejuice host. Can you check this, please?
 
 regards,
 Maor
 
 On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live Migration Fail

2014-05-10 Thread Maurice James
This fails for all VM disks

- Original Message -
From: Maor Lipchuk mlipc...@redhat.com
To: Maurice James mja...@media-node.com
Cc: users users@ovirt.org
Sent: Saturday, May 10, 2014 7:19:21 PM
Subject: Re: [ovirt-users] Live Migration Fail

Seems like VDSM has encountered a problem finding the drive:
Thread-221::ERROR::2014-05-10 17:36:57,134::vm::3928::vm.Vm::(snapshot)
vmId=`7f341f92-134a-47e7-b7ed-e7df772806f3`::The base volume doesn't
exist: {'device': 'disk', 'domainID':
'e0e65e47-52c8-41bd-8499-a3e025831215', 'volumeID':
'deae7162-1eb7-423e-9115-3e7de542c89c', 'imageID':
'21484146-1a6c-4a31-896e-da1156888dfc'}

Can you please run the tree command on /rhev/data-center/..
Also can you please run ls -l to eliminate any permission issues under
image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
Does this fail only for a specific VM, or is it also failing for other VMs?

regards,
Maor

On 05/11/2014 12:43 AM, Maurice James wrote:
 VDSM logs from the source and destination are attached
 
 
 
 
 
 - Original Message -
 From: Maor Lipchuk mlipc...@redhat.com
 To: Maurice James mja...@media-node.com, users users@ovirt.org
 Sent: Saturday, May 10, 2014 4:42:00 PM
 Subject: Re: [ovirt-users] Live Migration Fail
 
 Hi Maurice,
 
 I was looking at your engine and VDSM logs; it looks like the live storage
 migration operation was done on a host called Staurn, but the VDSM logs
 seem to be from the Beetlejuice host. Can you check this, please?
 
 regards,
 Maor
 
 On 05/10/2014 03:33 AM, Maurice James wrote:
 Live disk migrations are still failing even after upgrade to 3.4.1 from
 3.4.0. Is this still an open issue?


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration failing

2014-04-29 Thread Francesco Romani
- Original Message -
 From: Steve Dainard sdain...@miovision.com
 To: users users@ovirt.org
 Sent: Tuesday, April 29, 2014 4:32:08 AM
 Subject: Re: [ovirt-users] Live migration failing
 
 Another error on migration.

Hi, in both cases the core issue is

libvirtError: Unable to read from monitor: Connection reset by peer

Can you share the libvirtd and qemu logs?

Hopefully we can find some more information on those logs.

Bests,

-- 
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
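
If the default logs say too little, libvirtd on both hosts can be made more
verbose before reproducing the failed migration. A sketch using standard
libvirtd.conf options; restarting libvirtd on a vdsm-managed host is
disruptive, so treat this as a lab-only step:

    # Append debug logging settings (log_level 1 = DEBUG) and restart:
    cat >> /etc/libvirt/libvirtd.conf <<'EOF'
    log_level = 1
    log_outputs = "1:file:/var/log/libvirt/libvirtd-debug.log"
    EOF
    service libvirtd restart

    # The per-VM qemu logs live under:
    ls /var/log/libvirt/qemu/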


Re: [ovirt-users] Live migration failing

2014-04-29 Thread Dafna Ron
Actually, the best way to debug this would be to look at both dest and
src vdsm logs.

Is this happening on all VMs or just one of them?
Was this VM launched from an ISO? Is that ISO still available?
Are there any snapshots?
What are the vdsm, libvirt and qemu versions?

Thanks.

Dafna


On 04/29/2014 02:24 PM, Steve Dainard wrote:

Thanks, logs attached:

libvirtd.log.4/central-syslog.log covers the first event (17:12 
timestamp)

libvirtd.log.3/owncloud.log covers the second event (01:22 timestamp)


Steve



On Tue, Apr 29, 2014 at 4:48 AM, Francesco Romani from...@redhat.com wrote:


- Original Message -
 From: Steve Dainard sdain...@miovision.com
 To: users users@ovirt.org
 Sent: Tuesday, April 29, 2014 4:32:08 AM
 Subject: Re: [ovirt-users] Live migration failing

 Another error on migration.

Hi, in both cases the core issue is

libvirtError: Unable to read from monitor: Connection reset by peer

can you share the libvirtd and qemu logs?

Hopefully we can find some more information on those logs.

Bests,

--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
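
One way to gather the version information Dafna asks for, run on each host:

    # Versions of the virtualization stack on this host:
    rpm -q vdsm libvirt qemu-kvm

    # Catches the RHEV-patched qemu build too, if that is what is installed:
    rpm -qa | grep -i qemu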


Re: [ovirt-users] Live migration failing

2014-04-29 Thread Dafna Ron
2014-Apr-28, 13:12 Migration failed due to Error: Fatal error during
migration (VM: central-syslog, Source: ovirt002, Destination: ovirt001).
2014-Apr-28, 13:12 Migration started (VM: central-syslog, Source: 
ovirt002, Destination: ovirt001, User: admin).



Thanks,
Steve



On Tue, Apr 29, 2014 at 9:51 AM, Dafna Ron d...@redhat.com wrote:


Actually, the best way to debug this would be to look at both dest
and src vdsm logs.
Is this happening on all VMs or just one of them?
Was this VM launched from an ISO? Is that ISO still available?
Are there any snapshots?
What are the vdsm, libvirt and qemu versions?

Thanks.

Dafna



On 04/29/2014 02:24 PM, Steve Dainard wrote:

Thanks, logs attached:

libvirtd.log.4/central-syslog.log covers the first event
(17:12 timestamp)
libvirtd.log.3/owncloud.log covers the second event (01:22
timestamp)


Steve



On Tue, Apr 29, 2014 at 4:48 AM, Francesco Romani
from...@redhat.com wrote:

- Original Message -
 From: Steve Dainard sdain...@miovision.com
 To: users users@ovirt.org
 Sent: Tuesday, April 29, 2014 4:32:08 AM
 Subject: Re: [ovirt-users] Live migration failing

 Another error on migration.

Hi, in both cases the core issue is

libvirtError: Unable to read from monitor: Connection reset
by peer

can you share the libvirtd and qemu logs?

Hopefully we can find some more information on those logs.

Bests,

--
Francesco Romani
Red Hat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



-- 
Dafna Ron






--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Live migration failing

2014-04-28 Thread Gabi C
Do you have Network QoS?
On 29.04.2014 05:32, Steve Dainard sdain...@miovision.com wrote:

 Another error on migration.

 GUI (yes the VM is actually running, and continues to run after migration
 failure):
 2014-Apr-28, 22:22 VM owncloud is down. Exit message: Domain not found: no
 domain with matching uuid '8c7ab0d2-c059-44f0-9966-2a2e1d986956'.
 2014-Apr-28, 22:22 Migration started (VM: owncloud, Source: ovirt002,
 Destination: ovirt001, User: admin).

 vdsm.log attached.




 Steve Dainard
 IT Infrastructure Manager
 Miovision http://miovision.com/ | Rethink Traffic

 Blog http://miovision.com/blog | LinkedIn
 https://www.linkedin.com/company/miovision-technologies | Twitter
 https://twitter.com/miovision | Facebook
 https://www.facebook.com/miovision
 --
  Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
 ON, Canada | N2C 1L3
 This e-mail may contain information that is privileged or confidential. If
 you are not the intended recipient, please delete the e-mail and any
 attachments and notify us immediately.


 On Mon, Apr 28, 2014 at 1:26 PM, Steve Dainard sdain...@miovision.com wrote:

 Upgraded from oVirt 3.3.2 to 3.4 recently
 (ovirt-engine-3.4.0-1.el6.noarch)
 Hosts packages:
 vdsm-4.14.6-0.el6.x86_64
 libvirt-0.10.2-29.el6_5.7.x86_64
 qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64 (from the jenkins build with live
 snapshot support)


 GUI Errors:
 2014-Apr-28, 13:12 Migration failed due to Error: Fatal error during
 migration (VM: central-syslog, Source: ovirt002, Destination: ovirt001).
 2014-Apr-28, 13:12 Migration failed due to Error: Fatal error during
 migration. Trying to migrate to another Host (VM: central-syslog, Source:
 ovirt002, Destination: ovirt001).
 2014-Apr-28, 13:12 Migration started (VM: central-syslog, Source:
 ovirt002, Destination: ovirt001, User: admin).

 VDSM log from ovirt002 host attached.

 Thanks for any help,


 Steve



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] live migration and snapshot problem

2014-04-10 Thread Demeter Tibor
Dear members, 

We made a test platform for testing oVirt 3.4 features. We got four AMD X2 4400+
machines with 2 GB of RAM each and built a Gluster-based cluster. I set up an
AMD G2-based cluster.
I rebuilt and installed the qemu-kvm-rhev package (
http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.src.rpm
).
The data storage is a four-brick distributed-replicated Gluster storage. I
got a virtual machine (CentOS 6.5) from oVirt's OpenStack-based template
repository.

Everything was good; the VM can run on any host, but live migration doesn't
work. Also, the snapshot feature is missing from the menu. I can make a live
snapshot, but it doesn't show on the panel.

So: 

- I lost the snapshots:) 
- the live migration doesn't work. 

Can anyone help me? 

Thanks in advance. 

Tibor 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] live migration and snapshot problem

2014-04-10 Thread Dafna Ron
Can you please attach the engine, vdsm, libvirt, qemu and gluster logs from
both the create-snapshot and live-migration actions?


Thanks,
Dafna

On 04/10/2014 05:59 PM, Demeter Tibor wrote:

Dear members,

We made a test platform for testing oVirt 3.4 features. We got four AMD
X2 4400+ machines with 2 GB of RAM each and built a Gluster-based cluster.
I set up an AMD G2-based cluster.
I rebuilt and installed the qemu-kvm-rhev package
(http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.src.rpm).
The data storage is a four-brick distributed-replicated Gluster
storage. I got a virtual machine (CentOS 6.5) from oVirt's
OpenStack-based template repository.


Everything was good; the VM can run on any host, but live
migration doesn't work. Also, the snapshot feature is missing from the menu.
I can make a live snapshot, but it doesn't show on the panel.


So:

- I  lost the snapshots:)
- the live migration doesn't work.

Can anyone help me?

Thanks in advance.

Tibor


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Dafna Ron
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
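
For reference, a sketch of where those logs usually live on an oVirt 3.4
setup (standard default paths; adjust if your installation differs):

    # Engine machine:
    less /var/log/ovirt-engine/engine.log

    # Each hypervisor host:
    less /var/log/vdsm/vdsm.log       # vdsm
    ls /var/log/libvirt/qemu/         # per-VM qemu logs
    ls /var/log/glusterfs/            # gluster client/brick logs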


Re: [ovirt-users] live migration and snapshot problem

2014-04-10 Thread Demeter Tibor
Hi,

I'm sorry, but (I don't know how) at this moment everything is OK. I can do
live migrations between hosts, and the live snapshot things are displayed.
But I didn't do anything...

Sorry for this.

Tibor



- Original Message -
 Can you please attach the engine, vdsm, libvirt, qemu and gluster logs from
 both the create-snapshot and live-migration actions?
 
 Thanks,
 Dafna
 
 On 04/10/2014 05:59 PM, Demeter Tibor wrote:
  Dear members,
 
  We made a test platform for testing oVirt 3.4 features. We got four AMD
  X2 4400+ machines with 2 GB of RAM each and built a Gluster-based cluster.
  I set up an AMD G2-based cluster.
  I rebuilt and installed the qemu-kvm-rhev package
  (http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.src.rpm).
  The data storage is a four-brick distributed-replicated Gluster
  storage. I got a virtual machine (CentOS 6.5) from oVirt's
  OpenStack-based template repository.

  Everything was good; the VM can run on any host, but live
  migration doesn't work. Also, the snapshot feature is missing from the menu.
  I can make a live snapshot, but it doesn't show on the panel.
 
  So:
 
  - I  lost the snapshots:)
  - the live migration doesn't work.
 
  Can anyone help me?
 
  Thanks in advance.
 
  Tibor
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 --
 Dafna Ron
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users