Re: [ovirt-users] Getting error when i try to assign logical networks to interfaces

2017-03-24 Thread martin chamambo
@Dan I am not using FCoE; I am trying to set up logical networks. My
oVirt Engine version is 4.0.6.3-1.el7.centos and my vdsm version is
vdsm-4.18.4.1-0.el7.centos.

On Thu, Mar 23, 2017 at 11:26 PM, Dan Kenigsberg  wrote:

> On Thu, Mar 23, 2017 at 9:25 AM, martin chamambo 
> wrote:
> > I haven't set up any hooks, and when I try to assign logical networks to an
> > already existing interface on the host, it gives me this error:
> >
> > Hook error: Hook Error: ('Traceback (most recent call last):\n  File
> > "/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe", line 18, in
> > \nfrom vdsm.netconfpersistence import
> > RunningConfig\nImportError: No module named netconfpersistence\n',)
> >
>
> vdsm-hook-fcoe is installed by default on ovirt-node.
> Which version of vdsm (and vdsm-hook-fcoe) are you using?
> This Traceback smells like a mismatch between the two.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt node local storage behavior

2017-03-24 Thread Misak Khachatryan
Hello,

I have one server where I installed oVirt Node 4.1 and put it into my
infrastructure. It is used as a single-server cluster with local storage
configured.

I created a separate volume for it and mounted it as a separate directory:


[root@virt4 ~]# lvs
  LV                                 VG  Attr       LSize   Pool   Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  local_storage                      onn Vwi-aotz-- 350.00g pool00                                   7.27
  ovirt-node-ng-4.1.0-0.20170201.0   onn Vri---tz-k 425.37g pool00
  ovirt-node-ng-4.1.0-0.20170201.0+1 onn Vwi---tz-- 425.37g pool00 ovirt-node-ng-4.1.0-0.20170201.0
  ovirt-node-ng-4.1.0-0.20170316.0   onn Vwi---tz-k 425.37g pool00 root
  ovirt-node-ng-4.1.0-0.20170316.0+1 onn Vwi---tz-- 425.37g pool00 ovirt-node-ng-4.1.0-0.20170316.0
  ovirt-node-ng-4.1.1-0.20170322.0   onn Vri---tz-k 425.37g pool00
  ovirt-node-ng-4.1.1-0.20170322.0+1 onn Vwi-aotz-- 425.37g pool00 ovirt-node-ng-4.1.1-0.20170322.0  2.17
  pool00                             onn twi-aotz-- 440.37g                                          18.44  9.91
  root                               onn Vwi---tz-- 425.37g pool00
  swap                               onn -wi-ao       7.88g
  var                                onn Vwi-aotz--  15.00g pool00                                   16.28

mount:

/dev/mapper/onn-local_storage on /local_storage type xfs
(rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)

[root@virt4 ~]# ll /
drwxr-xr-x.   3 vdsm kvm 76 Mar 23 09:02 local_storage

All operations were done through the Cockpit UI.

When 4.1.1 came out I decided to upgrade it. After the server rebooted I
discovered that my local storage folder had been cleared, so I need to
recreate all VMs and disks from scratch.

Is this a bug, or did I do something wrong?


Best regards,
Misak Khachatryan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Installation of ovirtNode3.6 on VMware workstation is failing

2017-03-24 Thread martin chamambo
Good day

I am using oVirt Engine 4.0 and oVirt Node 4.0. On the same engine I also
need to test oVirt Node 3.6, since it is supported.

Initially I struggled with installing oVirt Engine 4.0 until I selected LVM
thin provisioning.

The same trick is not working with oVirt Node 3.6; no type of partitioning
works: standard partition, LVM, or LVM thin provisioning.

Has anyone experienced the same issue?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bulk move vm disks?

2017-03-24 Thread nicolas

On 2017-03-24 10:29, Ernest Beinrohr wrote:

On 24.03.2017 11:11, gflwqs gflwqs wrote:


Hi list,
I need to move 600+ VMs from one data domain to another; however,
from what I can see in the GUI I can only move one VM disk at a
time, which would be very time consuming.

Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the
migration.

 This is my python program:

 # ... API init

 vms= api.vms.list(query = 'vmname')


If you're planning to do it that way, make sure you install version 3.x 
of ovirt-engine-sdk-python. Newer versions (4.x) differ too much in 
syntax.


Also, if you want to move a whole Storage Domain, you might be 
interested in listing VMs by the storage domain name, i.e.:


api.vms.list(query='Storage=myoldstoragedomain')

That will return a list of all machines in that storage domain.
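
For comparison, here is a rough, untested sketch of how the same bulk move
might look with the 4.x SDK (ovirtsdk4), just to show how different the
syntax is. The connection details are placeholders and the service/method
names are from memory, so double-check them against the SDK examples before
relying on this:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.org/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)

system = connection.system_service()
vms_service = system.vms_service()
disks_service = system.disks_service()

# Only the VMs that have disks on the old storage domain.
for vm in vms_service.list(search='storage=myoldstoragedomain'):
    attachments = vms_service.vm_service(vm.id).disk_attachments_service().list()
    for attachment in attachments:
        disk_service = disks_service.disk_service(attachment.disk.id)
        # Ask the engine to move this disk to the new storage domain.
        # (Status polling, analogous to the v3 loop below, is omitted here.)
        disk_service.move(storage_domain=types.StorageDomain(name='NEWSTORAGE'))

connection.close()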



 for vm in vms:
     print vm.name
     for disk in vm.disks.list():
         print " disk: " + disk.name + " " + disk.get_alias()
         sd = api.storagedomains.get('NEWSTORAGE')

         try:
             disk.move(params.Action(storage_domain=sd))

             disk_id = disk.get_id()
             while True:
                 print("Waiting for movement to complete ...")
                 time.sleep(10)
                 disk = vm.disks.get(id=disk_id)
                 if disk.get_status().get_state() == "ok":
                     break

         except:
             print "Cannot move."

 api.disconnect()

--

 Ernest Beinrohr, AXON PRO
 Ing [1], RHCE [2], RHCVA [2], LPIC [3], VCA [4],
 +421-2-62410360 +421-903-482603


Links:
--
[1] http://www.beinrohr.sk/ing.php
[2] http://www.beinrohr.sk/rhce.php
[3] http://www.beinrohr.sk/lpic.php
[4] http://www.beinrohr.sk/vca.php

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bulk move vm disks?

2017-03-24 Thread gflwqs gflwqs
Hi list,
I need to move 600+ VMs from one data domain to another; however, from what
I can see in the GUI I can only move one VM disk at a time, which would be
very time consuming.

Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the
migration.

Regards
Christian
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Hello,

I have a VM on a host that apparently cannot be live-migrated away. I'm
trying to migrate it to another (superior) cluster, but please consider:
- the VM isn't receiving much traffic and is not doing much at all
- I've already successfully live-migrated other VMs from this host to the
same cluster

Looking through engine.log I cannot see anything interesting apart from
the generic messages below (I grepped for the job ID):

2017-03-24 10:30:52,186 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [20518310] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01., Source:
vmhost04, Destination: vmhost06, User: admin@internal-authz).
2017-03-24 10:34:28,072 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost06 . Trying
to migrate to another Host.
2017-03-24 10:34:28,676 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Migration started (VM: druid-co01., Source: vmhost04,
Destination: vmhost11, User: admin@internal-authz).
2017-03-24 10:37:59,097 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost11 .
Trying to migrate to another Host.
2017-03-24 10:38:00,626 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01, Source: vmhost04,
Destination: vmhost08, User: admin@internal-authz).
2017-03-24 10:41:29,441 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01 to Host vmhost08 .
Trying to migrate to another Host.
2017-03-24 10:41:29,488 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration failed  (VM: druid-co01, Source: vmhost04).

-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bulk move vm disks?

2017-03-24 Thread nicolas
You can use oVirt 4.1 with the ovirt-engine-sdk-python package version 
3.x, they are backwards compatible.


Regards.

On 2017-03-24 11:12, gflwqs gflwqs wrote:

OK, thank you Nicolas, we are using oVirt 4.1.

Regards
Christian

2017-03-24 12:03 GMT+01:00 :


On 2017-03-24 10:29, Ernest Beinrohr wrote:
On 24.03.2017 11:11, gflwqs gflwqs wrote:

Hi list,
I need to move 600+ VMs from one data domain to another; however,
from what I can see in the GUI I can only move one VM disk at a
time, which would be very time consuming.

Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the
migration.
 This is my python program:

 # ... API init

 vms= api.vms.list(query = 'vmname')


 If you're planning to do it that way, make sure you install version
3.x of ovirt-engine-sdk-python. Newer versions (4.x) differ too much
in syntax.

 Also, if you want to move a whole Storage Domain, you might be
interested in listing VMs by the storage domain name, i.e.:

 api.vms.list(query='Storage=myoldstoragedomain')

 That will return a list of all machines in that storage domain.


 for vm in vms:
     print vm.name
     for disk in vm.disks.list():
         print " disk: " + disk.name + " " + disk.get_alias()
         sd = api.storagedomains.get('NEWSTORAGE')

         try:
             disk.move(params.Action(storage_domain=sd))

             disk_id = disk.get_id()
             while True:
                 print("Waiting for movement to complete ...")
                 time.sleep(10)
                 disk = vm.disks.get(id=disk_id)
                 if disk.get_status().get_state() == "ok":
                     break

         except:
             print "Cannot move."

 api.disconnect()

--

 Ernest Beinrohr, AXON PRO
 Ing [1], RHCE [2], RHCVA [2], LPIC [3], VCA [4],
 +421-2-62410360 +421-903-482603

Links:
--
[1] http://www.beinrohr.sk/ing.php
[2] http://www.beinrohr.sk/rhce.php
[3] http://www.beinrohr.sk/lpic.php
[4] http://www.beinrohr.sk/vca.php

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from 3.6 to 4.1

2017-03-24 Thread Christophe TREFOIS

> On 23 Mar 2017, at 20:09, Brett Holcomb  wrote:
> 
> I am currently running oVirt 3.6 on a physical server using hosted engine 
> environment.  I have one server since it's a lab setup.  The storage is on a 
> Synology 3615xs iSCSI LUN so that's where the vms are.  I plan to upgrade to 
> 4.1 and need to check to make sure I understand the procedure.  I've read the 
> oVirt 4.1 Release Notes and they leave some questions.
> 
> First they say I can simply install the 4.1 release repo, update all the 
> ovirt-*-setup* packages, and then run engine-setup.

I don’t know for sure, but I would first go to the latest 4.0 and then to 4.1, as 
I’m not sure they test upgrades from 3.6 to 4.1 directly. 

> 
> 1. I assume this is on the engine VM running on the host physical box.

Yes, inside the VM. But first, follow the guide below and make sure engine is 
in global maintenance mode. 

> 
> 2. What does engine-setup do? Does it know what I have and simply update, or 
> do I have to go through setup again?

You don’t have to set up from scratch. 

> 
> 3.  Then do I go to the host and update all the ovirt stuff?

Yes, first putting the host in local maintenance mode and removing global 
maintenance mode from the engine. 

> 
> However, they then say for oVirt Hosted Engine to follow a link for upgrading, 
> which takes me to a Not Found :( page that did have a link back to the release 
> notes, which in turn link back to the Not Found page. So what do I need to know 
> about upgrading a hosted engine setup that there are no directions for?  Are there 
> some gotchas?  I thought that the release notes said I just had to upgrade 
> the engine and then the host.

@ovirt, can this be fixed ? It’s quite annoying indeed.

Meanwhile, I usually follow the link from the 4.0.0 release notes, which is not 
a 404. 

https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine 

> 
> Given that my VMs are on iSCSI, what happens if things go bad and I have to 
> start from scratch?  Can I import the VMs created under 3.6 into 4.1, or do I 
> have to do something else, like copy them somewhere for backup?

It might be good to shut down the VMs and do an export if you have a storage 
domain for that, just to be 100% safe.
In any case, during the upgrade, since the host is in maintenance, all VMs have 
to be OFF.

> 
> Any other hints and tips are appreciated.

I don’t have iSCSI so I can’t help much with that.
I’m just a regular user who has failed many times :)

> 
> Thanks.
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
And this is the vdsm log from vmhost04:

Thread-6320715::DEBUG::2017-03-24
11:38:02,224::migration::188::virt.vm::(_setupVdsConnection)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Initiating connection with
destination
Thread-6320715::DEBUG::2017-03-24
11:38:02,239::migration::200::virt.vm::(_setupVdsConnection)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Destination server is:
vm08.ovirt.internal:54321
Thread-6320715::DEBUG::2017-03-24
11:38:02,241::migration::246::virt.vm::(_prepareGuest)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration started
Thread-6320715::DEBUG::2017-03-24
11:38:02,253::guestagent::502::virt.vm::(send_lifecycle_event)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::send_lifecycle_event
before_migration called
Thread-6320715::DEBUG::2017-03-24
11:38:02,319::migration::353::virt.vm::(run)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::migration semaphore acquired
after 2 seconds
Thread-6320715::INFO::2017-03-24
11:38:02,950::migration::407::virt.vm::(_startUnderlyingMigration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Creation of destination VM
took: 0 seconds
Thread-6320715::INFO::2017-03-24
11:38:02,951::migration::429::virt.vm::(_startUnderlyingMigration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::starting migration to
qemu+tls://vm08.ovirt.internal/system with miguri tcp://192.168.10.107
Thread-6320715::DEBUG::2017-03-24
11:38:02,951::migration::494::virt.vm::(_perform_with_conv_schedule)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::performing migration with conv
schedule
Thread-6320717::DEBUG::2017-03-24
11:38:02,952::migration::620::virt.vm::(run)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::starting migration monitor
thread
Thread-6320717::DEBUG::2017-03-24
11:38:02,952::migration::739::virt.vm::(_execute_action_with_params)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Setting downtime to 100
Thread-6320717::INFO::2017-03-24
11:38:12,959::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 10 seconds
elapsed, 8% of data processed, total data: 16456MB, processed data: 483MB,
remaining data: 15157MB, transfer speed 52MBps, zero pages: 209524MB,
compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:22,963::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 20 seconds
elapsed, 13% of data processed, total data: 16456MB, processed data:
1003MB, remaining data: 14469MB, transfer speed 51MBps, zero pages:
252844MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:32,966::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 30 seconds
elapsed, 16% of data processed, total data: 16456MB, processed data:
1524MB, remaining data: 13839MB, transfer speed 52MBps, zero pages:
281344MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:42,969::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 40 seconds
elapsed, 42% of data processed, total data: 16456MB, processed data:
2040MB, remaining data: 9685MB, transfer speed 52MBps, zero pages:
1214867MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:52,973::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 50 seconds
elapsed, 46% of data processed, total data: 16456MB, processed data:
2560MB, remaining data: 8944MB, transfer speed 52MBps, zero pages:
1271798MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:02,976::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 60 seconds
elapsed, 51% of data processed, total data: 16456MB, processed data:
3080MB, remaining data: 8148MB, transfer speed 52MBps, zero pages:
1342711MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:12,979::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 70 seconds
elapsed, 56% of data processed, total data: 16456MB, processed data:
3600MB, remaining data: 7353MB, transfer speed 52MBps, zero pages:
1413447MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:22,984::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 80 seconds
elapsed, 59% of data processed, total data: 16456MB, processed data:
4120MB, remaining data: 6755MB, transfer speed 52MBps, zero pages:
1433707MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:32,987::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 90 seconds
elapsed, 64% of 

Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Francesco Romani

On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> And this is the vdsm log from vmhost04:
>
> Thread-6320717::INFO::2017-03-24
> 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> seconds elapsed, 98% of data processed, total data: 16456MB, processed
> data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero
> pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
> libvirtEventLoop::DEBUG::2017-03-24
> 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0
> opaque None
> libvirtEventLoop::INFO::2017-03-24
> 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU stopped: onSuspend
> libvirtEventLoop::DEBUG::2017-03-24
> 11:41:21,069::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Resumed detail 0
> opaque None
> libvirtEventLoop::INFO::2017-03-24
> 11:41:21,069::vm::4815::virt.vm::(_logGuestCpuStatus)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU running: onResume
> Thread-6320715::DEBUG::2017-03-24
> 11:41:21,224::migration::715::virt.vm::(stop)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::stopping migration
> monitor thread
> Thread-6320715::ERROR::2017-03-24
> 11:41:21,225::migration::252::virt.vm::(_recover)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::operation failed:
> migration job: unexpectedly failed

This is surprising (no pun intended).
With a pretty high chance this comes from libvirt; I'm afraid you need
to dig into the libvirt logs/journal entries to learn more.
Vdsm unfortunately couldn't do better than what it is already doing :\

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bulk move vm disks?

2017-03-24 Thread Ernest Beinrohr

On 24.03.2017 11:11, gflwqs gflwqs wrote:

Hi list,
I need to move 600+ VMs from one data domain to another; however, from 
what I can see in the GUI I can only move one VM disk at a time, 
which would be very time consuming.


Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the 
migration.



This is my python program:

# Note: this uses the v3 Python SDK (ovirtsdk); "time" and "params" are the
# only extra imports the snippet itself needs.
import time
from ovirtsdk.xml import params

# ... API init

vms = api.vms.list(query='vmname')

for vm in vms:
    print vm.name
    for disk in vm.disks.list():
        print " disk: " + disk.name + " " + disk.get_alias()
        sd = api.storagedomains.get('NEWSTORAGE')

        try:
            # Ask the engine to (live) move this disk to the new storage domain.
            disk.move(params.Action(storage_domain=sd))

            # Poll until the disk is back in the "ok" state before moving on.
            disk_id = disk.get_id()
            while True:
                print("Waiting for movement to complete ...")
                time.sleep(10)
                disk = vm.disks.get(id=disk_id)
                if disk.get_status().get_state() == "ok":
                    break

        except:
            print "Cannot move."

api.disconnect()



--
Ernest Beinrohr, AXON PRO
Ing, RHCE, RHCVA, LPIC, VCA,

+421-2-62410360 +421-903-482603
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Bulk move vm disks?

2017-03-24 Thread Staniforth, Paul
Hello Christian,
   I have recently moved around 700 VM disks between 
storage domains; you can select multiple disks in the GUI and move them. I did 
this on oVirt 3.6. Most of these were dependent on template disks, so I had to 
copy the template disks to the destination domain; once all the dependent VM 
disks were moved I could remove the template disks from the source domain.
If the VMs are up, a snapshot is automatically created; in version 3.6 these 
aren't automatically removed.
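
In case it is useful, here is a minimal, untested sketch (v3 SDK, same style as 
the script posted earlier in this thread, and assuming the same "api" object) for 
listing those leftover snapshots afterwards so they can be reviewed and removed. 
The 'Auto-generated' description match is my assumption, so check it against what 
the GUI actually shows before deleting anything:

# ... same API init as in the v3 script quoted elsewhere in this thread
# List snapshots that look like leftovers from live storage migration.
# NOTE: the description filter below is an assumption -- verify it first.
for vm in api.vms.list():
    for snap in vm.snapshots.list():
        description = snap.get_description() or ''
        if 'Auto-generated' in description:
            print vm.name + ": " + description
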
Regards,
 Paul S.

On 24 Mar 2017 10:12, gflwqs gflwqs  wrote:
Hi list,
I need to move 600+ VMs from one data domain to another; however, from what I 
can see in the GUI I can only move one VM disk at a time, which would be very 
time consuming.

Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the migration.

Regards
Christian

To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Upgrade from 3.6 to 4.1

2017-03-24 Thread Arman Khalatyan
Before upgrading, make sure that EPEL is disabled; there are some conflicts
with the collectd package.


On Fri, Mar 24, 2017 at 11:51 AM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

>
> On 23 Mar 2017, at 20:09, Brett Holcomb  wrote:
>
> I am currently running oVirt 3.6 on a physical server using hosted engine
> environment.  I have one server since it's a lab setup.  The storage is on
> a Synology 3615xs iSCSI LUN so that's where the vms are.  I plan to upgrade
> to 4.1 and need to check to make sure I understand the procedure.  I've
> read the oVirt 4.1 Release Notes and they leave some questions.
>
> First they say I can simply install the 4.1 release repo, update all the
> ovirt-*-setup* packages, and then run engine-setup.
>
>
> I don’t know for sure, but I would first go to the latest 4.0 and then to 4.1,
> as I’m not sure they test upgrades from 3.6 to 4.1 directly.
>
>
> 1. I assume this is on the engine VM running on the host physical box.
>
>
> Yes, inside the VM. But first, follow the guide below and make sure engine
> is in global maintenance mode.
>
>
> 2. What does engine-setup do? Does it know what I have and simply update,
> or do I have to go through setup again?
>
>
> You don’t have to set up from scratch.
>
>
> 3.  Then do I go to the host and update all the ovirt stuff?
>
>
> Yes, first putting the host in local maintenance mode and removing global
> maintenance mode from the engine.
>
>
> However, they then say for oVirt Hosted Engine to follow a link for upgrading,
> which takes me to a Not Found :( page that did have a link back to the
> release notes, which in turn link back to the Not Found page. So what do I need
> to know about upgrading a hosted engine setup that there are no directions
> for? Are there some gotchas? I thought that the release notes said I just
> had to upgrade the engine and then the host.
>
>
> @ovirt, can this be fixed ? It’s quite annoying indeed.
>
> Meanwhile, I usually follow the link from the 4.0.0 release notes, which is
> not a 404.
>
> https://www.ovirt.org/documentation/how-to/hosted-
> engine/#upgrade-hosted-engine
>
>
> Given that my VMs are on iSCSI, what happens if things go bad and I have to
> start from scratch? Can I import the VMs created under 3.6 into 4.1, or do
> I have to do something else, like copy them somewhere for backup?
>
>
> It might be good to shut down the VMs and do an export if you have a
> storage domain for that, just to be 100% safe.
> In any case, during the upgrade, since the host is in maintenance, all VMs
> have to be OFF.
>
>
> Any other hints and tips are appreciated.
>
>
> I don’t have iSCSI so I can’t help much with that.
> I’m just a regular user who has failed many times :)
>
>
> Thanks.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Mmmh, this is all I got from the libvirt log on the receiving host:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=druid-co01,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-druid-co01/master-key.aes
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
4,maxcpus=64,sockets=16,cores=4,threads=1 -numa
node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8
-smbios 'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-03-24T10:38:03,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-msg timestamp=on
Domain id=5 is tainted: hook-script
2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
52 53 54 55 56 57 58 59 60 61 62 63
2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
2017-03-24 10:41:20.982+: shutting down
2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed:
Input/output error

The donating (source) host doesn't say a thing about this VM.
There's an "input/output error" but I can't see what it relates to...





2017-03-24 13:38 GMT+01:00 Francesco Romani :

>
> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> > And this is the vdsm log from vmhost04:
> >
> > Thread-6320717::INFO::2017-03-24
> > 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> > seconds elapsed, 98% of data processed, total data: 16456MB, processed
> > data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero
> > pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
> > libvirtEventLoop::DEBUG::2017-03-24
> > 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0
> > opaque None
> > libvirtEventLoop::INFO::2017-03-24
> > 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus)
> > 

[ovirt-users] installing a new host when engine in 4.1 and 4.1.1 just released

2017-03-24 Thread Gianluca Cecchi
Hello,
suppose the current situation, with 4.1.1 just released:

- my environment is on 4.1, with an engine and a DC/cluster with some hosts
- I configure a new DC/cluster/host connected to the same engine
- when I add the 4.1 repo for the host and install the host from the engine,
the packages picked up are indeed 4.1.1, while my engine is still 4.1

Does this constitute a problem?
If so, how do I install a 4.1 (not 4.1.1) host if my engine is still on 4.1
and I have no intention of upgrading right now?

On this new host I get, for example:

ovirt-release41-4.1.1-1.el7.centos.noarch
vdsm-4.19.10-1.el7.centos.x86_64


while on my current hosts I have

ovirt-release41-4.1.0-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Self-Hosted install on fresh centos 7

2017-03-24 Thread Cory Taylor
I am having some difficulty setting up a self-hosted engine on a new
install of CentOS 7, and would appreciate any help.

upon hosted-engine --deploy, I get this:

[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 1] Operation
not permitted:
'/var/run/vdsm/storage/39015a62-0f8f-4b73-998e-5a4923b060f0/9d1ebe53-9997-46fe-a39f-aff5768eae59'


the relevant error in vdsm.log:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878,
in _run
return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3145, in prepareImage
raise se.VolumeDoesNotExist(leafUUID)
VolumeDoesNotExist: Volume does not exist:
(u'b5b2412a-f825-43f2-b923-191216117d25',)
2017-03-24 11:19:17,572-0400 INFO  (jsonrpc/0) [storage.TaskManager.Task]
(Task='82952b56-0894-4a25-b14c-1f277d20d30a') aborting: Task is aborted:
'Volume does not exist' - code 201 (task:1176)
2017-03-24 11:19:17,573-0400 ERROR (jsonrpc/0) [storage.Dispatcher]
{'status': {'message': "Volume does not exist:
(u'b5b2412a-f825-43f2-b923-191216117d25',)", 'code': 201}} (dispatcher:78)
2017-03-24 11:19:17,573-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Image.prepare failed (error 201) in 0.01 seconds (__init__:552)
2017-03-24 11:19:17,583-0400 INFO  (jsonrpc/2) [dispatcher] Run and
protect: createVolume(sdUUID=u'39015a62-0f8f-4b73-998e-5a4923b060f0',
spUUID=u'53edef91-404b-45d6-8180-48c59def4e01',
imgUUID=u'bff0948c-0916-49f8-9026-c23a571b8abe', size=u'1048576',
volFormat=5, preallocate=1, diskType=2,
volUUID=u'b5b2412a-f825-43f2-b923-191216117d25',
desc=u'hosted-engine.lockspace',
srcImgUUID=u'----',
srcVolUUID=u'----', initialSize=None)
(logUtils:51)


logs:

https://gist.github.com/anonymous/542443c68e5c9ebef9225ec1c358d627
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Cloud init and vms

2017-03-24 Thread Endre Karlson
Yeah, I tried that with Ubuntu, but it seems to just fail when it starts
the VM because it tries to contact the metadata API instead of using the
CD-ROM source.

On 23 Mar 2017 4:26 p.m., "Artyom Lukianov"  wrote:

> Just be sure that the cloud-init service is enabled before you create the
> template, otherwise it will fail to initialize a VM.
> Best Regards
>
> On Thu, Mar 23, 2017 at 1:06 PM, Endre Karlson 
> wrote:
>
>> Hi, is there any prerequisite setup needed on an Ubuntu VM that is turned into
>> a template, other than installing the cloud-init packages and sealing the
>> template?
>>
>> Endre
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI Discovery cannot detetect LUN

2017-03-24 Thread Lukáš Kaplan
Hello all,

do you have any experience with troubleshooting the addition of an iSCSI
domain to oVirt 4.1.1?

I am facing this issue now:

1) I have successfully installed an oVirt 4.1.1 environment with a self-hosted
engine, 3 nodes and 3 storage domains (an iSCSI master domain, iSCSI for the
hosted engine and an NFS ISO domain). Everything is working now.

2) But when I want to add a new iSCSI domain, I can discover it and I can
log in, but I can't see any LUN on that storage. (I had the same problem in
oVirt 4.1.0, so I upgraded to 4.1.1.)

3) Then I tried to add this storage to another oVirt environment (oVirt
3.6) and there is no problem there: I can see the LUN on that storage and I
can connect it to oVirt.

I tried to examine vdsm.log, but it is very detailed and unreadable for me
:-/

Thank you in advance, have a nice day,
--
Lukas Kaplan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Lukáš Kaplan
Hello Nelson,

I did the same thing today too and it was successful, but I used different
steps. Check these:
https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/

Hope it helps you.

Have a nice day,

--
Lukas Kaplan


2017-03-24 15:11 GMT+01:00 Nelson Lameiras :

> Hello,
>
> When upgrading my test setup from 4.0 to 4.1, my engine VM lost its
> console (from SPICE to None in the GUI)
>
> My test setup :
> 2 manually built hosts using centos 7.3, ovirt 4.1
> 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible
> with SPICE console via GUI
>
> I updated ovirt-engine from 4.1.0 to 4.1.1 by doing the following on the engine:
> - yum update
> - engine-setup
> - reboot engine
>
> When accessing the 4.1.1 GUI, Graphics is set to "None" on the "Virtual Machines"
> page, with the console button greyed out (all other VMs have Graphics
> set to the same value as before).
> I tried to edit the engine VM settings, and the console options are the same
> as before (SPICE, QXL).
>
> I'm hoping this is not a new feature, since if we lose the network on the
> engine, the console is the only way to debug...
>
> Is this a bug?
>
> ps. I was able to reproduce this bug 2 times
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self-Hosted install on fresh centos 7

2017-03-24 Thread Simone Tiraboschi
On Fri, Mar 24, 2017 at 4:25 PM, Cory Taylor  wrote:

> I am having some difficulty setting up a self hosted engine on a new
> install of centos 7, and would appreciate any help.
>
> upon hosted-engine --deploy, I get this:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 1]
> Operation not permitted: '/var/run/vdsm/storage/39015a62-0f8f-4b73-998e-
> 5a4923b060f0/9d1ebe53-9997-46fe-a39f-aff5768eae59'
>

Could you please try directly mounting
svr-fs-01.mact.co:/exports/ovirt_storage/engine_data
and ensure you can successfully write there as vdsm:kvm and change a file
ownership to vdsm:kvm or 36:36?

>
>
> the relevant error in vdsm.log:
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878,
> in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
> wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3145, in prepareImage
> raise se.VolumeDoesNotExist(leafUUID)
> VolumeDoesNotExist: Volume does not exist: (u'b5b2412a-f825-43f2-b923-
> 191216117d25',)
> 2017-03-24 11:19:17,572-0400 INFO  (jsonrpc/0) [storage.TaskManager.Task]
> (Task='82952b56-0894-4a25-b14c-1f277d20d30a') aborting: Task is aborted:
> 'Volume does not exist' - code 201 (task:1176)
> 2017-03-24 11:19:17,573-0400 ERROR (jsonrpc/0) [storage.Dispatcher]
> {'status': {'message': "Volume does not exist: 
> (u'b5b2412a-f825-43f2-b923-191216117d25',)",
> 'code': 201}} (dispatcher:78)
>

This is not an issue: hosted-engine-setup is just checking here to be sure
that the SD UUID is free.


> 2017-03-24 11:19:17,573-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Image.prepare failed (error 201) in 0.01 seconds (__init__:552)
> 2017-03-24 11:19:17,583-0400 INFO  (jsonrpc/2) [dispatcher] Run and
> protect: createVolume(sdUUID=u'39015a62-0f8f-4b73-998e-5a4923b060f0',
> spUUID=u'53edef91-404b-45d6-8180-48c59def4e01',
> imgUUID=u'bff0948c-0916-49f8-9026-c23a571b8abe', size=u'1048576',
> volFormat=5, preallocate=1, diskType=2, 
> volUUID=u'b5b2412a-f825-43f2-b923-191216117d25',
> desc=u'hosted-engine.lockspace', 
> srcImgUUID=u'----',
> srcVolUUID=u'----', initialSize=None)
> (logUtils:51)
>

and it's creating the volume here


>
>
> logs:
>
> https://gist.github.com/anonymous/542443c68e5c9ebef9225ec1c358d627
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Summer Of Code

2017-03-24 Thread Tomislav Kulušić
Hello, my name is Tomislav Kulusic and I come from Split, Croatia. Right 
now I am a second-year student at Rochester Institute of Technology Croatia, 
enrolled in the Web and Mobile Computing major. I would like to know more about 
your company and to see how to properly put together a proposal for a Summer of Code 
project. Thanks in advance, and I'm hoping to get a response from you soon.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self-Hosted install on fresh centos 7

2017-03-24 Thread Cory Taylor
Yes, already checked permissions. I start the installation with an empty
share, and after failure, a storage domain exists - so it has write
permission.

On Fri, Mar 24, 2017 at 1:11 PM Simone Tiraboschi 
wrote:

> On Fri, Mar 24, 2017 at 4:25 PM, Cory Taylor  wrote:
>
> I am having some difficulty setting up a self hosted engine on a new
> install of centos 7, and would appreciate any help.
>
> upon hosted-engine --deploy, I get this:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 1]
> Operation not permitted:
> '/var/run/vdsm/storage/39015a62-0f8f-4b73-998e-5a4923b060f0/9d1ebe53-9997-46fe-a39f-aff5768eae59'
>
>
> Could you please try directly mounting 
> svr-fs-01.mact.co:/exports/ovirt_storage/engine_data
> and ensure you can successfully write there as vdsm:kvm and change a file
> ownership to vdsm:kvm or 36:36?
>
>
>
> the relevant error in vdsm.log:
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878,
> in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
> wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3145, in prepareImage
> raise se.VolumeDoesNotExist(leafUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'b5b2412a-f825-43f2-b923-191216117d25',)
> 2017-03-24 11:19:17,572-0400 INFO  (jsonrpc/0) [storage.TaskManager.Task]
> (Task='82952b56-0894-4a25-b14c-1f277d20d30a') aborting: Task is aborted:
> 'Volume does not exist' - code 201 (task:1176)
> 2017-03-24 11:19:17,573-0400 ERROR (jsonrpc/0) [storage.Dispatcher]
> {'status': {'message': "Volume does not exist:
> (u'b5b2412a-f825-43f2-b923-191216117d25',)", 'code': 201}} (dispatcher:78)
>
>
> This is not an issue: hosted-engine-setup is just checking here to be sure
> that the SD UUID is free.
>
>
> 2017-03-24 11:19:17,573-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Image.prepare failed (error 201) in 0.01 seconds (__init__:552)
> 2017-03-24 11:19:17,583-0400 INFO  (jsonrpc/2) [dispatcher] Run and
> protect: createVolume(sdUUID=u'39015a62-0f8f-4b73-998e-5a4923b060f0',
> spUUID=u'53edef91-404b-45d6-8180-48c59def4e01',
> imgUUID=u'bff0948c-0916-49f8-9026-c23a571b8abe', size=u'1048576',
> volFormat=5, preallocate=1, diskType=2,
> volUUID=u'b5b2412a-f825-43f2-b923-191216117d25',
> desc=u'hosted-engine.lockspace',
> srcImgUUID=u'----',
> srcVolUUID=u'----', initialSize=None)
> (logUtils:51)
>
>
> and it's creating the volume here
>
>
>
>
> logs:
>
> https://gist.github.com/anonymous/542443c68e5c9ebef9225ec1c358d627
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Self-Hosted install on fresh centos 7

2017-03-24 Thread Cory Taylor
Also, to save time: I have plenty of free space (100GB+) on both / and /home.

On Fri, Mar 24, 2017 at 1:22 PM Cory Taylor  wrote:

> Yes, already checked permissions. I start the installation with an empty
> share, and after failure, a storage domain exists - so it has write
> permission.
>
> On Fri, Mar 24, 2017 at 1:11 PM Simone Tiraboschi 
> wrote:
>
> On Fri, Mar 24, 2017 at 4:25 PM, Cory Taylor  wrote:
>
> I am having some difficulty setting up a self hosted engine on a new
> install of centos 7, and would appreciate any help.
>
> upon hosted-engine --deploy, I get this:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 1]
> Operation not permitted:
> '/var/run/vdsm/storage/39015a62-0f8f-4b73-998e-5a4923b060f0/9d1ebe53-9997-46fe-a39f-aff5768eae59'
>
>
> Could you please try directly mounting 
> svr-fs-01.mact.co:/exports/ovirt_storage/engine_data
> and ensure you can successfully write there as vdsm:kvm and change a file
> ownership to vdsm:kvm or 36:36?
>
>
>
> the relevant error in vdsm.log:
>
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878,
> in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in
> wrapper
> res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3145, in prepareImage
> raise se.VolumeDoesNotExist(leafUUID)
> VolumeDoesNotExist: Volume does not exist:
> (u'b5b2412a-f825-43f2-b923-191216117d25',)
> 2017-03-24 11:19:17,572-0400 INFO  (jsonrpc/0) [storage.TaskManager.Task]
> (Task='82952b56-0894-4a25-b14c-1f277d20d30a') aborting: Task is aborted:
> 'Volume does not exist' - code 201 (task:1176)
> 2017-03-24 11:19:17,573-0400 ERROR (jsonrpc/0) [storage.Dispatcher]
> {'status': {'message': "Volume does not exist:
> (u'b5b2412a-f825-43f2-b923-191216117d25',)", 'code': 201}} (dispatcher:78)
>
>
> This is not an issue: hosted-engine-setup is just checking here to be sure
> that the SD UUID is free.
>
>
> 2017-03-24 11:19:17,573-0400 INFO  (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
> call Image.prepare failed (error 201) in 0.01 seconds (__init__:552)
> 2017-03-24 11:19:17,583-0400 INFO  (jsonrpc/2) [dispatcher] Run and
> protect: createVolume(sdUUID=u'39015a62-0f8f-4b73-998e-5a4923b060f0',
> spUUID=u'53edef91-404b-45d6-8180-48c59def4e01',
> imgUUID=u'bff0948c-0916-49f8-9026-c23a571b8abe', size=u'1048576',
> volFormat=5, preallocate=1, diskType=2,
> volUUID=u'b5b2412a-f825-43f2-b923-191216117d25',
> desc=u'hosted-engine.lockspace',
> srcImgUUID=u'----',
> srcVolUUID=u'----', initialSize=None)
> (logUtils:51)
>
>
> and it's creating the volume here
>
>
>
>
> logs:
>
> https://gist.github.com/anonymous/542443c68e5c9ebef9225ec1c358d627
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI Discovery cannot detetect LUN

2017-03-24 Thread Yaniv Kaul
On Fri, Mar 24, 2017 at 1:34 PM, Lukáš Kaplan  wrote:

> Hello all,
>
> do you have any experience with troubleshooting the addition of an iSCSI
> domain to oVirt 4.1.1?
>
> I am facing this issue now:
>
> 1) I have successfully installed an oVirt 4.1.1 environment with a self-hosted
> engine, 3 nodes and 3 storage domains (an iSCSI master domain, iSCSI for the
> hosted engine and an NFS ISO domain). Everything is working now.
>
> 2) But when I want to add a new iSCSI domain, I can discover it and I can
> log in, but I can't see any LUN on that storage. (I had the same problem in
> oVirt 4.1.0, so I upgraded to 4.1.1.)
>

Are you sure mappings are correct?
Can you ensure the LUN is empty?
Y.


>
> 3) Then I tried to add this storage to another oVirt environment (oVirt
> 3.6) and there is no problem there: I can see the LUN on that storage and I
> can connect it to oVirt.
>
> I tried to examine vdsm.log, but it is very detailed and unreadable for me
> :-/
>
> Thank you in advance, have a nice day,
> --
> Lukas Kaplan
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Libvirtd Segfault

2017-03-24 Thread Yaniv Kaul
On Fri, Mar 24, 2017 at 9:28 PM, Aaron West  wrote:

> Hi Guys,
>
> I've been having some issues this week with my oVirt 4.1 install on CentOS
> 7...
>
> It was previously working great, but I made a couple of alterations (which
> I've mostly backed out now) and ran a yum update followed by a reboot;
> however, after it came back up I couldn't start any virtual machines...
>
> The oVirt web interface reports :
>
> VDSM Local command SpmStatusVDS failed: Heartbeat exceeded
> VDSM Local command GetStatsVDS failed: Heartbeat exceeded
> VM My_Workstation is down with error. Exit message: Failed to find the
> libvirt domain.
>
> Plus some other errors, so I checked my "dmesg" output, and libvirtd seems
> to segfault:
>
> [70835.914607] libvirtd[10091]: segfault at 0 ip 7f681c7e7721 sp
> 7f680c7d2740 error 4 in libvirt.so.0.2000.0[7f681c73a000+353000]
>

Ouch...

I would suggest enabling libvirt debug logs and perhaps enabling a core
dump if it's not enabled?
Y.


>
> Next I checked the logs but I couldn't find anything that seemed relevant to
> me; I mean, I see lots of oVirt complaining, but that makes sense if libvirtd
> segfaults, right?
>
> So the last-ditch effort was to try strace and hope that, with my limited
> knowledge, I would spot something useful, but I didn't, so I've attached the
> strace output to this email in the hope someone else might.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Nelson Lameiras
OK, I'm guessing that your situation is different from mine. 

I'm hoping someone can confirm whether upgrading the oVirt engine from 4.0 to 4.1 
does (or does not) indeed disable the SPICE console on the engine VM. 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Lukáš Kaplan"  
To: "Nelson Lameiras"  
Cc: "users"  
Sent: Friday, March 24, 2017 4:14:37 PM 
Subject: Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine 
console available 

Hello Nelson, 
I have the hosted engine graphics console set to VNC. I am using SSH directly to the 
engine. I can't say if VNC is working or not, because I have no client for it... 
(Tested on Debian Jessie with virt-viewer 1.0 and 5.0 - does not work) 

-- 
Lukas Kaplan 



2017-03-24 15:32 GMT+01:00 Nelson Lameiras < nelson.lamei...@lyra-network.com > 
: 



Hello Lukas, 

Thanks for your feedback. 
I did something very similar to the procedures you sent me. 

Can you confirm that your HostedEngine VM still has its SPICE console 
available and working? 
(Otherwise my update went OK.) 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 

nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Lukáš Kaplan" < lkap...@dragon.cz > 
To: "Nelson Lameiras" < nelson.lamei...@lyra-network.com > 
Cc: "users" < users@ovirt.org > 
Sent: Friday, March 24, 2017 3:22:44 PM 
Subject: Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine 
console available 

Hello Nelson, 
I did the same thing today too and it was successful, but I used different steps. 
Check these: 
https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine 
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
 

Hope it helps you. 

Have a nice day, 

-- 
Lukas Kaplan 


2017-03-24 15:11 GMT+01:00 Nelson Lameiras < nelson.lamei...@lyra-network.com > 
: 


Hello, 

When upgrading my test setup from 4.0 to 4.1, my engine VM lost its console 
(from SPICE to None in the GUI) 

My test setup : 
2 manually built hosts using centos 7.3, ovirt 4.1 
1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible with 
SPICE console via GUI 

I updated ovirt-engine from 4.1.0 to 4.1.1 by doing the following on the engine: 
- yum update 
- engine-setup 
- reboot engine 

When accessing the 4.1.1 GUI, Graphics is set to "None" on the "Virtual Machines" 
page, with the console button greyed out (all other VMs have Graphics set to 
the same value as before). 
I tried to edit the engine VM settings, and the console options are the same as 
before (SPICE, QXL). 

I'm hoping this is not a new feature, since if we lose the network on the engine, 
the console is the only way to debug... 

Is this a bug? 

ps. I was able to reproduce this bug 2 times 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 

nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] prevent ovirt from managing a particular vlan setting on an interface

2017-03-24 Thread Gianluca Cecchi
Is it possible?
So that for example oVirt uses my eth3 interface as part of a bond that it
uses, but doesn't change/remove my already in place ifcfg-eth3.100
configuration file?

Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Upgrade guide for oVirt 4.1.x?

2017-03-24 Thread Davide Ferrari
Hello

following the links from the official release notes, I always end up at
this page:

https://www.ovirt.org/documentation/upgrade-guide/upgrade-guide/
which only covers upgrading from 3.6 to 4.0.
Is it still valid for 4.0 -> 4.1 upgrades?

Thanks


-- 
Davide Ferrari
Senior Systems Engineer
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Michal Skrivanek

> On 24 Mar 2017, at 15:15, Davide Ferrari  wrote:
> 
> Mmmh this is all I got from libvirt log on receiver host:
> 
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name 
> guest=druid-co01,debug-threads=on -S -object 
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-druid-co01/master-key.aes
>  -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m 
> size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 
> 4,maxcpus=64,sockets=16,cores=4,threads=1 -numa 
> node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8 
> -smbios 'type=1,manufacturer=oVirt,product=oVirt 
> Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
>  -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
>  -mon chardev=charmonitor,id=monitor,mode=control -rtc 
> base=2017-03-24T10:38:03,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on 
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device 
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive 
> if=none,id=drive-ide0-1-0,readonly=on -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
>  -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>  -drive 
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
>  -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
>  -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
>  -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>  -chardev 
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>  -chardev spicevmc,id=charchannel2,name=vdagent -device 
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>  -spice 
> tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>  -device 
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
>  -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 
> -msg timestamp=on
> Domain id=5 is tainted: hook-script
> 2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 
> 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 
> 55 56 57 58 59 60 61 62 63
> 2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2017-03-24 10:41:20.982+: shutting down
> 2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed: Input/output 
> error
> 
> The donating host doesn't say a thing about this VM.
> There's an "input/output error" but I can't see what it is related to…

Most likely it relates to the migration stream: either the TCP connection was cut short, 
or it is an internal bug.
What are the versions of qemu on both ends? Which host OS?

Thanks,
michal
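
For reference, something like this on both hosts should show the relevant versions
(the package names are the usual CentOS/oVirt ones; adjust if yours differ):

rpm -qa | grep -E 'qemu-kvm|libvirt|vdsm'
cat /etc/redhat-release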

> 
> 
> 
> 
> 
> 2017-03-24 13:38 GMT+01:00 Francesco Romani  >:
> 
> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> > And this is the vdsm log from vmhost04:
> >
> > Thread-6320717::INFO::2017-03-24
> > 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> > seconds elapsed, 98% of data processed, total data: 16456MB, processed
> > data: 9842MB, 

[ovirt-users] appliance upgrade 4.0 to 4.1

2017-03-24 Thread Nelson Lameiras
Hello, 

I'm trying to upgrade an appliance-based oVirt install (2 nodes with CentOS 
7.3, oVirt 4.0) using "hosted-engine --upgrade-appliance" on one host. 

After multiples tries, I always get this error at the end : 

[ INFO ] Running engine-setup on the appliance 
|- Preparing to restore: 
|- - Unpacking file '/root/engine_backup.tar.gz' 
|- FATAL: Backup was created by version '4.0' and can not be restored using the 
installed version 4.1 
|- HE_APPLIANCE_ENGINE_RESTORE_FAIL 
[ ERROR ] Engine backup restore failed on the appliance 

Is this normal? 
Is this process not yet compatible with the oVirt 4.1 appliance? 
What is the "official" way to upgrade oVirt from 4.0 to 4.1? 

I tried doing a "yum update && engine-setup && reboot" on the engine, and indeed it 
works, but there is no rollback possible, so it seems a little dangerous (?) 
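
In case it is useful, taking a backup right before the in-place upgrade at least
gives a manual way back (the file paths here are only examples):

# on the engine VM, before yum update / engine-setup
engine-backup --mode=backup --scope=all \
  --file=/root/engine-backup-pre-4.1.tar.gz --log=/root/engine-backup-pre-4.1.log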

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Nelson Lameiras
Hello Lukas, 

Thanks for your feedback. 
I did something very similar to the procedures you sent me. 

Can you confirm that your HostedEngine VM still has its SPICE console 
available and working? 
(Otherwise my update went OK.) 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 



From: "Lukáš Kaplan"  
To: "Nelson Lameiras"  
Cc: "users"  
Sent: Friday, March 24, 2017 3:22:44 PM 
Subject: Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine 
console available 

Hello Nelson, 
I did the same thing today too and it was successful, but I used different steps. 
Check this: 
https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine 
http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/
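
(For what it is worth, for a hosted-engine setup the minor-release procedure from
those pages boils down to roughly the following - a sketch, not the authoritative
steps:)

hosted-engine --set-maintenance --mode=global   # on one host
# then, on the engine VM:
yum update ovirt\*setup\*
engine-setup
yum update
# back on a host, once the engine VM is up again:
hosted-engine --set-maintenance --mode=none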
 

Hope it helps you. 

Have a nice day, 

-- 
Lukas Kaplan 


2017-03-24 15:11 GMT+01:00 Nelson Lameiras < nelson.lamei...@lyra-network.com > 
: 



Hello, 

When upgrading my test setup from 4.0 to 4.1, my engine VM lost its console 
(from SPICE to None in the GUI) 

My test setup : 
2 manually built hosts using centos 7.3, ovirt 4.1 
1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible with 
SPICE console via GUI 

I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine : 
- yum update 
- engine-setup 
- reboot engine 

When accessing 4.1.1 GUI, Graphics is set to "None" on "Virtual Machines" page, 
with "console button" greyed out (all other VMs have the same Graphics set to 
the same value as before) 
I tried to edit the engine VM settings, and the console options are the same as before 
(SPICE, QXL). 

I'm hoping this is not a new feature, since if we lose the network on the engine, the 
console is the only way to debug... 

Is this a bug? 

ps. I was able to reproduce this bug 2 times 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 

nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1

To be fair, I'm trying to migrate that VM away so I can install updates on
the source host.


2017-03-24 15:18 GMT+01:00 Michal Skrivanek :

>
> On 24 Mar 2017, at 15:15, Davide Ferrari  wrote:
>
> Mmmh, this is all I got from the libvirt log on the receiving host:
>
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name 
> guest=druid-co01,debug-threads=on
> -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-5-druid-co01/master-key.aes -machine
> pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
> size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
> 4,maxcpus=64,sockets=16,cores=4,threads=1 -numa
> node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8
> -smbios 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-
> 0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
> -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/
> var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-03-24T10:38:03,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard
> -no-hpet -no-shutdown -boot strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-
> 424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-
> 2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=
> qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-
> 4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/
> 0001-0001-0001-0001-03e3/ba2bd397-9222-
> 424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-
> 3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=
> qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-
> 45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
> -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev
> spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-
> serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/
> libvirt-spice,tls-channel=default,tls-channel=main,tls-
> channel=display,tls-channel=inputs,tls-channel=cursor,tls-
> channel=playback,tls-channel=record,tls-channel=smartcard,
> tls-channel=usbredir,seamless-migration=on -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,
> vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -incoming defer -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
> Domain id=5 is tainted: hook-script
> 2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any
> NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
> 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
> 52 53 54 55 56 57 58 59 60 61 62 63
> 2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config
> 2017-03-24 10:41:20.982+: shutting down
> 2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed:
> Input/output error
>
> The donating host doesn't say a thing about this VM.
> There's an "input/output error" but I can't see what it is related to…
>
>
> Most likely it relates to the migration stream: either the TCP connection was cut
> short, or it is an internal bug.
> What are the versions of qemu on both ends? Which host OS?
>
> Thanks,
> michal
>
>
>
>
>
>
> 2017-03-24 13:38 GMT+01:00 Francesco Romani :
>
>>
>> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
>> > And this is the vdsm log from vmhost04:
>> >
>> > 

Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Lukáš Kaplan
Hello Nelson,

My hosted engine's graphics console is set to VNC. I am using ssh directly to the
engine. I can't say whether VNC is working or not, because I have no client for
it... (tested on Debian Jessie with virt-viewer 1.0 and 5.0 - it does not work)
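
(If someone does want to try it, as far as I know the usual way is to set a
temporary console password on the host that runs the engine VM and point a
VNC-capable client at it - the port below is just the common default:)

hosted-engine --add-console-password
remote-viewer vnc://<host-running-the-engine-vm>:5900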

--
Lukas Kaplan



2017-03-24 15:32 GMT+01:00 Nelson Lameiras :

> Hello Lukas,
>
> Thanks for your feedback.
> I did something very similar to the procedures you sent me.
>
> Can you confirm that your HostedEngine VM still has its SPICE console
> available and working?
> (Otherwise my update went OK.)
>
> cordialement, regards,
>
> 
> Nelson LAMEIRAS
> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
> Tel: +33 5 32 09 09 70
> nelson.lamei...@lyra-network.com
> www.lyra-network.com | www.payzen.eu 
> 
> 
> 
> 
> --
> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>
>
> --
> *From: *"Lukáš Kaplan" 
> *To: *"Nelson Lameiras" 
> *Cc: *"users" 
> *Sent: *Friday, March 24, 2017 3:22:44 PM
> *Subject: *Re: [ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more
> engine console available
>
> Hello Nelson,
> I did the same thing today too and it was successful, but I used different
> steps. Check this:
> https://www.ovirt.org/documentation/how-to/hosted-
> engine/#upgrade-hosted-engine
> http://www.ovirt.org/documentation/upgrade-guide/
> chap-Updates_between_Minor_Releases/
>
> Hope it helps you.
>
> Have a nice day,
>
> --
> Lukas Kaplan
>
>
> 2017-03-24 15:11 GMT+01:00 Nelson Lameiras  com>:
>
>> Hello,
>>
>> When upgrading my test setup from 4.0 to 4.1, my engine VM lost its
>> console (from SPICE to None in the GUI)
>>
>> My test setup :
>> 2 manually built hosts using centos 7.3, ovirt 4.1
>> 1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible
>> with SPICE console via GUI
>>
>> I updated ovirt-engine from 4.1.0 to 4.1.1 by doing on engine :
>> - yum update
>> - engine-setup
>> - reboot engine
>>
>> When accessing 4.1.1 GUI, Graphics is set to "None" on "Virtual Machines"
>> page, with "console button" greyed out (all other VMs have the same
>> Graphics set to the same value as before)
>> I tried to edit the engine VM settings, and the console options are the same as
>> before (SPICE, QXL).
>>
>> I'm hoping this is not a new feature, since if we lose the network on the
>> engine, the console is the only way to debug...
>>
>> Is this a bug?
>>
>> ps. I was able to reproduce this bug 2 times
>>
>> cordialement, regards,
>>
>> 
>> Nelson LAMEIRAS
>> Ingénieur Systèmes et Réseaux / Systems and Networks engineer
>> Tel: +33 5 32 09 09 70
>> nelson.lamei...@lyra-network.com
>> www.lyra-network.com | www.payzen.eu 
>> 
>> 
>> 
>> 
>> --
>> Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE
>>
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Getting error when i try to assign logical networks to interfaces

2017-03-24 Thread Dan Kenigsberg
On Fri, Mar 24, 2017 at 11:36 AM, martin chamambo  wrote:
> @ Dan i am not using FCOE , i am trying to set up logical networks , my
> oVirt Engine Version: 4.0.6.3-1.el7.centos and my vdsm versions is
> vdsm-4.18.4.1-0.el7.centos

And what is the version of your vdsm-hook-fcoe?

rpm -qa |grep vdsm
rpm -qf /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe

I assume that the 50_fcoe script is out of date; it looks like a 3.6-era
piece of code.

You can just remove 50_fcoe to squelch the problem for a while, but
I'd like to understand why it is there.
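
For example (just a sketch, adjust as needed), this would move the stale hook out
of the way without losing it, or drop the package entirely if FCoE is not needed
on that host:

mv /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe /root/50_fcoe.disabled
# or, if FCoE is not needed at all:
yum remove vdsm-hook-fcoe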

>
> On Thu, Mar 23, 2017 at 11:26 PM, Dan Kenigsberg  wrote:
>>
>> On Thu, Mar 23, 2017 at 9:25 AM, martin chamambo 
>> wrote:
>> > I havent set up any hooks and when i try to assign logical networks to
>> > an
>> > already existing interface on the host ,it gives me this error
>> >
>> > Hook error: Hook Error: ('Traceback (most recent call last):\n  File
>> > "/usr/libexec/vdsm/hooks/before_network_setup/50_fcoe", line 18, in
>> > \nfrom vdsm.netconfpersistence import
>> > RunningConfig\nImportError: No module named netconfpersistence\n',)
>> >
>>
>> vdsm-hook-fcoe is installed by default on ovirt-node.
>> Which version of vdsm (and vdsm-hook-fcoe) are you using?
>> This Traceback smells like a mismatch between the two.
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] appliance upgrade 4.0 to 4.1

2017-03-24 Thread Nelson Lameiras
(I should add that I'm trying to upgrade "oVirt Engine Appliance 4.0" to "oVirt 
Engine Appliance 4.1") 

Hello, 

I'm trying to upgrade an appliance-based oVirt install (2 nodes with CentOS 
7.3, oVirt 4.0) using "hosted-engine --upgrade-appliance" on one host. 

After multiples tries, I always get this error at the end : 

[ INFO ] Running engine-setup on the appliance 
|- Preparing to restore: 
|- - Unpacking file '/root/engine_backup.tar.gz' 
|- FATAL: Backup was created by version '4.0' and can not be restored using the 
installed version 4.1 
|- HE_APPLIANCE_ENGINE_RESTORE_FAIL 
[ ERROR ] Engine backup restore failed on the appliance 

Is this normal? 
Is this process not yet compatible with the oVirt 4.1 appliance? 
What is the "official" way to upgrade oVirt from 4.0 to 4.1? 

I tried doing a "yum update && engine-setup && reboot" on the engine, and indeed it 
works, but there is no rollback possible, so it seems a little dangerous (?) 



cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 


___ 
Users mailing list 
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


[ovirt-users] engine upgrade 4.1.0 => 4.1.1, no more engine console available

2017-03-24 Thread Nelson Lameiras
Hello, 

When upgrading my test setup from 4.0 to 4.1, my engine VM lost its console 
(from SPICE to None in the GUI) 

My test setup : 
2 manually built hosts using centos 7.3, ovirt 4.1 
1 manually built hosted engine centos 7.3, oVirt 4.1.0.4-el7, accessible with 
SPICE console via GUI 

I updated ovirt-engine from 4.1.0 to 4.1.1 by doing the following on the engine: 
- yum update 
- engine-setup 
- reboot engine 

When accessing the 4.1.1 GUI, Graphics is set to "None" on the "Virtual Machines" page, 
with the "console" button greyed out (all other VMs keep the same Graphics value 
as before). 
I tried to edit the engine VM settings, and the console options are the same as before 
(SPICE, QXL). 

I'm hoping this is not a new feature, since if we lose the network on the engine, the 
console is the only way to debug... 

Is this a bug? 

ps. I was able to reproduce this bug 2 times 

cordialement, regards, 


Nelson LAMEIRAS 
Ingénieur Systèmes et Réseaux / Systems and Networks engineer 
Tel: +33 5 32 09 09 70 
nelson.lamei...@lyra-network.com 

www.lyra-network.com | www.payzen.eu 





Lyra Network, 109 rue de l'innovation, 31670 Labège, FRANCE 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users