Re: [Users] Migration Failed

2014-02-27 Thread Meital Bourvine
Hi Koen, 

Can you please attach the relevant vdsm logs? 

- Original Message -

 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Thursday, February 27, 2014 9:38:46 AM
 Subject: [Users] Migration Failed

 Dear all,

 I added a new host to our oVirt setup. Everything went well, except that in the
 beginning there was a problem with the firmware of the Fibre Channel card, but
 that is solved (maybe relevant to the issue coming up ;-) ); the host is green and
 up now. But when I tried to migrate a machine for testing purposes to see if
 everything was OK, I get the following error in the engine.log and the migration fails:

 2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand]
 (pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal:
 false. Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM
 2014-02-27 08:33:08,362 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
 [f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId =
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE,
 tunnelMigration=false), log id: 50cd7284
 2014-02-27 08:33:08,371 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered
 (vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 ,
 method=online
 2014-02-27 08:33:08,405 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName =
 soyuz, HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost=
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d,
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE,
 tunnelMigration=false), log id: 20806b79
 2014-02-27 08:33:08,441 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id:
 20806b79
 2014-02-27 08:33:08,451 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49)
 [f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284
 2014-02-27 08:33:08,491 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID:
 c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1,
 Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination:
 buran, User: admin@internal).
 2014-02-27 08:33:20,036 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk
 3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom --> Up
 2014-02-27 08:33:20,042 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) Adding VM
 3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list
 2014-02-27 08:33:20,051 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (DefaultQuartzScheduler_Worker-82) Rerun vm
 3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz
 2014-02-27 08:33:20,107 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId =
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a,
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46
 2014-02-27 08:33:20,124 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Failed in MigrateStatusVDS method
 2014-02-27 08:33:20,130 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Error code noConPeer and error message
 VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error =
 Could not connect to peer VDS
 2014-02-27 08:33:20,136 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return
 value
 StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=10, mMessage=Could
 not connect to peer VDS]]
 2014-02-27 08:33:20,139 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) HostName = soyuz
 2014-02-27 08:33:20,145 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) Command MigrateStatusVDS execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException: Failed to
 MigrateStatusVDS, error = Could not connect to peer VDS
 2014-02-27 08:33:20,148 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-6-thread-50) FINISH, MigrateStatusVDSCommand, log id: 75ac0a46
 

Re: [Users] Migration Failed

2014-02-27 Thread Gadi Ickowicz
Hi,

Unfortunately it seems the vdsm logs cycled - these vdsm logs do not match the 
times for the engine log snippet you pasted - they start at around 10:00 AM and 
the engine points to 8:33...

Gadi Ickowicz
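
A minimal way to pull the matching time window out of the rotated vdsm logs on
the source host, assuming the usual /var/log/vdsm location and xz-compressed
rotated files (exact file names depend on the host's logrotate setup):

  ls -lrt /var/log/vdsm/
  # pick the rotated file that covers ~08:33 and search it, for example:
  xzgrep -iE "08:3.*(error|migration)" /var/log/vdsm/vdsm.log.1.xz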

- Original Message -
From: Koen Vanoppen vanoppen.k...@gmail.com
To: Meital Bourvine mbour...@redhat.com, users@ovirt.org
Sent: Thursday, February 27, 2014 11:04:11 AM
Subject: Re: [Users] Migration Failed

In attachment... 
Thanx! 


2014-02-27 9:21 GMT+01:00 Meital Bourvine  mbour...@redhat.com  : 



Hi Koen, 

Can you please attach the relevant vdsm logs? 





From: Koen Vanoppen  vanoppen.k...@gmail.com  
To: users@ovirt.org 
Sent: Thursday, February 27, 2014 9:38:46 AM 
Subject: [Users] Migration Failed 


Dear all, 

I added a new host to our oVirt setup. Everything went well, except that in the beginning 
there was a problem with the firmware of the Fibre Channel card, but that is solved 
(maybe relevant to the issue coming up ;-) ); the host is green and up now. But when 
I tried to migrate a machine for testing purposes to see if everything was OK, I 
get the following error in the engine.log and the migration fails: 

2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] 
(pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal: false. 
Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM 
2014-02-27 08:33:08,362 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
[f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId = 
6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= soyuz.brusselsairport.aero 
, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, dstHost= 
buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
tunnelMigration=false), log id: 50cd7284 
2014-02-27 08:33:08,371 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered 
(vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 , 
method=online 
2014-02-27 08:33:08,405 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName = soyuz, 
HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= soyuz.brusselsairport.aero 
, dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, dstHost= 
buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
tunnelMigration=false), log id: 20806b79 
2014-02-27 08:33:08,441 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id: 20806b79 
2014-02-27 08:33:08,451 INFO 
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
[f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284 
2014-02-27 08:33:08,491 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID: 
c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1, 
Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination: 
buran, User: admin@internal). 
2014-02-27 08:33:20,036 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk 
3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom --> Up 
2014-02-27 08:33:20,042 INFO 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) Adding VM 
3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list 
2014-02-27 08:33:20,051 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(DefaultQuartzScheduler_Worker-82) Rerun vm 
3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz 
2014-02-27 08:33:20,107 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId = 
6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46 
2014-02-27 08:33:20,124 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Failed in MigrateStatusVDS method 
2014-02-27 08:33:20,130 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Error code noConPeer and error message VDSGenericException: 
VDSErrorException: Failed to MigrateStatusVDS, error = Could not connect to 
peer VDS 
2014-02-27 08:33:20,136 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value 
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=10, mMessage=Could 
not connect to peer VDS]] 
2014-02-27 08:33:20,139 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-6-thread-50

Re: [Users] Migration Failed

2014-02-27 Thread Michal Skrivanek

On Feb 27, 2014, at 10:09 , Gadi Ickowicz gicko...@redhat.com wrote:

 Hi,
 
 Unfortunately it seems the vdsm logs cycled - these vdsm logs do not match 
 the times for the engine log snippet you pasted - they start at around 10:00 
 AM and the engine points to 8:33…

Seeing "error = Could not connect to peer VDS" points me to a possible direct 
network connectivity issue between those two hosts. The source vdsm needs to be 
able to talk to the destination vdsm.

Thanks,
michal

 
 Gadi Ickowicz
 
 - Original Message -
 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: Meital Bourvine mbour...@redhat.com, users@ovirt.org
 Sent: Thursday, February 27, 2014 11:04:11 AM
 Subject: Re: [Users] Migration Failed
 
 In attachment... 
 Thanx! 
 
 
 2014-02-27 9:21 GMT+01:00 Meital Bourvine  mbour...@redhat.com  : 
 
 
 
 Hi Koen, 
 
 Can you please attach the relevant vdsm logs? 
 
 
 
 
 
 From: Koen Vanoppen  vanoppen.k...@gmail.com  
 To: users@ovirt.org 
 Sent: Thursday, February 27, 2014 9:38:46 AM 
 Subject: [Users] Migration Failed 
 
 
 Dear all, 
 
 I added a new host to our oVirt setup. Everything went well, except that in the beginning 
 there was a problem with the firmware of the Fibre Channel card, but that is solved 
 (maybe relevant to the issue coming up ;-) ); the host is green and up now. But 
 when I tried to migrate a machine for testing purposes to see if everything was 
 OK, I get the following error in the engine.log and the migration fails: 
 
 2014-02-27 08:33:08,082 INFO [org.ovirt.engine.core.bll.MigrateVmCommand] 
 (pool-6-thread-49) [f1a68d8] Running command: MigrateVmCommand internal: 
 false. Entities affected : ID: 3444fc9d-0395-4cbb-9a11-28a42802560c Type: VM 
 2014-02-27 08:33:08,362 INFO 
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
 [f1a68d8] START, MigrateVDSCommand(HostName = soyuz, HostId = 
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, 
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
 tunnelMigration=false), log id: 50cd7284 
 2014-02-27 08:33:08,371 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] VdsBroker::migrate::Entered 
 (vm_guid=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstHost= buran.brusselsairport.aero:54321 , 
 method=online 
 2014-02-27 08:33:08,405 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] START, MigrateBrokerVDSCommand(HostName = soyuz, 
 HostId = 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c, srcHost= 
 soyuz.brusselsairport.aero , dstVdsId=6707fa40-753a-4c95-9304-e47198477e4d, 
 dstHost= buran.brusselsairport.aero:54321 , migrationMethod=ONLINE, 
 tunnelMigration=false), log id: 20806b79 
 2014-02-27 08:33:08,441 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
 (pool-6-thread-49) [f1a68d8] FINISH, MigrateBrokerVDSCommand, log id: 
 20806b79 
 2014-02-27 08:33:08,451 INFO 
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-6-thread-49) 
 [f1a68d8] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 50cd7284 
 2014-02-27 08:33:08,491 INFO 
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
 (pool-6-thread-49) [f1a68d8] Correlation ID: f1a68d8, Job ID: 
 c3642418-3f05-41eb-8b1d-07fe04867742, Call Stack: null, Custom Event ID: -1, 
 Message: Migration started (VM: ADW-DevSplunk, Source: soyuz, Destination: 
 buran, User: admin@internal). 
 2014-02-27 08:33:20,036 INFO 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) VM ADW-DevSplunk 
 3444fc9d-0395-4cbb-9a11-28a42802560c moved from MigratingFrom --> Up 
 2014-02-27 08:33:20,042 INFO 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) Adding VM 
 3444fc9d-0395-4cbb-9a11-28a42802560c to re-run list 
 2014-02-27 08:33:20,051 ERROR 
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
 (DefaultQuartzScheduler_Worker-82) Rerun vm 
 3444fc9d-0395-4cbb-9a11-28a42802560c. Called from vds soyuz 
 2014-02-27 08:33:20,107 INFO 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) START, MigrateStatusVDSCommand(HostName = soyuz, HostId = 
 6dfa2f9c-85c6-4fb3-b65f-c84620115a1a, 
 vmId=3444fc9d-0395-4cbb-9a11-28a42802560c), log id: 75ac0a46 
 2014-02-27 08:33:20,124 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) Failed in MigrateStatusVDS method 
 2014-02-27 08:33:20,130 ERROR 
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
 (pool-6-thread-50) Error code noConPeer and error message 
 VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = 
 Could not connect to peer VDS 
 2014-02-27 08:33:20,136 INFO

Re: [Users] Migration Failed

2014-02-27 Thread Dan Kenigsberg
On Thu, Feb 27, 2014 at 10:42:54AM +0100, Koen Vanoppen wrote:
 Sorry...
 I added the correct one now

Still, I fail to find the relevant ::ERROR:: line about migration.
But as Michal mentioned, "Could not connect to peer VDS" means that the
source vdsm failed to contact the destination one.

This can stem from a physical or logical network problem.
Can you ping from source to dest?
What happens when you log into the source host and run

  vdsClient -s fqdn-of-destination-host list

? Do you get any response? What happens if you disable your firewall?

Regards,
Dan.
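
For example, from the source host (hostnames taken from the engine log above;
whether nc is installed there is an assumption, telnet works just as well):

  ping -c 3 buran.brusselsairport.aero
  # vdsm listens on TCP 54321, the port shown in the engine log
  nc -zv buran.brusselsairport.aero 54321
  # and on the destination, check that iptables actually accepts that port
  iptables -L -n | grep 54321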
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-02-01 Thread Itamar Heim

On 01/31/2014 09:30 AM, Sven Kieske wrote:

Hi,

is there any documentation regarding all
allowed settings in the vdsm.conf?

I didn't find anything related in the rhev docs


that's a question for vdsm mailing list - cc-ing...
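
One practical way to see what a given vdsm build accepts is to read the config
module that ships with vdsm itself; the path below is only a guess and varies
between versions and distributions:

  rpm -ql vdsm | grep 'config.py$'
  # then browse the option names, defaults and comments, e.g.:
  less /usr/lib64/python2.6/site-packages/vdsm/config.py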



On 30.01.2014 21:43, Itamar Heim wrote:

On 01/30/2014 10:37 PM, Markus Stockhausen wrote:

From: Itamar Heim [ih...@redhat.com]
Sent: Thursday, 30 January 2014 21:25
To: Markus Stockhausen; ovirt-users
Subject: Re: [Users] Migration failed (previous migrations succeded)


Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tip for a parametrization?

Markus



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and
reduce max number of concurrent VMs, etc.


My migration network is IPoIB 10GBit. During our tests only one VM
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install.

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?


probably


And what settings do you suggest?


well, to begin with, 300MB/sec on 10GE (still allowing concurrent
migrations)
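
A minimal sketch of that change on a host, assuming the option still sits in
the [vars] section of /etc/vdsm/vdsm.conf as in older releases (value in MB/s,
matching the numbers above):

  [vars]
  # default is ~30 MB/s; on a dedicated 10G migration network something
  # like 300 MB/s still leaves headroom for concurrent migrations
  migration_max_bandwidth = 300

and then restart vdsmd (service vdsmd restart) for the change to take effect.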
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-01-31 Thread Sven Kieske
Hi,

is there any documentation regarding all
allowed settings in the vdsm.conf?

I didn't find anything related in the rhev docs

On 30.01.2014 21:43, Itamar Heim wrote:
 On 01/30/2014 10:37 PM, Markus Stockhausen wrote:
 From: Itamar Heim [ih...@redhat.com]
 Sent: Thursday, 30 January 2014 21:25
 To: Markus Stockhausen; ovirt-users
 Subject: Re: [Users] Migration failed (previous migrations succeded)


 Now I'm getting serious problems. During the migration the VM was
 doing a rather slow download at 1.5 MB/s, so the memory changed
 by 15 MB per 10 seconds. No wonder that a check every 10 seconds
 was not able to see any progress. I'm scared of what will happen if I
 want to migrate a moderately loaded system running a database.

 Any tip for a parametrization?

 Markus


 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.

 My migration network is IPoIB 10GBit. During our tests only one VM
 was migrated.  Bandwidth cap or number of concurrent VMs has not
 been changed after default install.

 Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
 
 probably
 
 And what settings do you suggest?
 
 well, to begin with, 300MB/sec on 10GE (still allowing concurrent
 migrations)
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-01-30 Thread Markus Stockhausen
 From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of 
 Markus Stockhausen [stockhau...@collogia.de]
 Sent: Thursday, 30 January 2014 18:05
 To: ovirt-users
 Subject: [Users] Migration failed (previous migrations succeded)
 
 Hello,
 
 we did some migration tests this day and all of a sudden the migration
 failed. That particular VM was moved around several times that day without
 any problems. During the migration the VM was running a download.

Found the reason. The memory was changing faster than the copy process
worked. The logs show:

Thread-289929::WARNING::2014-01-30 16:14:45,559::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(19MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:14:55,561::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(24MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:15:05,563::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(20MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...

Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed 
by 15 MB per 10 seconds. No wonder that a check every 10 seconds 
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tip for a parametrization?

Markus



This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-01-30 Thread Itamar Heim

On 01/30/2014 09:22 PM, Markus Stockhausen wrote:

From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of 
Markus Stockhausen [stockhau...@collogia.de]
Sent: Thursday, 30 January 2014 18:05
To: ovirt-users
Subject: [Users] Migration failed (previous migrations succeded)

Hello,

we did some migration tests this day and all of a sudden the migration
failed. That particular VM was moved around several times that day without
any problems. During the migration the VM was running a download.


Found the reason. The memory was changing faster than the copy process
worked. The logs show:

Thread-289929::WARNING::2014-01-30 16:14:45,559::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(19MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:14:55,561::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(24MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...
Thread-289929::WARNING::2014-01-30 16:15:05,563::vm::800::vm.Vm::(run) 
vmId=`ce64f528-9981-4ec6-a172-9d70a00a34cd`::Migration stalling: dataRemaining 
(20MiB) > smallest_dataRemaining (9MiB). Refer to RHBZ#919201.
...

Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tip for a parametrization?

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to 
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and 
reduce max number of concurrent VMs, etc.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-01-30 Thread Markus Stockhausen
  From: Itamar Heim [ih...@redhat.com]
  Sent: Thursday, 30 January 2014 21:25
  To: Markus Stockhausen; ovirt-users
  Subject: Re: [Users] Migration failed (previous migrations succeded)
  
 
  Now I'm getting serious problems. During the migration the VM was
  doing a rather slow download at 1.5 MB/s, so the memory changed
  by 15 MB per 10 seconds. No wonder that a check every 10 seconds
  was not able to see any progress. I'm scared of what will happen if I
  want to migrate a moderately loaded system running a database.
 
  Any tip for a parametrization?
 
  Markus
 
 
 what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
 migrate on 1Gb without congesting it.
 you could raise that if you have 10GB, or raise the bandwidth cap and
 reduce max number of concurrent VMs, etc.

My migration network is IPoIB 10GBit. During our tests only one VM 
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install. 

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?
And what settings do you suggest?

Markus


This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.

e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.

Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln

executive board:
Kadir Akin
Dr. Michael Höhnerbach

President of the supervisory board:
Hans Kristian Langva

Registry office: district court Cologne
Register number: HRB 52 497


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration failed (previous migrations succeded)

2014-01-30 Thread Itamar Heim

On 01/30/2014 10:37 PM, Markus Stockhausen wrote:

From: Itamar Heim [ih...@redhat.com]
Sent: Thursday, 30 January 2014 21:25
To: Markus Stockhausen; ovirt-users
Subject: Re: [Users] Migration failed (previous migrations succeded)


Now I'm getting serious problems. During the migration the VM was
doing a rather slow download at 1.5 MB/s, so the memory changed
by 15 MB per 10 seconds. No wonder that a check every 10 seconds
was not able to see any progress. I'm scared of what will happen if I
want to migrate a moderately loaded system running a database.

Any tip for a parametrization?

Markus



what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to
migrate on 1Gb without congesting it.
you could raise that if you have 10GB, or raise the bandwidth cap and
reduce max number of concurrent VMs, etc.


My migration network is IPoIB 10GBit. During our tests only one VM
was migrated.  Bandwidth cap or number of concurrent VMs has not
been changed after default install.

Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place?


probably


And what settings do you suggest?


well, to begin with, 300MB/sec on 10GE (still allowing concurrent 
migrations)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-14 Thread Michal Skrivanek

On 14 Jan 2014, at 08:43, Neil wrote:

 Sorry, before anyone wastes anymore time on this, I found the issue.
 
 My NFS ISO domain was attached to the other host node02, but the NFS
 mount wasn't accessible due to the iptables service being activated on
 boot once I had run all the OS updates a while back.
 
 I've disabled the service again, and the migration has completed successfully.

 
 Thank you all for your assistance, and I'm sorry for wasting your time.

Good that it's working for you.
I was also checking the internal error; it's 
http://gerrit.ovirt.org/#/c/21700, which is fixed by now.

Thanks,
michal

 
 Regards.
 
 Neil Wilson.
 
 
 
 On Tue, Jan 14, 2014 at 9:30 AM, Neil nwilson...@gmail.com wrote:
 Good morning everyone,
 
 Sorry for the late reply.
 
 Tom: unfortunately selinux is disabled on all the machines involved.
 
 Michal: Attached are the latest logs, I started the migration at
 8:43am and it returned an error/failed at 8:52am. The details of the
 migration are as follows.
 
 node01 (10.0.2.21) is the destination
 node03 (10.0.2.23) is the source
 engine01 (10.0.2.31) is the engine
 Tux is the VM
 
 Strangely enough the immediate pop up error I received in the GUI
 previously didn't appear this time and it looked like it might
 actually work, however after waiting quite a while it eventually
 returned an error in the Tasks as follows...
 
 2014-Jan-14, 09:05 Refresh image list failed for domain(s): bla-iso
 (ISO file type). Please check domain activity.
 2014-Jan-14, 09:05 Migration failed due to Error: Internal Engine
 Error. Trying to migrate to another Host (VM: tux, Source:
 node03.blabla.com, Destination: node01.blabla.com).
 2014-Jan-14, 08:52 Migration failed due to Error: Internal Engine
 Error (VM: tux, Source: node03.blabla.com, Destination:
 node01.blabla.com).
 
 and then also received an error in the engine.log which is why I've
 attached that as well.
 
 Please shout if you need any further info.
 
 Thank you so much.
 
 Regards.
 
 Neil Wilson.
 
 On Mon, Jan 13, 2014 at 4:07 PM, Tom Brown t...@ng23.net wrote:
 
 selinux issue?
 
 
 On 13 January 2014 12:48, Michal Skrivanek michal.skriva...@redhat.com
 wrote:
 
 
 On Jan 13, 2014, at 11:37 , Neil nwilson...@gmail.com wrote:
 
 Good morning everyone,
 
 Sorry to trouble you again, anyone have any ideas on what to try next?
 
 Hi Neil,
 hm, other than noise I don't really see any failures in migration.
 Can you attach both src and dst vdsm log with a hint which VM and at what
 time approx it failed for you?
  There are errors for one volume recurring all the time, but that doesn't
 look related to the migration
 
 Thanks,
 michal
 
 
 Thank you so much,
 
 Regards.
 
 Neil Wilson.
 
 On Fri, Jan 10, 2014 at 8:31 AM, Neil nwilson...@gmail.com wrote:
 Hi Dafna,
 
 Apologies for the late reply, I was out of my office yesterday.
 
 Just to get back to you on your questions.
 
 can you look at the vm dialogue and see what boot devices the vm has?
 Sorry I'm not sure where you want me to get this info from? Inside the
 ovirt GUI or on the VM itself.
 The VM has one 2TB LUN assigned. Then inside the VM this is the fstab
 parameters..
 
 [root@tux ~]# cat /etc/fstab
 /dev/VolGroup00/LogVol00 /   ext3defaults
 1 0
 /dev/vda1 /boot   ext3defaults1
 0
 tmpfs   /dev/shmtmpfs   defaults
 0 0
 devpts  /dev/ptsdevpts  gid=5,mode=620
 0 0
 sysfs   /syssysfs   defaults
 0 0
 proc/proc   procdefaults
 0 0
 /dev/VolGroup00/LogVol01 swapswapdefaults
 0 0
 /dev/VolGroup00/LogVol02 /homes  xfs
 defaults,usrquota,grpquota1 0
 
 
 can you write to the vm?
 Yes the machine is fully functioning, it's their main PDC and hosts
 all of their files.
 
 
 can you please dump the vm xml from libvirt? (it's one of the commands
 that you have in virsh)
 
 Below is the xml
 
 domain type='kvm' id='7'
 nametux/name
 uuid2736197b-6dc3-4155-9a29-9306ca64881d/uuid
 memory unit='KiB'8388608/memory
 currentMemory unit='KiB'8388608/currentMemory
 vcpu placement='static'4/vcpu
 cputune
   shares1020/shares
 /cputune
 sysinfo type='smbios'
   system
 entry name='manufacturer'oVirt/entry
 entry name='product'oVirt Node/entry
 entry name='version'6-4.el6.centos.10/entry
 entry name='serial'4C4C4544-0038-5310-8050-C6C04F34354A/entry
 entry name='uuid'2736197b-6dc3-4155-9a29-9306ca64881d/entry
   /system
 /sysinfo
 os
   type arch='x86_64' machine='rhel6.4.0'hvm/type
   smbios mode='sysinfo'/
 /os
 features
   acpi/
 /features
 cpu mode='custom' match='exact'
   model fallback='allow'Westmere/model
   topology sockets='1' cores='4' threads='1'/
 /cpu
 clock offset='variable' adjustment='0' basis='utc'
   timer name='rtc' tickpolicy='catchup'/
 /clock
 on_poweroffdestroy/on_poweroff
 

Re: [Users] Migration Failed

2014-01-13 Thread Neil
Good morning everyone,

Sorry to trouble you again, anyone have any ideas on what to try next?

Thank you so much,

Regards.

Neil Wilson.

On Fri, Jan 10, 2014 at 8:31 AM, Neil nwilson...@gmail.com wrote:
 Hi Dafna,

 Apologies for the late reply, I was out of my office yesterday.

 Just to get back to you on your questions.

 can you look at the vm dialogue and see what boot devices the vm has?
 Sorry I'm not sure where you want me to get this info from? Inside the
 ovirt GUI or on the VM itself.
 The VM has one 2TB LUN assigned. Then inside the VM this is the fstab
 parameters..

 [root@tux ~]# cat /etc/fstab
 /dev/VolGroup00/LogVol00 /   ext3defaults1 0
 /dev/vda1 /boot   ext3defaults1 0
 tmpfs   /dev/shmtmpfs   defaults0 0
 devpts  /dev/ptsdevpts  gid=5,mode=620  0 0
 sysfs   /syssysfs   defaults0 0
 proc/proc   procdefaults0 0
 /dev/VolGroup00/LogVol01 swapswapdefaults0 0
 /dev/VolGroup00/LogVol02 /homes  xfs
 defaults,usrquota,grpquota1 0


 can you write to the vm?
 Yes the machine is fully functioning, it's their main PDC and hosts
 all of their files.


 can you please dump the vm xml from libvirt? (it's one of the commands
 that you have in virsh)

 Below is the xml

 domain type='kvm' id='7'
   nametux/name
   uuid2736197b-6dc3-4155-9a29-9306ca64881d/uuid
   memory unit='KiB'8388608/memory
   currentMemory unit='KiB'8388608/currentMemory
   vcpu placement='static'4/vcpu
   cputune
 shares1020/shares
   /cputune
   sysinfo type='smbios'
 system
   entry name='manufacturer'oVirt/entry
   entry name='product'oVirt Node/entry
   entry name='version'6-4.el6.centos.10/entry
   entry name='serial'4C4C4544-0038-5310-8050-C6C04F34354A/entry
   entry name='uuid'2736197b-6dc3-4155-9a29-9306ca64881d/entry
 /system
   /sysinfo
   os
 type arch='x86_64' machine='rhel6.4.0'hvm/type
 smbios mode='sysinfo'/
   /os
   features
 acpi/
   /features
   cpu mode='custom' match='exact'
 model fallback='allow'Westmere/model
 topology sockets='1' cores='4' threads='1'/
   /cpu
   clock offset='variable' adjustment='0' basis='utc'
 timer name='rtc' tickpolicy='catchup'/
   /clock
   on_poweroffdestroy/on_poweroff
   on_rebootrestart/on_reboot
   on_crashdestroy/on_crash
   devices
 emulator/usr/libexec/qemu-kvm/emulator
 disk type='file' device='cdrom'
   driver name='qemu' type='raw'/
   source startupPolicy='optional'/
   target dev='hdc' bus='ide'/
   readonly/
   serial/serial
   alias name='ide0-1-0'/
   address type='drive' controller='0' bus='1' target='0' unit='0'/
 /disk
 disk type='block' device='disk' snapshot='no'
   driver name='qemu' type='raw' cache='none' error_policy='stop'
 io='native'/
   source 
 dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/
   target dev='vda' bus='virtio'/
   serialfd1a562a-3ba5-4ddb-a643-37912a6ae86f/serial
   boot order='1'/
   alias name='virtio-disk0'/
   address type='pci' domain='0x' bus='0x00' slot='0x05'
 function='0x0'/
 /disk
 controller type='ide' index='0'
   alias name='ide0'/
   address type='pci' domain='0x' bus='0x00' slot='0x01'
 function='0x1'/
 /controller
 controller type='virtio-serial' index='0'
   alias name='virtio-serial0'/
   address type='pci' domain='0x' bus='0x00' slot='0x04'
 function='0x0'/
 /controller
 controller type='usb' index='0'
   alias name='usb0'/
   address type='pci' domain='0x' bus='0x00' slot='0x01'
 function='0x2'/
 /controller
 interface type='bridge'
   mac address='00:1a:4a:a8:7a:00'/
   source bridge='ovirtmgmt'/
   target dev='vnet5'/
   model type='virtio'/
   filterref filter='vdsm-no-mac-spoofing'/
   link state='up'/
   alias name='net0'/
   address type='pci' domain='0x' bus='0x00' slot='0x03'
 function='0x0'/
 /interface
 channel type='unix'
   source mode='bind'
 path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/
   target type='virtio' name='com.redhat.rhevm.vdsm'/
   alias name='channel0'/
   address type='virtio-serial' controller='0' bus='0' port='1'/
 /channel
 channel type='unix'
   source mode='bind'
 path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/
   target type='virtio' name='org.qemu.guest_agent.0'/
   alias name='channel1'/
   address type='virtio-serial' controller='0' bus='0' port='2'/
 /channel
 channel type='spicevmc'
   target type='virtio' 

Re: [Users] Migration Failed

2014-01-13 Thread Michal Skrivanek

On Jan 13, 2014, at 11:37 , Neil nwilson...@gmail.com wrote:

 Good morning everyone,
 
 Sorry to trouble you again, anyone have any ideas on what to try next?

Hi Neil,
hm, other than noise I don't really see any failures in migration.
Can you attach both src and dst vdsm log with a hint which VM and at what time 
approx it failed for you?
There are errors for one volume recurring all the time, but that doesn't look 
related to the migration

Thanks,
michal

 
 Thank you so much,
 
 Regards.
 
 Neil Wilson.
 
 On Fri, Jan 10, 2014 at 8:31 AM, Neil nwilson...@gmail.com wrote:
 Hi Dafna,
 
 Apologies for the late reply, I was out of my office yesterday.
 
 Just to get back to you on your questions.
 
 can you look at the vm dialogue and see what boot devices the vm has?
 Sorry I'm not sure where you want me to get this info from? Inside the
 ovirt GUI or on the VM itself.
 The VM has one 2TB LUN assigned. Then inside the VM this is the fstab
 parameters..
 
 [root@tux ~]# cat /etc/fstab
 /dev/VolGroup00/LogVol00 /   ext3defaults1 0
 /dev/vda1 /boot   ext3defaults1 0
 tmpfs   /dev/shmtmpfs   defaults0 0
 devpts  /dev/ptsdevpts  gid=5,mode=620  0 0
 sysfs   /syssysfs   defaults0 0
 proc/proc   procdefaults0 0
 /dev/VolGroup00/LogVol01 swapswapdefaults0 0
 /dev/VolGroup00/LogVol02 /homes  xfs
 defaults,usrquota,grpquota1 0
 
 
 can you write to the vm?
 Yes the machine is fully functioning, it's their main PDC and hosts
 all of their files.
 
 
 can you please dump the vm xml from libvirt? (it's one of the commands
 that you have in virsh)
 
 Below is the xml
 
 domain type='kvm' id='7'
  nametux/name
  uuid2736197b-6dc3-4155-9a29-9306ca64881d/uuid
  memory unit='KiB'8388608/memory
  currentMemory unit='KiB'8388608/currentMemory
  vcpu placement='static'4/vcpu
  cputune
shares1020/shares
  /cputune
  sysinfo type='smbios'
system
  entry name='manufacturer'oVirt/entry
  entry name='product'oVirt Node/entry
  entry name='version'6-4.el6.centos.10/entry
  entry name='serial'4C4C4544-0038-5310-8050-C6C04F34354A/entry
  entry name='uuid'2736197b-6dc3-4155-9a29-9306ca64881d/entry
/system
  /sysinfo
  os
type arch='x86_64' machine='rhel6.4.0'hvm/type
smbios mode='sysinfo'/
  /os
  features
acpi/
  /features
  cpu mode='custom' match='exact'
model fallback='allow'Westmere/model
topology sockets='1' cores='4' threads='1'/
  /cpu
  clock offset='variable' adjustment='0' basis='utc'
timer name='rtc' tickpolicy='catchup'/
  /clock
  on_poweroffdestroy/on_poweroff
  on_rebootrestart/on_reboot
  on_crashdestroy/on_crash
  devices
emulator/usr/libexec/qemu-kvm/emulator
disk type='file' device='cdrom'
  driver name='qemu' type='raw'/
  source startupPolicy='optional'/
  target dev='hdc' bus='ide'/
  readonly/
  serial/serial
  alias name='ide0-1-0'/
  address type='drive' controller='0' bus='1' target='0' unit='0'/
/disk
disk type='block' device='disk' snapshot='no'
  driver name='qemu' type='raw' cache='none' error_policy='stop'
 io='native'/
  source 
 dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/
  target dev='vda' bus='virtio'/
  serialfd1a562a-3ba5-4ddb-a643-37912a6ae86f/serial
  boot order='1'/
  alias name='virtio-disk0'/
  address type='pci' domain='0x' bus='0x00' slot='0x05'
 function='0x0'/
/disk
controller type='ide' index='0'
  alias name='ide0'/
  address type='pci' domain='0x' bus='0x00' slot='0x01'
 function='0x1'/
/controller
controller type='virtio-serial' index='0'
  alias name='virtio-serial0'/
  address type='pci' domain='0x' bus='0x00' slot='0x04'
 function='0x0'/
/controller
controller type='usb' index='0'
  alias name='usb0'/
  address type='pci' domain='0x' bus='0x00' slot='0x01'
 function='0x2'/
/controller
interface type='bridge'
  mac address='00:1a:4a:a8:7a:00'/
  source bridge='ovirtmgmt'/
  target dev='vnet5'/
  model type='virtio'/
  filterref filter='vdsm-no-mac-spoofing'/
  link state='up'/
  alias name='net0'/
  address type='pci' domain='0x' bus='0x00' slot='0x03'
 function='0x0'/
/interface
channel type='unix'
  source mode='bind'
 path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/
  target type='virtio' name='com.redhat.rhevm.vdsm'/
  alias name='channel0'/
  address type='virtio-serial' controller='0' bus='0' port='1'/
/channel
channel type='unix'
  source mode='bind'
 

Re: [Users] Migration Failed

2014-01-13 Thread Neil
Sorry, before anyone wastes anymore time on this, I found the issue.

My NFS ISO domain was attached to the other host node02, but the NFS
mount wasn't accessible due to the iptables service being activated on
boot once I had run all the OS updates a while back.

I've disabled the service again, and the migration has completed successfully.

Thank you all for your assistance, and I'm sorry for wasting your time.

Regards.

Neil Wilson.



On Tue, Jan 14, 2014 at 9:30 AM, Neil nwilson...@gmail.com wrote:
 Good morning everyone,

 Sorry for the late reply.

 Tom: unfortunately selinux is disabled on all the machines involved.

 Michal: Attached are the latest logs, I started the migration at
 8:43am and it returned an error/failed at 8:52am. The details of the
 migration are as follows.

 node01 (10.0.2.21) is the destination
 node03 (10.0.2.23) is the source
 engine01 (10.0.2.31) is the engine
 Tux is the VM

 Strangely enough the immediate pop up error I received in the GUI
 previously didn't appear this time and it looked like it might
 actually work, however after waiting quite a while it eventually
 returned an error in the Tasks as follows...

 2014-Jan-14, 09:05 Refresh image list failed for domain(s): bla-iso
 (ISO file type). Please check domain activity.
 2014-Jan-14, 09:05 Migration failed due to Error: Internal Engine
 Error. Trying to migrate to another Host (VM: tux, Source:
 node03.blabla.com, Destination: node01.blabla.com).
 2014-Jan-14, 08:52 Migration failed due to Error: Internal Engine
 Error (VM: tux, Source: node03.blabla.com, Destination:
 node01.blabla.com).

 and then also received an error in the engine.log which is why I've
 attached that as well.

 Please shout if you need any further info.

 Thank you so much.

 Regards.

 Neil Wilson.

 On Mon, Jan 13, 2014 at 4:07 PM, Tom Brown t...@ng23.net wrote:

 selinux issue?


 On 13 January 2014 12:48, Michal Skrivanek michal.skriva...@redhat.com
 wrote:


 On Jan 13, 2014, at 11:37 , Neil nwilson...@gmail.com wrote:

  Good morning everyone,
 
  Sorry to trouble you again, anyone have any ideas on what to try next?

 Hi Neil,
 hm, other than noise I don't really see any failures in migration.
 Can you attach both src and dst vdsm log with a hint which VM and at what
 time approx it failed for you?
  There are errors for one volume recurring all the time, but that doesn't
 look related to the migration

 Thanks,
 michal

 
  Thank you so much,
 
  Regards.
 
  Neil Wilson.
 
  On Fri, Jan 10, 2014 at 8:31 AM, Neil nwilson...@gmail.com wrote:
  Hi Dafna,
 
  Apologies for the late reply, I was out of my office yesterday.
 
  Just to get back to you on your questions.
 
  can you look at the vm dialogue and see what boot devices the vm has?
  Sorry I'm not sure where you want me to get this info from? Inside the
  ovirt GUI or on the VM itself.
  The VM has one 2TB LUN assigned. Then inside the VM this is the fstab
  parameters..
 
  [root@tux ~]# cat /etc/fstab
  /dev/VolGroup00/LogVol00 /   ext3defaults
  1 0
  /dev/vda1 /boot   ext3defaults1
  0
  tmpfs   /dev/shmtmpfs   defaults
  0 0
  devpts  /dev/ptsdevpts  gid=5,mode=620
  0 0
  sysfs   /syssysfs   defaults
  0 0
  proc/proc   procdefaults
  0 0
  /dev/VolGroup00/LogVol01 swapswapdefaults
  0 0
  /dev/VolGroup00/LogVol02 /homes  xfs
  defaults,usrquota,grpquota1 0
 
 
  can you write to the vm?
  Yes the machine is fully functioning, it's their main PDC and hosts
  all of their files.
 
 
  can you please dump the vm xml from libvirt? (it's one of the commands
  that you have in virsh)
 
  Below is the xml
 
  domain type='kvm' id='7'
   nametux/name
   uuid2736197b-6dc3-4155-9a29-9306ca64881d/uuid
   memory unit='KiB'8388608/memory
   currentMemory unit='KiB'8388608/currentMemory
   vcpu placement='static'4/vcpu
   cputune
 shares1020/shares
   /cputune
   sysinfo type='smbios'
 system
   entry name='manufacturer'oVirt/entry
   entry name='product'oVirt Node/entry
   entry name='version'6-4.el6.centos.10/entry
   entry name='serial'4C4C4544-0038-5310-8050-C6C04F34354A/entry
   entry name='uuid'2736197b-6dc3-4155-9a29-9306ca64881d/entry
 /system
   /sysinfo
   os
 type arch='x86_64' machine='rhel6.4.0'hvm/type
 smbios mode='sysinfo'/
   /os
   features
 acpi/
   /features
   cpu mode='custom' match='exact'
 model fallback='allow'Westmere/model
 topology sockets='1' cores='4' threads='1'/
   /cpu
   clock offset='variable' adjustment='0' basis='utc'
 timer name='rtc' tickpolicy='catchup'/
   /clock
   on_poweroffdestroy/on_poweroff
   on_rebootrestart/on_reboot
   on_crashdestroy/on_crash
   devices
 emulator/usr/libexec/qemu-kvm/emulator
 disk type='file' 

Re: [Users] Migration Failed

2014-01-13 Thread Sven Kieske
Hi,

you may want to consider to enable your firewall
for security reasons.

The ports which nfs uses are configured under:

/etc/sysconfig/nfs

for EL 6.

There's no reason at all to run oVirt without
correct firewalling, not even in test environments
as you will want firewalling in production and
you have to test it anyway.
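
A rough sketch of what that looks like on EL6; the port numbers below are just
the example values commonly suggested for /etc/sysconfig/nfs, not oVirt
defaults:

  # /etc/sysconfig/nfs - pin the otherwise dynamic ports
  MOUNTD_PORT=892
  STATD_PORT=662
  LOCKD_TCPPORT=32803
  LOCKD_UDPPORT=32769

  # then open them, plus portmapper and nfsd, in iptables
  iptables -A INPUT -p tcp -m multiport --dports 111,2049,892,662,32803 -j ACCEPT
  iptables -A INPUT -p udp -m multiport --dports 111,2049,892,662,32769 -j ACCEPT
  service iptables save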

-- 
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-09 Thread Neil
Hi Dafna,

Apologies for the late reply, I was out of my office yesterday.

Just to get back to you on your questions.

can you look at the vm dialogue and see what boot devices the vm has?
Sorry I'm not sure where you want me to get this info from? Inside the
ovirt GUI or on the VM itself.
The VM has one 2TB LUN assigned. Then inside the VM this is the fstab
parameters..

[root@tux ~]# cat /etc/fstab
/dev/VolGroup00/LogVol00  /          ext3    defaults                    1 0
/dev/vda1                 /boot      ext3    defaults                    1 0
tmpfs                     /dev/shm   tmpfs   defaults                    0 0
devpts                    /dev/pts   devpts  gid=5,mode=620              0 0
sysfs                     /sys       sysfs   defaults                    0 0
proc                      /proc      proc    defaults                    0 0
/dev/VolGroup00/LogVol01  swap       swap    defaults                    0 0
/dev/VolGroup00/LogVol02  /homes     xfs     defaults,usrquota,grpquota  1 0


can you write to the vm?
Yes the machine is fully functioning, it's their main PDC and hosts
all of their files.


can you please dump the vm xml from libvirt? (it's one of the commands
that you have in virsh)

Below is the xml

<domain type='kvm' id='7'>
  <name>tux</name>
  <uuid>2736197b-6dc3-4155-9a29-9306ca64881d</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>6-4.el6.centos.10</entry>
      <entry name='serial'>4C4C4544-0038-5310-8050-C6C04F34354A</entry>
      <entry name='uuid'>2736197b-6dc3-4155-9a29-9306ca64881d</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='rhel6.4.0'>hvm</type>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Westmere</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <serial></serial>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/fd1a562a-3ba5-4ddb-a643-37912a6ae86f/f747ba2b-98e1-47f5-805b-6bb173bfd6ff'/>
      <target dev='vda' bus='virtio'/>
      <serial>fd1a562a-3ba5-4ddb-a643-37912a6ae86f</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:a8:7a:00'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet5'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <alias name='net0'/>
      <address type='pci' domain='0x' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/tux.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <graphics type='spice' port='5912' tlsPort='5913' autoport='yes' listen='0' keymap='en-us' passwdValidTo='2013-09-20T07:56:54' connected='disconnect'>
      <listen type='address' 

Re: [Users] Migration Failed

2014-01-08 Thread Dafna Ron
Thread-847747::INFO::2014-01-07 
14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect: 
inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3')
Thread-847747::INFO::2014-01-07 
14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect: 
inappropriateDevices, Return response: None


Please check if the VMs were booted with a CD...


bject at 0x7fb1f00cbbd0 log:logUtils.SimpleLogAdapter instance at 
0x7fb1f00be7e8 name:hdc networkDev:False path: readonly:True reqsize:0 
serial: truesize:0 *type:cdrom* volExtensionChunk:1024 
watermarkLimit:536870912

Traceback (most recent call last):
  File /usr/share/vdsm/clientIF.py, line 356, in teardownVolumePath
res = self.irs.teardownImage(drive['domainID'],
  File /usr/share/vdsm/vm.py, line 1386, in __getitem__
raise KeyError(key)
KeyError: 'domainID'
Thread-847747::WARNING::2014-01-07 
14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a 
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 
VOLWM_FREE_PCT:50 _blockDev:True _checkIoTuneCategories:bound method D
rive._checkIoTuneCategories of vm.Drive object at 0x7fb1f00cbc10 
_customize:bound method Drive._customize of vm.Drive object at 
0x7fb1f00cbc10 _deviceXML:disk device=disk snapshot=no type=block
  driver cache=none error_policy=stop io=native name=qemu 
type=raw/
  source 
dev=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258/

  target bus=ide dev=hda/
serial9f16f896-1da3-4f9a-a305-ac9c4f51a482/serial
  alias name=ide0-0-0/
  address bus=0 controller=0 target=0 type=drive unit=0/
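
A quick, read-only way to check that on the host currently running the VM,
assuming virsh is usable there (the -r flag avoids needing SASL credentials on
an oVirt host):

  virsh -r domblklist tux
  # an attached ISO would show up as a source path on the hdc cdrom target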


On 01/08/2014 06:28 AM, Neil wrote:

Hi Dafna,

Thanks for the reply.

Attached is the log from the source server (node03).

I'll reply to your other questions as soon as I'm back in the office
this afternoon, have to run off to a meeting.

Regards.

Neil Wilson.


On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron d...@redhat.com wrote:

Ok... several things :)

1. for migration we need to see vdsm logs from both src and dst.

2. Is it possible that the vm has an iso attached? because I see that you
are having problems with the iso domain:

2014-01-07 14:26:27,714 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso was
reported with error code 358


Thread-1165153::DEBUG::2014-01-07
13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown
libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'

hread-19::ERROR::2014-01-07
13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
e9ab725d-69c1-4a59-b225-b995d095c289 not found
Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Thread-19::ERROR::2014-01-07
13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
monitoring information
Traceback (most recent call last):
   File /usr/share/vdsm/storage/domainMonitor.py, line 190, in
_monitorDomain
 self.domain = sdCache.produce(self.sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Dummy-29013::DEBUG::2014-01-07
13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 'dd
if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000' (cwd None)

3. The migration fails with libvirt error but we need the trace from the
second log:

Thread-1165153::DEBUG::2014-01-07 13:39:42,451::sampling::292::vm.Vm::(stop)
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
Thread-1163583::DEBUG::2014-01-07 13:39:42,452::sampling::323::vm.Vm::(run)
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
Thread-1165153::DEBUG::2014-01-07
13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) Unknown
libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no

Re: [Users] Migration Failed

2014-01-08 Thread Neil
Hi guys,

Apologies for the late reply.

The VM (Tux) was created about two years ago; it was converted from a
physical machine using Clonezilla. It's been migrated a number of
times in the past; only now, when trying to move it off node03, is it
giving this error.

I've looked for any attached images/CDs and unfortunately found none.

Thank you so much for your assistance so far.

Regards.

Neil Wilson.



On Wed, Jan 8, 2014 at 12:23 PM, Dafna Ron d...@redhat.com wrote:
 Thread-847747::INFO::2014-01-07
 14:30:32,353::logUtils::44::dispatcher::(wrapper) Run and protect:
 inappropriateDevices(thiefId='63da7faa-f92a-4652-90f2-b6660a4fb7b3')
 Thread-847747::INFO::2014-01-07
 14:30:32,354::logUtils::47::dispatcher::(wrapper) Run and protect:
 inappropriateDevices, Return response: None

 Please check if the vm's were booted with a cd...


 bject at 0x7fb1f00cbbd0 log:logUtils.SimpleLogAdapter instance at
 0x7fb1f00be7e8 name:hdc networkDev:False path: readonly:True reqsize:0
 serial: truesize:0 *type:cdrom* volExtensionChunk:1024
 watermarkLimit:536870912

 Traceback (most recent call last):
   File /usr/share/vdsm/clientIF.py, line 356, in teardownVolumePath
 res = self.irs.teardownImage(drive['domainID'],
   File /usr/share/vdsm/vm.py, line 1386, in __getitem__
 raise KeyError(key)
 KeyError: 'domainID'
 Thread-847747::WARNING::2014-01-07
 14:30:32,351::clientIF::362::vds::(teardownVolumePath) Drive is not a vdsm
 image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2 VOLWM_FREE_PCT:50
 _blockDev:True _checkIoTuneCategories:bound method Drive._checkIoTuneCategories of vm.Drive object at 0x7fb1f00cbc10
 _customize:bound method Drive._customize of vm.Drive object at
 0x7fb1f00cbc10 _deviceXML:<disk device="disk" snapshot="no" type="block">
   <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
   <source dev="/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/0e6991ae-6238-4c61-96d2-ca8fed35161e/images/9f16f896-1da3-4f9a-a305-ac9c4f51a482/e04c6600-abb9-4ebc-a9b3-77b6c536e258"/>
   <target bus="ide" dev="hda"/>
   <serial>9f16f896-1da3-4f9a-a305-ac9c4f51a482</serial>
   <alias name="ide0-0-0"/>
   <address bus="0" controller="0" target="0" type="drive" unit="0"/>



 On 01/08/2014 06:28 AM, Neil wrote:

 Hi Dafna,

 Thanks for the reply.

 Attached is the log from the source server (node03).

 I'll reply to your other questions as soon as I'm back in the office
 this afternoon, have to run off to a meeting.

 Regards.

 Neil Wilson.


 On Tue, Jan 7, 2014 at 8:13 PM, Dafna Ron d...@redhat.com wrote:

 Ok... several things :)

 1. for migration we need to see vdsm logs from both src and dst.

 2. Is it possible that the vm has an iso attached? because I see that you
 are having problems with the iso domain:

 2014-01-07 14:26:27,714 ERROR
 [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
 (pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso
 was
 reported with error code 358


 Thread-1165153::DEBUG::2014-01-07
 13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper)
 Unknown
 libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not found: no
 domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'

 Thread-19::ERROR::2014-01-07
 13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 e9ab725d-69c1-4a59-b225-b995d095c289 not found
 Traceback (most recent call last):
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Thread-19::ERROR::2014-01-07

 13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain)
 Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289
 monitoring information
 Traceback (most recent call last):
File /usr/share/vdsm/storage/domainMonitor.py, line 190, in
 _monitorDomain
  self.domain = sdCache.produce(self.sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 98, in produce
  domain.getRealDomain()
File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
  return self._cache._realProduce(self._sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
  domain = self._findDomain(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in
 _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'e9ab725d-69c1-4a59-b225-b995d095c289',)
 Dummy-29013::DEBUG::2014-01-07
 13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail)
 'dd

 if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox
 iflag=direct,fullblock count=1 bs=1024000' (cwd None)

 3. The 

Re: [Users] Migration Failed

2014-01-07 Thread Elad Ben Aharon
Is it still in the same condition? 
If yes, please add the outputs from both hosts for:

#virsh -r list
#pgrep qemu
#vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working 
in insecure mode)
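
If you end up collecting this on both hosts more than once, a throwaway helper along these lines saves some typing (just a sketch: it assumes virsh, pgrep and vdsClient are on PATH and that vdsm runs in secure mode; drop the '-s 0' otherwise):

#!/usr/bin/env python
# Convenience sketch: run the three diagnostics above on the local host.
import subprocess

COMMANDS = [
    ["virsh", "-r", "list"],
    ["pgrep", "qemu"],
    ["vdsClient", "-s", "0", "list", "table"],
]

for cmd in COMMANDS:
    print("### " + " ".join(cmd))
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    print(out)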


Thanks,
  
Elad Ben Aharon
RHEV-QE storage team




- Original Message -
From: Neil nwilson...@gmail.com
To: users@ovirt.org
Sent: Tuesday, January 7, 2014 4:21:43 PM
Subject: [Users] Migration Failed

Hi guys,

I've tried to migrate a VM from one host (node03) to another (node01),
and it failed to migrate; the VM (tux) remained on the original
host. I've now tried to migrate the same VM again, and it picks up
that the previous migration is still in progress and refuses to
migrate.

I've checked for the KVM process on each of the hosts and the VM is
definitely still running on node03 so there doesn't appear to be any
chance of the VM trying to run on both hosts (which I've had before
which is very scary).

These are my versions... and attached are my engine.log and my vdsm.log

Centos 6.5
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-host-deploy-1.1.2-1.el6.noarch
ovirt-release-el6-9-1.noarch
ovirt-engine-setup-3.3.1-2.el6.noarch
ovirt-engine-3.3.1-2.el6.noarch
ovirt-host-deploy-java-1.1.2-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.1-2.el6.noarch
ovirt-engine-cli-3.3.0.6-1.el6.noarch
ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
ovirt-engine-userportal-3.3.1-2.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-engine-tools-3.3.1-2.el6.noarch
ovirt-engine-lib-3.3.1-2.el6.noarch
ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
ovirt-engine-backend-3.3.1-2.el6.noarch
ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
ovirt-engine-restapi-3.3.1-2.el6.noarch


vdsm-python-4.13.0-11.el6.x86_64
vdsm-cli-4.13.0-11.el6.noarch
vdsm-xmlrpc-4.13.0-11.el6.noarch
vdsm-4.13.0-11.el6.x86_64
vdsm-python-cpopen-4.13.0-11.el6.x86_64

I've had a few issues with this particular installation in the past,
as it's from a very old pre release of ovirt, then upgrading to the
dreyou repo, then finally moving to the official Centos ovirt repo.

Thanks, any help is greatly appreciated.

Regards.

Neil Wilson.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-07 Thread Neil
Hi Elad,

Thanks for assisting me. Yes, the same condition exists; if I try to
migrate Tux it says "The VM Tux is being migrated".


Below are the details requested.


[root@node01 ~]# virsh -r list
 Id    Name                 State

 1 adam   running

[root@node01 ~]# pgrep qemu
11232
[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


[root@node03 ~]# virsh -r list
 Id    Name                 State

 7     tux                  running

[root@node03 ~]# pgrep qemu
32333
[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

Thanks.

Regards.

Neil Wilson.


On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com wrote:
 Is it still in the same condition?
 If yes, please add the outputs from both hosts for:

 #virsh -r list
 #pgrep qemu
 #vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are 
 working in insecure mode)


 Thanks,

 Elad Ben Aharon
 RHEV-QE storage team




 - Original Message -
 From: Neil nwilson...@gmail.com
 To: users@ovirt.org
 Sent: Tuesday, January 7, 2014 4:21:43 PM
 Subject: [Users] Migration Failed

 Hi guys,

 I've tried to migrate a VM from one host (node03) to another (node01),
 and it failed to migrate; the VM (tux) remained on the original
 host. I've now tried to migrate the same VM again, and it picks up
 that the previous migration is still in progress and refuses to
 migrate.

 I've checked for the KVM process on each of the hosts and the VM is
 definitely still running on node03 so there doesn't appear to be any
 chance of the VM trying to run on both hosts (which I've had before
 which is very scary).

 These are my versions... and attached are my engine.log and my vdsm.log

 Centos 6.5
 ovirt-iso-uploader-3.3.1-1.el6.noarch
 ovirt-host-deploy-1.1.2-1.el6.noarch
 ovirt-release-el6-9-1.noarch
 ovirt-engine-setup-3.3.1-2.el6.noarch
 ovirt-engine-3.3.1-2.el6.noarch
 ovirt-host-deploy-java-1.1.2-1.el6.noarch
 ovirt-image-uploader-3.3.1-1.el6.noarch
 ovirt-engine-dbscripts-3.3.1-2.el6.noarch
 ovirt-engine-cli-3.3.0.6-1.el6.noarch
 ovirt-engine-websocket-proxy-3.3.1-2.el6.noarch
 ovirt-engine-userportal-3.3.1-2.el6.noarch
 ovirt-log-collector-3.3.1-1.el6.noarch
 ovirt-engine-tools-3.3.1-2.el6.noarch
 ovirt-engine-lib-3.3.1-2.el6.noarch
 ovirt-engine-webadmin-portal-3.3.1-2.el6.noarch
 ovirt-engine-backend-3.3.1-2.el6.noarch
 ovirt-engine-sdk-python-3.3.0.8-1.el6.noarch
 ovirt-engine-restapi-3.3.1-2.el6.noarch


 vdsm-python-4.13.0-11.el6.x86_64
 vdsm-cli-4.13.0-11.el6.noarch
 vdsm-xmlrpc-4.13.0-11.el6.noarch
 vdsm-4.13.0-11.el6.x86_64
 vdsm-python-cpopen-4.13.0-11.el6.x86_64

 I've had a few issues with this particular installation in the past,
 as it's from a very old pre release of ovirt, then upgrading to the
 dreyou repo, then finally moving to the official Centos ovirt repo.

 Thanks, any help is greatly appreciated.

 Regards.

 Neil Wilson.

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration Failed

2014-01-07 Thread Dafna Ron

Ok... several things :)

1. for migration we need to see vdsm logs from both src and dst.

2. Is it possible that the vm has an iso attached? because I see that 
you are having problems with the iso domain:


2014-01-07 14:26:27,714 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-6-thread-48) Domain e9ab725d-69c1-4a59-b225-b995d095c289:bla-iso 
was reported with error code 358



Thread-1165153::DEBUG::2014-01-07 
13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) 
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not 
found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'


Thread-19::ERROR::2014-01-07 
13:01:02,621::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 
e9ab725d-69c1-4a59-b225-b995d095c289 not found

Traceback (most recent call last):
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Thread-19::ERROR::2014-01-07 
13:01:02,622::domainMonitor::225::Storage.DomainMonitorThread::(_monitorDomain) 
Error while collecting domain e9ab725d-69c1-4a59-b225-b995d095c289 
monitoring information

Traceback (most recent call last):
  File /usr/share/vdsm/storage/domainMonitor.py, line 190, in 
_monitorDomain

self.domain = sdCache.produce(self.sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 98, in produce
domain.getRealDomain()
  File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
domain = self._findDomain(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'e9ab725d-69c1-4a59-b225-b995d095c289',)
Dummy-29013::DEBUG::2014-01-07 
13:01:03,507::storage_mailbox::733::Storage.Misc.excCmd::(_checkForMail) 
'dd 
if=/rhev/data-center/28adaf38-a4f6-11e1-a859-cb68949043e4/mastersd/dom_md/inbox 
iflag=direct,fullblock count=1 bs=1024000' (cwd None)
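
Those StorageDomainDoesNotExist traces mean the host no longer sees the bla-iso domain at all, which is exactly what would break teardown and migration of a VM with a CD attached from it. A quick host-side check (a sketch only; the pool UUID is taken from the dd line above, so adjust the paths if your layout differs) is to look for the domain UUID under the data-center tree that vdsm maintains:

# Sketch: is the ISO domain still linked under the storage pool on this host?
# Pool and domain UUIDs are the ones from the log excerpts above.
import os

POOL = "28adaf38-a4f6-11e1-a859-cb68949043e4"
ISO_DOMAIN = "e9ab725d-69c1-4a59-b225-b995d095c289"

path = os.path.join("/rhev/data-center", POOL, ISO_DOMAIN)
if os.path.exists(path):
    print("domain link present: " + path)
else:
    print("domain " + ISO_DOMAIN + " is not linked under pool " + POOL +
          " - matches the StorageDomainDoesNotExist errors above")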

3. The migration fails with libvirt error but we need the trace from the 
second log:


Thread-1165153::DEBUG::2014-01-07 
13:39:42,451::sampling::292::vm.Vm::(stop) 
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stop statistics collection
Thread-1163583::DEBUG::2014-01-07 
13:39:42,452::sampling::323::vm.Vm::(run) 
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats thread finished
Thread-1165153::DEBUG::2014-01-07 
13:39:42,460::libvirtconnection::108::libvirtconnection::(wrapper) 
Unknown libvirterror: ecode: 42 edom: 10 level: 2 message: Domain not 
found: no domain with matching uuid '63da7faa-f92a-4652-90f2-b6660a4fb7b3'
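
The libvirt error itself only says that libvirtd on that host has no domain with the given UUID, which is expected on a host the VM never reached but a real problem if it comes from the host the VM is supposed to be running on. A one-off check with the libvirt Python bindings (a sketch; it assumes libvirt-python is installed, which it should be on a vdsm host, and is run locally on each node) shows whether the local libvirtd knows the UUID:

# Sketch: does the local libvirtd know this VM UUID at all?
import libvirt

UUID = "63da7faa-f92a-4652-90f2-b6660a4fb7b3"  # UUID from the error above

conn = libvirt.openReadOnly("qemu:///system")
try:
    dom = conn.lookupByUUIDString(UUID)
    print("found domain %s, state %s" % (dom.name(), dom.state()))
except libvirt.libvirtError:
    print("no domain with uuid %s on this host" % UUID)
finally:
    conn.close()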


4. But I am worried about this and would like more info about this vm...

Thread-247::ERROR::2014-01-07 
15:35:14,868::sampling::355::vm.Vm::(collect) 
vmId=`63da7faa-f92a-4652-90f2-b6660a4fb7b3`::Stats function failed: 
AdvancedStatsFunction _highWrite at 0x2ce0998

Traceback (most recent call last):
  File /usr/share/vdsm/sampling.py, line 351, in collect
statsFunction()
  File /usr/share/vdsm/sampling.py, line 226, in __call__
retValue = self._function(*args, **kwargs)
  File /usr/share/vdsm/vm.py, line 509, in _highWrite
if not vmDrive.blockDev or vmDrive.format != 'cow':
AttributeError: 'Drive' object has no attribute 'format'

How did you create this VM? Was it from the UI or from a script, and 
what parameters did you use?
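
In other words, the periodic _highWrite stats function expects every Drive to carry a 'format' attribute, and this converted VM has at least one drive without it, which is why the question about how the VM was defined matters. Just to illustrate the failing check (this is not the patch that went into vdsm), a defensive version of the test quoted from vm.py would treat a missing format as "not cow" and skip the drive:

# Illustration only: the vm.py check quoted above, written so a Drive
# without a 'format' attribute is skipped instead of raising AttributeError.
def drive_needs_watermark_check(vm_drive):
    block_dev = getattr(vm_drive, 'blockDev', False)
    fmt = getattr(vm_drive, 'format', None)
    # _highWrite only watches block-backed qcow ('cow') volumes.
    return bool(block_dev) and fmt == 'cow'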


Thanks,

Dafna


On 01/07/2014 04:34 PM, Neil wrote:

Hi Elad,

Thanks for assisting me. Yes, the same condition exists; if I try to
migrate Tux it says "The VM Tux is being migrated".


Below are the details requested.


[root@node01 ~]# virsh -r list
  Id    Name                 State

  1 adam   running

[root@node01 ~]# pgrep qemu
11232
[root@node01 ~]# vdsClient -s 0 list table
63da7faa-f92a-4652-90f2-b6660a4fb7b3  11232  adam Up


[root@node03 ~]# virsh -r list
  Id    Name                 State

  7     tux                  running

[root@node03 ~]# pgrep qemu
32333
[root@node03 ~]# vdsClient -s 0 list table
2736197b-6dc3-4155-9a29-9306ca64881d  32333  tux  Up

Thanks.

Regards.

Neil Wilson.


On Tue, Jan 7, 2014 at 4:43 PM, Elad Ben Aharon ebena...@redhat.com wrote:

Is it still in the same condition?
If yes, please add the outputs from both hosts for:

#virsh -r list
#pgrep qemu
#vdsClient -s 0 list table (or 'vdsClient 0 list table' if you are working 
in insecure mode)


Thanks,

Elad Ben 

Re: [Users] migration failed

2012-02-19 Thread зоррыч
Thank you!
The wrong hostname was set in /etc/hostname on one of the nodes.
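
Since this one turned out to be pure name resolution, it is worth confirming up front that every node (and the engine) can resolve every other node's name, whether via DNS or /etc/hosts, before digging into logs. A tiny check along these lines, with placeholder hostnames to replace with your own:

# Minimal resolution check (sketch); the hostnames below are placeholders.
import socket

HOSTS = ["node01.example.com", "node03.example.com", "engine.example.com"]

for name in HOSTS:
    try:
        print("%s -> %s" % (name, socket.gethostbyname(name)))
    except socket.error as err:
        print("%s does not resolve: %s" % (name, err))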




-Original Message-
From: Nathan Stratton [mailto:nat...@robotics.net] 
Sent: Friday, February 17, 2012 8:27 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] migration failed

On Fri, 17 Feb 2012, зоррыч wrote:

 How do I fix it?
 I checked the hostnames on both nodes and found that they resolve
 correctly (there is an entry in /etc/hostname).
 The hostname is not registered in DNS (!)

Have you tried entering them all in /etc/hosts?


Nathan Stratton                      CTO, BlinkMind, Inc.
nathan at robotics.net               nathan at blinkmind.com
http://www.robotics.net              http://www.blinkmind.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users