Re: [Users] ovirt 3.2 migrations failing

2013-02-05 Thread Douglas Landgraf

Hi Jonathan,

On 01/10/2013 08:56 AM, Jonathan Horne wrote:

[root@d0lppn031 ~]# rpm -qa|grep vdsm
vdsm-4.10.2-0.101.26.el6.x86_64
vdsm-python-4.10.2-0.101.26.el6.x86_64
vdsm-xmlrpc-4.10.2-0.101.26.el6.noarch
vdsm-cli-4.10.2-0.101.26.el6.noarch

yes, i know I'm running the centos version of an alpha from mid-december.
i can't wait for dreyou to repackage after the 1/30 release, i am required
to use centos and 3.2 packages, and our production deployment that will
make or break the company is days away.  i have to figure something out.




Can you please try the latest bits in the oVirt nightly repo for EL6?

Add to your host:
[ovirt-nightly]
name=Nightly builds of the oVirt project
baseurl=http://ovirt.org/releases/nightly/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0

Additionally, the EPEL repo is required, and a few RPM packages must be updated from:
http://dougsland.fedorapeople.org/oVirt3.2-EL6/
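In script form, those two steps look roughly like the sketch below (the repo stanza is copied from above; REPO_DIR is a stand-in for /etc/yum.repos.d so the sketch can run unprivileged, and the yum commands in the comment are the assumed follow-up):

```shell
# Write the nightly repo stanza; on a real host set REPO_DIR=/etc/yum.repos.d
# and follow up as root with: yum clean all && yum update 'vdsm*'
REPO_DIR="${REPO_DIR:-.}"
cat > "$REPO_DIR/ovirt-nightly.repo" <<'EOF'
[ovirt-nightly]
name=Nightly builds of the oVirt project
baseurl=http://ovirt.org/releases/nightly/rpm/EL/$releasever/
enabled=1
skip_if_unavailable=1
gpgcheck=0
EOF
# Sanity-check that the stanza landed where yum will look for it.
grep -c '^\[ovirt-nightly\]' "$REPO_DIR/ovirt-nightly.repo"
```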


Thanks!

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.2 migrations failing

2013-01-10 Thread Jonathan Horne
[root@d0lppn031 ~]# rpm -qa|grep vdsm
vdsm-4.10.2-0.101.26.el6.x86_64
vdsm-python-4.10.2-0.101.26.el6.x86_64
vdsm-xmlrpc-4.10.2-0.101.26.el6.noarch
vdsm-cli-4.10.2-0.101.26.el6.noarch

yes, i know I'm running the centos version of an alpha from mid-december.
i can't wait for dreyou to repackage after the 1/30 release, i am required
to use centos and 3.2 packages, and our production deployment that will
make or break the company is days away.  i have to figure something out.





On 1/10/13 3:54 AM, "Dan Kenigsberg"  wrote:

>On Wed, Jan 09, 2013 at 06:05:04PM -0500, Jeff Bailey wrote:
>>
>> On 1/9/2013 5:52 PM, Jonathan Horne wrote:
>> >yep, sure enough:
>> >
>> >
>> >[root@d0lppn032 ~]# cat /var/log/libvirtd.log|grep CA
>> >2013-01-09 22:45:30.310+: 4413: error :
>> >virNetTLSContextCheckCertFile:92 : Cannot read CA certificate
>> >'/etc/pki/CA/cacert.pem': No such file or directory
>> >[root@d0lppn032 ~]# locate cacert.pem
>> >/etc/pki/vdsm/certs/cacert.pem
>>
>> I just installed symbolic links for the CA cert and the client cert
>> and key to the vdsm certs/key on the hosts and everything was fine
>> then.
>
>Jonathan, which vdsm version do you have installed?
>As Jeff mentioned, we might have already fixed this issue.



This is a PRIVATE message. If you are not the intended recipient, please delete 
without copying and kindly advise us by e-mail of the mistake in delivery. 
NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to 
any order or other contract unless pursuant to explicit written agreement or 
government initiative expressly permitting the use of e-mail for such purpose.


Re: [Users] ovirt 3.2 migrations failing

2013-01-10 Thread Dan Kenigsberg
On Wed, Jan 09, 2013 at 06:05:04PM -0500, Jeff Bailey wrote:
> 
> On 1/9/2013 5:52 PM, Jonathan Horne wrote:
> >yep, sure enough:
> >
> >
> >[root@d0lppn032 ~]# cat /var/log/libvirtd.log|grep CA
> >2013-01-09 22:45:30.310+: 4413: error :
> >virNetTLSContextCheckCertFile:92 : Cannot read CA certificate
> >'/etc/pki/CA/cacert.pem': No such file or directory
> >[root@d0lppn032 ~]# locate cacert.pem
> >/etc/pki/vdsm/certs/cacert.pem
> 
> I just installed symbolic links for the CA cert and the client cert
> and key to the vdsm certs/key on the hosts and everything was fine
> then.

Jonathan, which vdsm version do you have installed?
As Jeff mentioned, we might have already fixed this issue.
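For anyone hitting the same "Cannot read CA certificate" error, a sketch of the symlink workaround Jeff describes (PKI_ROOT stands in for /etc/pki so this can run anywhere; on a real host the client cert and key under /etc/pki/vdsm would be linked the same way, and the exact filenames there are an assumption):

```shell
# Point libvirt's expected CA path at the cert vdsm actually ships.
PKI_ROOT="${PKI_ROOT:-$(pwd)/pki}"        # stand-in for /etc/pki
mkdir -p "$PKI_ROOT/CA" "$PKI_ROOT/vdsm/certs"
touch "$PKI_ROOT/vdsm/certs/cacert.pem"   # placeholder for the real vdsm CA cert
ln -sf "$PKI_ROOT/vdsm/certs/cacert.pem" "$PKI_ROOT/CA/cacert.pem"
readlink "$PKI_ROOT/CA/cacert.pem"        # shows where the link points
```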


Re: [Users] ovirt 3.2 migrations failing

2013-01-09 Thread Jeff Bailey
Thread-40128::DEBUG::2013-01-09
07:49:09,147::BindingXMLRPC::915::vds::(wrapper) return
vmMigrationCreate
with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0,
'params': {'status': 'Migration Destination', 'acpiEnable': 'true',
'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'pid': '0', 'transparentHugePages': 'true', 'displaySecurePort': '-1',
'timeOffset': 1, 'cpuType': 'SandyBridge', 'smp': '4', 'migrationDest':
'libvirt', 'custom': {}, 'vmType': 'kvm', 'spiceSslCipherSuite':
'DEFAULT', 'memSize': 4096, 'vmName': 'd0lpvf051', 'nice': '0',
'username': 'root', 'vmId': 'e00adf83-78d2-4f65-a259-0e01680f57fd',
'displayIp': '0', 'keyboardLayout': 'en-us', 'displayPort': '-1',
'smartcardEnable': 'false', 'guestIPs': '10.32.0.51', 'nicModel':
'rtl8139,pv', 'smpCoresPerSocket': '1', 'kvmEnable': 'true',
'pitReinjection': 'false', 'devices': [{'device': 'usb', 'alias':
'usb0',
'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00',
'domain':
'0x', 'type': 'pci', 'function': '0x2'}}, {'device': 'ide', 'alias':
'ide0', 'type': 'controller', 'address': {'slot': '0x01', 'bus': '0x00',
'domain': '0x', 'type': 'pci', 'function': '0x1'}}, {'device':
'virtio-serial', 'alias': 'virtio-serial0', 'type': 'controller',
'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x', 'type':
'pci', 'function': '0x0'}}, {'specParams': {'vram': '65536'}, 'alias':
'video0', 'deviceId': '99c46e79-710c-4f60-87eb-904fa496f0a7', 'address':
{'slot': '0x02', 'bus': '0x00', 'domain': '0x', 'type': 'pci',
'function': '0x0'}, 'device': 'qxl', 'type': 'video'}, {'nicModel':
'pv',
'macAddr': '00:1a:4a:20:00:9e', 'network': 'ovirtmgmt', 'alias': 'net0',
'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId':
'054f3e29-d531-438b-8d1c-09cbed863bea', 'address': {'slot': '0x03',
'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device':
'bridge', 'type': 'interface', 'name': 'vnet0'}, {'nicModel': 'pv',
'macAddr': '00:1a:4a:20:00:9f', 'network': 'ovirtmgmt', 'alias': 'net1',
'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId':
'48ff9261-fbd8-4f0f-a823-4c6451226b95', 'address': {'slot': '0x04',
'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device':
'bridge', 'type': 'interface', 'name': 'vnet1'}, {'target': 4194304,
'specParams': {'model': 'virtio'}, 'alias': 'balloon0', 'deviceId':
'df32c652-f9c6-42db-ae88-579ee90aa714', 'address': {'slot': '0x07',
'bus':
'0x00', 'domain': '0x', 'type': 'pci', 'function': '0x0'}, 'device':
'memballoon', 'type': 'balloon'}, {'index': '2', 'iface': 'ide', 'name':
'hdc', 'alias': 'ide0-1-0', 'shared': 'false', 'specParams': {'path':
''},
'readonly': 'True', 'deviceId': '62d5042c-3b75-4cb3-b175-373f66c7356b',
'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target':
'0',
'unit': '0'}, 'device': 'cdrom', 'path': '', 'type': 'disk'},
{'address':
{'slot': '0x06', 'bus': '0x00', 'domain': '0x', 'type': 'pci',
'function': '0x0'}, 'index': 0, 'iface': 'virtio', 'apparentsize':
'5368709120', 'alias': 'virtio-disk0', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212', 'readonly': 'False', 'shared':
'false', 'truesize': '5368709120', 'type': 'disk', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'reqsize': '0', 'format': 'cow',
'deviceId': '0a9e06f9-2e85-4410-930d-7e381128c212', 'poolID':
'fdab6642-8928-43d5-8ca1-b68792d70729', 'device': 'disk', 'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'propagateErrors': 'off', 'optional': 'false',
'name': 'vda', 'volumeID': '93410a32-21d8-498c-b12a-efb43a2ef881',
'specParams': {}, 'volumeChain': [{'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
'93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212'}]}, {'device': 'unix', 'alias':
'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller':
'0',
'type': 'virtio-serial', 'port': '1'}}, {'device': 'unix', 'alias':
'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller':
'0',
'type': 'virtio-serial', 'port': '2'}}, {'device': 'spicevmc', 'alias':
'channel2', 'type': 'channel', 'address': {'bus': '0', 'controller':
'0',
'type': 'virtio-serial', 'port': '3'}}], 'clientIp': '', 'display':
'qxl'}}
Thread-40129::DEBUG::2013-01-09
07:49:09,148::libvirtvm::1751::vm.Vm::(_waitForIncomingMigrationFinish)
vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Waiting 300 seconds for end
of migration
Thread-40129::ERROR::2013-01-09
07:49:09,310::vm::696::vm.Vm::(_startUnderlyingVm)
vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::The vm start process failed
Thread-40129::DEBUG::2013-01-09
07:49:09,311::vm::1045::vm.Vm::(setDownStatus)
vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::Changed state to Down:
Domain
not found: no domain with matching uuid
'e00adf83-78d2-4f65-a259-0e01680f57fd'



thank you for your help!
jonathan




On 1/9/13 1:18 AM, "Haim Ateya"  wrote:


- Original Message -

From: "Jonathan Horne" 
To: "Jonathan Horne" , users@ovirt.org
Sent: Tuesday, January 8, 2013 10:26:52 PM
Subject: Re: [Users] ovirt 3.2 migrations failing





so far i see this, and it looks related:

this only means libvirt can't find the guest on the host, was it on
source or destination?

please run the following:

on source server:

- egrep 'vmMigrate|_setupVdsConnection' /var/log/vdsm/vdsm.log
- then, from output, get the Thread number of both commands and run
grep
again:
- egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log

on destination server:

- egrep 'vmMigrationCreate|prepareImage' /var/log/vdsm/vdsm.log
- then, from output, get the Thread number of both commands and run
grep
again:
- egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log

please paste it here.

Haim






Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 676, in _startUnderlyingVm
self._waitForIncomingMigrationFinish()
File "/usr/share/vdsm/libvirtvm.py", line 1757, in
_waitForIncomingMigrationFinish
self._connection.lookupByUUIDString(self.id),
File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",
line 111, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2682, in
lookupByUUIDString
if ret is None:raise libvirtError('virDomainLookupByUUIDString()
failed', conn=self)
libvirtError: Domain not found: no domain with matching uuid
'063c7cbe-c569-4df3-b9a7-6474c41d797e'





From: Jonathan Horne < jho...@skopos.us >
Date: Tuesday, January 8, 2013 2:15 PM
To: " users@ovirt.org " < users@ovirt.org >
Subject: [Users] ovirt 3.2 migrations failing







i just built up 2 nodes and a manager on 3.2 dreyou packages, and now
that i have a VM up and installed with rhev agent, the VM is unable
to migrate. the failure is pretty much immediate.


i don't know where to begin troubleshooting this, can someone help me
get going in the right direction? just let me know what logs are
appropriate and i will post them up.


thanks,
jonathan











Re: [Users] ovirt 3.2 migrations failing

2013-01-09 Thread Jonathan Horne
filter = [ \\"a%36090a028108c8e5dca0e950ae01a%\\", \\"r%.*%\\" ] }
>> global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }
>> backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b
>> --nosuffix --separator | -o
>> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
>> d668d949-42f5-49a3-b33f-0b174ac226c5' (cwd None)
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,081::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
>> ''; <rc> = 0
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,094::lvm::442::OperationMutex::(_reloadlvs) Operation 'lvm
>>reload
>> operation' released the operation mutex
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,106::misc::84::Storage.Misc.excCmd::(<lambda>) '/bin/dd
>> iflag=direct skip=4 bs=512
>> if=/dev/d668d949-42f5-49a3-b33f-0b174ac226c5/metadata count=1' (cwd
>>None)
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,125::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
>> '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00859049
>>s,
>> 59.6 kB/s\n'; <rc> = 0
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,126::misc::325::Storage.Misc::(validateDDBytes) err: ['1+0
>> records in', '1+0 records out', '512 bytes (512 B) copied, 0.00859049 s,
>> 59.6 kB/s'], size: 512
>> Thread-40129::INFO::2013-01-09
>> 07:49:09,126::image::344::Storage.Image::(getChain)
>> sdUUID=d668d949-42f5-49a3-b33f-0b174ac226c5
>> imgUUID=0a9e06f9-2e85-4410-930d-7e381128c212
>> chain=[]
>> Thread-40129::INFO::2013-01-09
>> 07:49:09,127::logUtils::44::dispatcher::(wrapper) Run and protect:
>> prepareImage, Return response: {'path':
>> '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'chain': [{'path':
>> '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID':
>> 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
>> '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
>> '0a9e06f9-2e85-4410-930d-7e381128c212'}]}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,128::task::1151::TaskManager.Task::(prepare)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::finished: {'path':
>> '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'chain': [{'path':
>> '/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-498c-b12a-efb43a2ef881', 'domainID':
>> 'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
>> '93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
>> '0a9e06f9-2e85-4410-930d-7e381128c212'}]}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,128::task::568::TaskManager.Task::(_updateState)
>> Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::moving from state preparing
>> -> state finished
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,128::resourceManager::830::ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources
>> {'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5': < ResourceRef
>> 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', isValid: 'True' obj:
>> 'None'>}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,129::resourceManager::864::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,129::resourceManager::557::ResourceManager::(releaseResource)
>> Trying to release resource
>>'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5'
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,130::resourceManager::573::ResourceManager::(releaseResource)
>> Released resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' (0
>>active
>> users)
>> Thread-40129::DEBUG::2013-01-09
>> 07:49:09,130::resourceManager::578::ResourceManager::(releaseResource)
>> 

Re: [Users] ovirt 3.2 migrations failing

2013-01-09 Thread Jeff Bailey
Thread-40129::DEBUG::2013-01-09
07:49:09,125::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
'1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.00859049 s,
59.6 kB/s\n'; <rc> = 0
Thread-40129::DEBUG::2013-01-09
07:49:09,126::misc::325::Storage.Misc::(validateDDBytes) err: ['1+0
records in', '1+0 records out', '512 bytes (512 B) copied, 0.00859049 s,
59.6 kB/s'], size: 512
Thread-40129::INFO::2013-01-09
07:49:09,126::image::344::Storage.Image::(getChain)
sdUUID=d668d949-42f5-49a3-b33f-0b174ac226c5
imgUUID=0a9e06f9-2e85-4410-930d-7e381128c212
chain=[]
Thread-40129::INFO::2013-01-09
07:49:09,127::logUtils::44::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'chain': [{'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
'93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212'}]}
Thread-40129::DEBUG::2013-01-09
07:49:09,128::task::1151::TaskManager.Task::(prepare)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::finished: {'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'chain': [{'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
'93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212'}]}
Thread-40129::DEBUG::2013-01-09
07:49:09,128::task::568::TaskManager.Task::(_updateState)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::moving from state preparing
-> state finished
Thread-40129::DEBUG::2013-01-09
07:49:09,128::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5': < ResourceRef
'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', isValid: 'True' obj:
'None'>}
Thread-40129::DEBUG::2013-01-09
07:49:09,129::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-40129::DEBUG::2013-01-09
07:49:09,129::resourceManager::557::ResourceManager::(releaseResource)
Trying to release resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5'
Thread-40129::DEBUG::2013-01-09
07:49:09,130::resourceManager::573::ResourceManager::(releaseResource)
Released resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' (0 active
users)
Thread-40129::DEBUG::2013-01-09
07:49:09,130::resourceManager::578::ResourceManager::(releaseResource)
Resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' is free, finding
out if anyone is waiting for it.
Thread-40129::DEBUG::2013-01-09
07:49:09,131::resourceManager::585::ResourceManager::(releaseResource) No
one is waiting for resource
'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', Clearing records.
Thread-40129::DEBUG::2013-01-09
07:49:09,131::task::957::TaskManager.Task::(_decref)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::ref 0 aborting False
Thread-40129::INFO::2013-01-09
07:49:09,131::clientIF::316::vds::(prepareVolumePath) prepared volume
path:
/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b
33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-
498c-b12a-efb43a2ef881
Thread-40128::DEBUG::2013-01-09
07:49:09,145::API::483::vds::(migrationCreate) Destination VM creation
succeeded
Thread-40129::DEBUG::2013-01-09
07:49:09,144::vm::672::vm.Vm::(_startUnderlyingVm)
vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::_ongoingCreations released
Thread-40128::DEBUG::2013-01-09
07:49:09,147::BindingXMLRPC::915::vds::(wrapper) return vmMigrationCreate
with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0,
'params': {'status': 'Migration Destination', 'acpiEnable': 'true',
'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'pid': '0', 'transparentHugePages': 'true', 'displaySecurePort': '-1',

Re: [Users] ovirt 3.2 migrations failing

2013-01-09 Thread Jonathan Horne
49:09,126::image::344::Storage.Image::(getChain)
sdUUID=d668d949-42f5-49a3-b33f-0b174ac226c5
imgUUID=0a9e06f9-2e85-4410-930d-7e381128c212
chain=[]
Thread-40129::INFO::2013-01-09
07:49:09,127::logUtils::44::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'chain': [{'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
'93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212'}]}
Thread-40129::DEBUG::2013-01-09
07:49:09,128::task::1151::TaskManager.Task::(prepare)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::finished: {'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'chain': [{'path':
'/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-
b33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8
-498c-b12a-efb43a2ef881', 'domainID':
'd668d949-42f5-49a3-b33f-0b174ac226c5', 'volumeID':
'93410a32-21d8-498c-b12a-efb43a2ef881', 'imageID':
'0a9e06f9-2e85-4410-930d-7e381128c212'}]}
Thread-40129::DEBUG::2013-01-09
07:49:09,128::task::568::TaskManager.Task::(_updateState)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::moving from state preparing
-> state finished
Thread-40129::DEBUG::2013-01-09
07:49:09,128::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5': < ResourceRef
'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', isValid: 'True' obj:
'None'>}
Thread-40129::DEBUG::2013-01-09
07:49:09,129::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-40129::DEBUG::2013-01-09
07:49:09,129::resourceManager::557::ResourceManager::(releaseResource)
Trying to release resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5'
Thread-40129::DEBUG::2013-01-09
07:49:09,130::resourceManager::573::ResourceManager::(releaseResource)
Released resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' (0 active
users)
Thread-40129::DEBUG::2013-01-09
07:49:09,130::resourceManager::578::ResourceManager::(releaseResource)
Resource 'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5' is free, finding
out if anyone is waiting for it.
Thread-40129::DEBUG::2013-01-09
07:49:09,131::resourceManager::585::ResourceManager::(releaseResource) No
one is waiting for resource
'Storage.d668d949-42f5-49a3-b33f-0b174ac226c5', Clearing records.
Thread-40129::DEBUG::2013-01-09
07:49:09,131::task::957::TaskManager.Task::(_decref)
Task=`d6ca414a-5a97-47a2-a2ec-ea3acd992aca`::ref 0 aborting False
Thread-40129::INFO::2013-01-09
07:49:09,131::clientIF::316::vds::(prepareVolumePath) prepared volume
path:
/rhev/data-center/fdab6642-8928-43d5-8ca1-b68792d70729/d668d949-42f5-49a3-b
33f-0b174ac226c5/images/0a9e06f9-2e85-4410-930d-7e381128c212/93410a32-21d8-
498c-b12a-efb43a2ef881
Thread-40128::DEBUG::2013-01-09
07:49:09,145::API::483::vds::(migrationCreate) Destination VM creation
succeeded
Thread-40129::DEBUG::2013-01-09
07:49:09,144::vm::672::vm.Vm::(_startUnderlyingVm)
vmId=`e00adf83-78d2-4f65-a259-0e01680f57fd`::_ongoingCreations released
Thread-40128::DEBUG::2013-01-09
07:49:09,147::BindingXMLRPC::915::vds::(wrapper) return vmMigrationCreate
with {'status': {'message': 'Done', 'code': 0}, 'migrationPort': 0,
'params': {'status': 'Migration Destination', 'acpiEnable': 'true',
'emulatedMachine': 'pc', 'afterMigrationStatus': 'Up',
'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard',
'pid': '0', 'transparentHugePages': 'true', 'displaySecurePort': '-1',
'timeOffset': 1, 'cpuType': 'SandyBridge', 'smp': '4', 'migrationDest':
'libvirt', 'custom': {}, 'vmType': 'kvm', 'spiceSslCipherSuite':
'DEFAULT', 'memSize': 4096, 'vmName': 'd0lpvf051', 'nice': '0',
'username': 'root', 'vmId': 'e00adf83-

Re: [Users] ovirt 3.2 migrations failing

2013-01-08 Thread Haim Ateya


- Original Message -
> From: "Jonathan Horne" 
> To: "Jonathan Horne" , users@ovirt.org
> Sent: Tuesday, January 8, 2013 10:26:52 PM
> Subject: Re: [Users] ovirt 3.2 migrations failing
> 
> 
> 
> 
> 
> so far i see this, and it looks related:

This only means libvirt can't find the guest on the host. Was it on the source or the destination?

please run the following:

on source server:

- egrep 'vmMigrate|_setupVdsConnection' /var/log/vdsm/vdsm.log 
- then, from output, get the Thread number of both commands and run grep again:
- egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log

on destination server:

- egrep 'vmMigrationCreate|prepareImage' /var/log/vdsm/vdsm.log
- then, from output, get the Thread number of both commands and run grep again:
- egrep 'Thread-$x|Thread-$y' /var/log/vdsm/vdsm.log

please paste it here.
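Scripted, the two-step grep above looks roughly like this sketch (the sample log lines and the local LOG path are stand-ins for a real /var/log/vdsm/vdsm.log):

```shell
# Tiny demo of the triage loop: find the threads that handled the migration
# calls, then pull each thread's full trace out of the log.
LOG="${LOG:-./vdsm.log}"                  # stand-in for /var/log/vdsm/vdsm.log
printf '%s\n' \
  'Thread-40128::DEBUG::... return vmMigrationCreate ...' \
  'Thread-40129::INFO::... Run and protect: prepareImage ...' \
  'Thread-40129::DEBUG::... unrelated detail for the same thread ...' > "$LOG"
# Step 1: which threads mention the migration entry points?
threads=$(egrep 'vmMigrationCreate|prepareImage' "$LOG" | grep -o 'Thread-[0-9]*' | sort -u)
# Step 2: build an alternation pattern and dump everything those threads logged.
pattern=$(echo $threads | tr ' ' '|')     # e.g. Thread-40128|Thread-40129
egrep "$pattern" "$LOG"
```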

Haim




> 
> 
> 
> Traceback (most recent call last):
> File "/usr/share/vdsm/vm.py", line 676, in _startUnderlyingVm
> self._waitForIncomingMigrationFinish()
> File "/usr/share/vdsm/libvirtvm.py", line 1757, in
> _waitForIncomingMigrationFinish
> self._connection.lookupByUUIDString(self.id),
> File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",
> line 111, in wrapper
> ret = f(*args, **kwargs)
> File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2682, in
> lookupByUUIDString
> if ret is None:raise libvirtError('virDomainLookupByUUIDString()
> failed', conn=self)
> libvirtError: Domain not found: no domain with matching uuid
> '063c7cbe-c569-4df3-b9a7-6474c41d797e'
> 
> 
> 
> 
> 
> From: Jonathan Horne < jho...@skopos.us >
> Date: Tuesday, January 8, 2013 2:15 PM
> To: " users@ovirt.org " < users@ovirt.org >
> Subject: [Users] ovirt 3.2 migrations failing
> 
> 
> 
> 
> 
> 
> 
> i just built up 2 nodes and a manager on 3.2 dreyou packages, and now
> that i have a VM up and installed with rhev agent, the VM is unable
> to migrate. the failure is pretty much immediate.
> 
> 
> i don't know where to begin troubleshooting this, can someone help me
> get going in the right direction? just let me know what logs are
> appropriate and i will post them up.
> 
> 
> thanks,
> jonathan
> 
> 
> 


Re: [Users] ovirt 3.2 migrations failing

2013-01-08 Thread Jonathan Horne
so far i see this, and it looks related:

Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 676, in _startUnderlyingVm
self._waitForIncomingMigrationFinish()
  File "/usr/share/vdsm/libvirtvm.py", line 1757, in 
_waitForIncomingMigrationFinish
self._connection.lookupByUUIDString(self.id),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 
111, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2682, in 
lookupByUUIDString
if ret is None:raise libvirtError('virDomainLookupByUUIDString() failed', 
conn=self)
libvirtError: Domain not found: no domain with matching uuid 
'063c7cbe-c569-4df3-b9a7-6474c41d797e'
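One quick way to tell which side lost the domain is to ask libvirt directly for the UUID from the traceback, on each host in turn (a sketch; it assumes virsh is present on the hypervisor and guards for machines where it isn't):

```shell
# Ask libvirt about the UUID from the traceback; virsh dominfo accepts a UUID.
UUID="063c7cbe-c569-4df3-b9a7-6474c41d797e"
if command -v virsh >/dev/null 2>&1; then
  virsh dominfo "$UUID" || echo "no domain with uuid $UUID on this host"
else
  echo "virsh not installed here; run this on each hypervisor"
fi
```

Running it on both the source and the destination narrows down where the migration actually died.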

From: Jonathan Horne <jho...@skopos.us>
Date: Tuesday, January 8, 2013 2:15 PM
To: "users@ovirt.org" <users@ovirt.org>
Subject: [Users] ovirt 3.2 migrations failing

i just built up 2 nodes and a manager on 3.2 dreyou packages, and now that i 
have a VM up and installed with rhev agent, the VM is unable to migrate. the 
failure is pretty much immediate.

i don't know where to begin troubleshooting this, can someone help me get going 
in the right direction?  just let me know what logs are appropriate and i will 
post them up.

thanks,
jonathan




[Users] ovirt 3.2 migrations failing

2013-01-08 Thread Jonathan Horne
i just built up 2 nodes and a manager on 3.2 dreyou packages, and now that i 
have a VM up and installed with rhev agent, the VM is unable to migrate. the 
failure is pretty much immediate.

i don't know where to begin troubleshooting this, can someone help me get going 
in the right direction?  just let me know what logs are appropriate and i will 
post them up.

thanks,
jonathan

