Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Mark Wu

On Tue 05 Mar 2013 04:52:05 AM CST, Itamar Heim wrote:

On 04/03/2013 19:03, Patrick Hurrelmann wrote:

Hi list,

I tested the upcoming CentOS 6.4 release with my lab installation of
oVirt 3.1 and it fails to play well.

Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type
Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are
both version 3.1. oVirt 3.1 was installed via Dreyou's repo.

In CentOS 6.3 all is fine and the following rpms are installed:

libvirt.x86_64                0.9.10-21.el6_3.8
libvirt-client.x86_64         0.9.10-21.el6_3.8
libvirt-lock-sanlock.x86_64   0.9.10-21.el6_3.8
libvirt-python.x86_64         0.9.10-21.el6_3.8
vdsm.x86_64                   4.10.0-0.46.15.el6
vdsm-cli.noarch               4.10.0-0.46.15.el6
vdsm-python.x86_64            4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch            4.10.0-0.46.15.el6
qemu-kvm.x86_64               2:0.12.1.2-2.295.el6_3.10


uname -a
Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6
03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh cpu capabilities on 6.3:
 <cpu>
   <arch>x86_64</arch>
   <model>Nehalem</model>
   <vendor>Intel</vendor>
   <topology sockets='1' cores='4' threads='1'/>
   <feature name='rdtscp'/>
   <feature name='pdcm'/>
   <feature name='xtpr'/>
   <feature name='tm2'/>
   <feature name='est'/>
   <feature name='smx'/>
   <feature name='vmx'/>
   <feature name='ds_cpl'/>
   <feature name='monitor'/>
   <feature name='dtes64'/>
   <feature name='pbe'/>
   <feature name='tm'/>
   <feature name='ht'/>
   <feature name='ss'/>
   <feature name='acpi'/>
   <feature name='ds'/>
   <feature name='vme'/>
 </cpu>

and corresponding cpu features from vdsClient:

cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
   cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
   tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
   pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
   dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
   pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
   flexpriority,ept,vpid,model_Conroe,model_Penryn,
   model_Nehalem
cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2394.132


So the system was updated to 6.4 using the continuous release repo.

Installed rpms after update to 6.4 (6.3 + CR):

libvirt.x86_64                0.10.2-18.el6
libvirt-client.x86_64         0.10.2-18.el6
libvirt-lock-sanlock.x86_64   0.10.2-18.el6
libvirt-python.x86_64         0.10.2-18.el6
vdsm.x86_64                   4.10.0-0.46.15.el6
vdsm-cli.noarch               4.10.0-0.46.15.el6
vdsm-python.x86_64            4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch            4.10.0-0.46.15.el6
qemu-kvm.x86_64               2:0.12.1.2-2.355.el6_4_4.1


uname -a
Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27
06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh capabilities on 6.4:
 <cpu>
   <arch>x86_64</arch>
   <model>Nehalem</model>
   <vendor>Intel</vendor>
   <topology sockets='1' cores='4' threads='1'/>
   <feature name='rdtscp'/>
   <feature name='pdcm'/>
   <feature name='xtpr'/>
   <feature name='tm2'/>
   <feature name='est'/>
   <feature name='smx'/>
   <feature name='vmx'/>
   <feature name='ds_cpl'/>
   <feature name='monitor'/>
   <feature name='dtes64'/>
   <feature name='pbe'/>
   <feature name='tm'/>
   <feature name='ht'/>
   <feature name='ss'/>
   <feature name='acpi'/>
   <feature name='ds'/>
   <feature name='vme'/>
 </cpu>

and corresponding cpu features from vdsClient:

cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
   cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
   tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
   pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
   dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
   pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
   flexpriority,ept,vpid,model_coreduo,model_Conroe
cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2394.098

Full outputs of virsh capabilities and vdsCaps are attached. The only
difference I can see is that 6.4 exposes one additional cpu flag (sep),
and this seems to break the cpu recognition of vdsm.
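
As background: vdsm derives the model_* entries in cpuFlags from libvirt's cpu
map (/usr/share/libvirt/cpu_map.xml); a model is reported when all of its
required features are present in the host's flag list, so a change in the
map's layout between libvirt versions can make the detection fall back to
older models even though the host flags barely changed. The snippet below is
only a rough illustration of that kind of subset check (it is not vdsm's
actual code, and it assumes the usual EL6 cpu_map.xml layout, with <model>
entries that carry <feature name='...'/> children and a nested <model>
reference for inheritance):

  # Illustrative sketch only: report which cpu_map models the host flags satisfy.
  import xml.etree.ElementTree as ET

  CPU_MAP = '/usr/share/libvirt/cpu_map.xml'

  def supported_models(host_flags, cpu_map=CPU_MAP):
      tree = ET.parse(cpu_map)
      models = {}
      for model in tree.findall('arch[@name="x86"]/model'):
          feats = set(f.get('name') for f in model.findall('feature'))
          parent = model.find('model')   # inheritance reference, if any
          if parent is not None:
              # assumes base models appear before the models that extend them
              feats |= models.get(parent.get('name'), set())
          models[model.get('name')] = feats
      # a model is usable when its feature set is a subset of the host's flags
      return sorted(name for name, feats in models.items()
                    if feats <= set(host_flags))

On the 6.3 host this sort of logic lines up with the model_Conroe,
model_Penryn and model_Nehalem entries shown in the vdsClient output above.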

Does anyone have hints on how to resolve or debug this further? What more
information can I provide to help?

Best regards
Patrick



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



seems like a vdsm issue - can you check if you have this patch (not
sure it's related):

commit 558994f8ffe030acd1b851dfd074f3417681337b
Author: Mark Wu 

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 04.03.2013 21:52, Itamar Heim wrote:
 On 04/03/2013 19:03, Patrick Hurrelmann wrote:
 snip

[Users] Migration issue Asking For Help

2013-03-05 Thread xianghuadu
hi all
 I recently encountered a problem while evaluating oVirt.
During VM migration the following error occurs: "Migration failed due to Error: Could
not connect to peer host."
My environment is:
KVM: Dell 2950 * 2
storage: iscsi-target
vm system: Windows 2008 R2
ovirt-log:
2013-03-05 14:52:23,074 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-42) [323d7ca8] VM centos 
4cc23d92-8667-4710-9714-a67c0d178fa0 moved from MigratingFrom --> Up
2013-03-05 14:52:23,076 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-42) [323d7ca8] adding VM 
4cc23d92-8667-4710-9714-a67c0d178fa0 to re-run list
2013-03-05 14:52:23,079 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-42) [323d7ca8] Rerun vm 
4cc23d92-8667-4710-9714-a67c0d178fa0. Called from vds 205
2013-03-05 14:52:23,085 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-3-thread-49) [323d7ca8] START, MigrateStatusVDSCommand(HostName = 205, 
HostId = 4e7d1ae2-824e-11e2-bb4c-00188be4de29, 
vmId=4cc23d92-8667-4710-9714-a67c0d178fa0), log id: 618085d
2013-03-05 14:52:23,131 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-49) [323d7ca8] Failed in MigrateStatusVDS method
2013-03-05 14:52:23,132 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-49) [323d7ca8] Error code noConPeer and error message 
VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS, error = 
Could not connect to peer VDS
2013-03-05 14:52:23,134 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-49) [323d7ca8] Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value 
 Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus   Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 10
mMessage  Could not connect to peer VDS


2013-03-05 14:52:23,138 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-49) [323d7ca8] HostName = 205
2013-03-05 14:52:23,139 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] 
(pool-3-thread-49) [323d7ca8] Command MigrateStatusVDS execution failed. 
Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Could not connect to peer VDS
2013-03-05 14:52:23,141 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-3-thread-49) [323d7ca8] FINISH, MigrateStatusVDSCommand, log 

vdsm-log:
Thread-5969::DEBUG::2013-03-05 
14:52:21,312::libvirtvm::283::vm.Vm::(_getDiskLatency) 
vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Disk vda latency not available
Thread-5622::ERROR::2013-03-05 14:52:22,890::vm::200::vm.Vm::(_recover) 
vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to destroy remote VM
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 198, in _recover
    self.destServer.destroy(self._vm.id)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
Thread-5622::ERROR::2013-03-05 14:52:22,909::vm::283::vm.Vm::(run) 
vmId=`4cc23d92-8667-4710-9714-a67c0d178fa0`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 268, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 443, in _startUnderlyingMigration
    response = self.destServer.migrationCreate(self._machineParams)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1253, in request
    return self._parse_response(h.getfile(), sock)
  File "/usr/lib64/python2.6/xmlrpclib.py", line 1382, in _parse_response
    response = file.read(1024)
  File "/usr/lib64/python2.6/socket.py", line 383, in read
    data = self._sock.recv(left)
  File "/usr/lib64/python2.6/ssl.py", line 215, in recv
    return self.read(buflen)
  File "/usr/lib64/python2.6/ssl.py", line 136, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
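
The traceback shows the source host's vdsm timing out while speaking XML-RPC
over SSL to the destination vdsm (migrationCreate), so a useful first check is
whether the destination's vdsm port is reachable from the source host and the
TLS handshake completes with the hosts' certificates. Below is a minimal
sketch of such a check; port 54321 is vdsm's default listening port, while the
/etc/pki/vdsm paths and the destination hostname are assumptions to adjust for
your setup.

  # Minimal reachability / TLS handshake check from the source host to the peer vdsm.
  import socket
  import ssl

  DEST = 'destination-host.example.com'   # placeholder: the migration target
  PORT = 54321                            # vdsm's default XML-RPC port

  sock = socket.create_connection((DEST, PORT), timeout=10)
  wrapped = ssl.wrap_socket(
      sock,
      keyfile='/etc/pki/vdsm/keys/vdsmkey.pem',      # assumed default vdsm paths
      certfile='/etc/pki/vdsm/certs/vdsmcert.pem',
      ca_certs='/etc/pki/vdsm/certs/cacert.pem',
      cert_reqs=ssl.CERT_REQUIRED,
  )
  print('TLS handshake with %s:%d succeeded' % (DEST, PORT))
  wrapped.close()

If this hangs or fails, the problem is most likely network, firewall, or
certificate trust between the two hosts rather than anything on the engine
side.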

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Dan Kenigsberg
On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
 On 04.03.2013 21:52, Itamar Heim wrote:
  On 04/03/2013 19:03, Patrick Hurrelmann wrote:
  snip

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 05.03.2013 10:54, Dan Kenigsberg wrote:
 On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
 On 04.03.2013 21:52, Itamar Heim wrote:
 On 04/03/2013 19:03, Patrick Hurrelmann wrote:
  snip

Re: [Users] oVirt 3.2: impossible to delete a snapshot?

2013-03-05 Thread Gianluca Cecchi
On Thu, Feb 28, 2013 at 11:47 AM, noc wrote:

 I'm also running 3.2 stable on F18 and can delete snapshots, but like Gianluca
 I can't clone. I get the same blank window. I saw the fix, but since it's a
 Java class change it would need a rebuild of 3.2.
 Any idea when a wrap up of several bugs found and fixed will be up at
 ovirt.org?

 Thanks,

 Joop

In the meantime my rebuilt packages for Fedora 18 with the fix
http://gerrit.ovirt.org/#/c/11254/
which, from my experience, provide:
- a fix for clone from snapshot
- a fix for seeing the details of a snapshot in the details pane (disks, NICs, etc.)
can be downloaded here...
They are the same version as the official ones, so you have to run
rpm -Uvh --force *.rpm
(leave ovirt-engine-setup-plugin-allinone-3.2.0-4.fc18.noarch.rpm out of the
list in case you don't have it installed)

https://www.dropbox.com/sh/24p620taye7v394/plnq7cQWe5?m

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Dan Kenigsberg
On Tue, Mar 05, 2013 at 11:01:53AM +0100, Patrick Hurrelmann wrote:
 On 05.03.2013 10:54, Dan Kenigsberg wrote:
  On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
  On 04.03.2013 21:52, Itamar Heim wrote:
  On 04/03/2013 19:03, Patrick Hurrelmann wrote:
  snip

Re: [Users] Migration issue Asking For Help

2013-03-05 Thread Mark Wu

On Tue 05 Mar 2013 05:28:17 PM CST, xianghuadu wrote:

 snip

Re: [Users] UI lack of ui-plugins

2013-03-05 Thread Yaniv Bronheim
Do you run it after full installation or by standalone.sh ?

In order to bypass those errors in a full installation do:
1. Create the file /usr/share/ovirt-engine/conf/engine.conf.defaults
2. Add the following properties to the file:
ENGINE_USR=[the username which runs the engine]
ENGINE_ETC=/etc/ovirt-engine
3. Restart the JBoss service

If you did the three steps and engine still doesn't start, please attach your 
full engine.log

Yaniv.

- Original Message -
From: bigclouds bigclo...@163.com
To: Oved Ourfalli ov...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, March 5, 2013 3:02:31 AM
Subject: Re: [Users] UI lack of ui-plugins




hi Oved,
the browser keeps waiting forever when I go to the user portal
http://192.168.5.109:8700/UserPortal/org.ovirt.engine.ui.userportal.UserPortal/UserPortal.html

and the webadmin URL shows nothing at all:
http://192.168.5.109:8700/webadmin/webadmin/WebAdmin.html

is http://www.ovirt.org/Building_oVirt_engine perhaps not up to date?
thanks.







At 2013-03-04 18:20:20,Oved Ourfalli ov...@redhat.com wrote:


- Original Message -
 From: bigclouds bigclo...@163.com
 To: users@ovirt.org
 Sent: Monday, March 4, 2013 12:07:53 PM
 Subject: [Users] UI lack of ui-plugins
 
 
 
 at last I built the engine code successfully, then I started jboss-as.
 I go to http://host:8700; when I click the user or admin portal, an error occurs.
 
 
 1. it reports that ui-plugins is missing.
 2. $OVIRT_HOME/backend/manager/conf/engine.conf.defaults
 ENGINE_USR=username
 ENGINE_ETC=/etc/ovirt-engine
 I do not know what ENGINE_USR really is?
 thanks.
 

ENGINE_USR should be something like /usr/share/ovirt-engine
The warning about the ui-plugins directory is probably because it doesn't 
exist (see bug https://bugzilla.redhat.com/show_bug.cgi?id=895933), but that 
shouldn't affect anything besides loading UI plugins you wrote/downloaded, so 
the admin portal should work properly.

As far as I know, the ENGINE_USR is currently used only in order to detect the 
ui-plugins directory.

What happens when you just browse to
http://host:8700/webadmin
?

Does it work?

 
 error -msg:
 
 
 
 2013-03-04 17:39:10,182 INFO
 [org.ovirt.engine.core.bll.DbUserCacheManager]
 (DefaultQuartzScheduler_Worker-1) Start refreshing all users data
 2013-03-04 17:39:10,338 INFO
 [org.ovirt.engine.core.vdsbroker.ResourceManager] (MSC service
 thread 1-1) Finished initializing ResourceManager
 2013-03-04 17:39:10,365 INFO
 [org.ovirt.engine.core.bll.AsyncTaskManager] (MSC service thread
 1-1) Initialization of AsyncTaskManager completed successfully.
 2013-03-04 17:39:10,384 INFO
 [org.ovirt.engine.core.bll.OvfDataUpdater] (MSC service thread 1-1)
 Initialization of OvfDataUpdater completed successfully.
 2013-03-04 17:39:10,388 INFO
 [org.ovirt.engine.core.bll.VdsLoadBalancer] (MSC service thread 1-1) Start scheduling to
 enable vds load balancer
 2013-03-04 17:39:10,394 INFO
 [org.ovirt.engine.core.bll.VdsLoadBalancer] (MSC service thread 1-1)
 Finished scheduling to enable vds load balancer
 2013-03-04 17:39:10,410 INFO
 [org.ovirt.engine.core.bll.network.MacPoolManager]
 (pool-10-thread-1) Start initializing MacPoolManager
 2013-03-04 17:39:10,450 INFO
 [org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC
 service thread 1-1) Init VM custom properties utilities
 2013-03-04 17:39:10,461 INFO
 [org.ovirt.engine.core.bll.network.MacPoolManager]
 (pool-10-thread-1) Finished initializing MacPoolManager
 2013-03-04 17:39:10,470 INFO [org.jboss.as] (MSC service thread 1-2)
 JBAS015951: Admin console listening on http://127.0.0.1:9990
 2013-03-04 17:39:10,471 INFO [org.jboss.as] (MSC service thread 1-2)
 JBAS015874: JBoss AS 7.1.1.Final "Brontes" started in 11961ms -
 Started 507 of 594 services (86 services are passive or on-demand)
 2013-03-04 17:39:10,560 INFO [org.jboss.as.server]
 (DeploymentScanner-threads - 2) JBAS018559: Deployed engine.ear
 2013-03-04 17:42:22,716 ERROR [org.jboss.remoting.remote.connection]
 (Remoting ovirtdev read-1) JBREM000200: Remote connection failed:
 java.io.IOException: Received an invalid message length of
 1195725856
 2013-03-04 17:42:56,702 ERROR [org.jboss.remoting.remote.connection]
 (Remoting ovirtdev read-1) JBREM000200: Remote connection failed:
 java.io.IOException: Received an invalid message length of
 1195725856
 2013-03-04 17:45:14,258 INFO [org.hibernate.validator.util.Version]
 (http--0.0.0.0-8700-1) Hibernate Validator 4.2.0.Final
 2013-03-04 17:45:14,385 INFO
 [org.ovirt.engine.core.utils.LocalConfig] (http--0.0.0.0-8700-1)
 Loaded file
 /mnt/ovirt-engine/backend/manager/conf/engine.conf.defaults.
 2013-03-04 17:45:14,386 INFO
 [org.ovirt.engine.core.utils.LocalConfig] (http--0.0.0.0-8700-1)
 Loaded file /etc/sysconfig/ovirt-engine.
 2013-03-04 17:45:14,392 INFO
 [org.ovirt.engine.core.utils.LocalConfig] (http--0.0.0.0-8700-1)
 Value of property ENGINE_DEBUG_ADDRESS is 0.0.0.0:8787.
 2013-03-04 17:45:14,393 INFO
 [org.ovirt.engine.core.utils.LocalConfig] (http--0.0.0.0-8700-1)
 Value of property 

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 05.03.2013 11:14, Dan Kenigsberg wrote:
snip

 My version of vdsm as stated by Dreyou:
 v 4.10.0-0.46 (.15), built from
 b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)

 I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged to
 3.1 Branch?

 I applied that patch locally and restarted vdsmd but this does not
 change anything. Supported cpu is still as low as Conroe instead of
 Nehalem. Or is there more to do than patching libvirtvm.py?

 What is libvirt's opinion about your cpu compatibility?

   virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')

  If you do not get "Host CPU is a superset of CPU described in bla", then
  the problem is within libvirt.

 Dan.

 Hi Dan,

 virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
 Host CPU is a superset of CPU described in /dev/fd/63

 So libvirt obviously is fine. Anything else would have surprised
 me, as virsh capabilities seemed correct anyway.
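
The same comparison can also be driven from Python through the libvirt
bindings (libvirt-python is present on vdsm hosts), which is convenient when
experimenting close to vdsm. This is only a sketch of the equivalent call; the
XML mirrors the one fed to virsh above:

  # Rough Python equivalent of `virsh -r cpu-compare <(echo ...)`.
  import libvirt

  CPU_XML = ("<cpu match='minimum'>"
             "<model>Nehalem</model>"
             "<vendor>Intel</vendor>"
             "</cpu>")

  conn = libvirt.openReadOnly(None)        # read-only connection, like `virsh -r`
  result = conn.compareCPU(CPU_XML, 0)
  names = {libvirt.VIR_CPU_COMPARE_INCOMPATIBLE: 'incompatible',
           libvirt.VIR_CPU_COMPARE_IDENTICAL: 'identical',
           libvirt.VIR_CPU_COMPARE_SUPERSET: 'host is a superset'}
  print(names.get(result, 'unknown: %r' % result))
  conn.close()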
 
 So maybe, just maybe, libvirt has changed their cpu_map, a map that
 ovirt-3.1 had a bug reading.
 
 Would you care to apply http://gerrit.ovirt.org/5035 to see if this is
 it?
 
 Dan.

Hi Dan,

success! Applying that patch made the cpu recognition work again. The
cpu type in admin portal shows again as Nehalem. Output from getVdsCaps:

   cpuCores = 4
   cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,
  mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,
  ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,
  arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
  aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,
  ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,
  dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,
  model_Conroe,model_coreduo,model_core2duo,model_Penryn,
  model_n270
   cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
   cpuSockets = 1
   cpuSpeed = 2393.769


I compared libvirt's cpu_map.xml on CentOS 6.3 and CentOS 6.4 and
indeed they differ in large parts. So this patch should probably
be merged to the 3.1 branch? I will contact Dreyou and request that this
patch also be included in his builds. I guess otherwise there will
be quite some fallout once people start picking CentOS 6.4 for oVirt 3.1.
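
For anyone wanting to reproduce that comparison, one way is to dump each
model's feature list from both maps and diff the two outputs. The helper
below is only a sketch (the script name is hypothetical and the path is the
EL6 default), not the method used above:

  # Print "model: feature,feature,..." per x86 model so two cpu maps can be diffed.
  import sys
  import xml.etree.ElementTree as ET

  def dump(path):
      tree = ET.parse(path)
      for model in tree.findall('arch[@name="x86"]/model'):
          feats = sorted(f.get('name') for f in model.findall('feature'))
          print('%s: %s' % (model.get('name'), ','.join(feats)))

  if __name__ == '__main__':
      dump(sys.argv[1])   # e.g. python dump_cpu_map.py /usr/share/libvirt/cpu_map.xml

Run it against the 6.3 and 6.4 copies of /usr/share/libvirt/cpu_map.xml and
diff the results to see which models gained or lost features.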

Thanks again and best regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt node power off

2013-03-05 Thread Omer Frenkel


- Original Message -
 From: Jakub Bittner j.bitt...@nbu.cz
 To: users@ovirt.org
 Sent: Tuesday, March 5, 2013 2:07:34 PM
 Subject: [Users] Ovirt node power  off
 
 Hello,
 
 we found a strange issue. We have 2 nodes, node1 and node2. I created a
 virtual server called Debian with High Availability and started it on
 node2. Then I unplugged the power from node2 (physically). oVirt management
 found that node2 is down, but the VM called Debian which was running on it
 (the powered-off node2) is still shown as active in oVirt management, and I can
 neither turn it off nor switch node2 to maintenance. Is there any way
 to force the oVirt management console to shut down the VM?
 

oVirt needs power management (PM) configured on the host in order to handle these 
situations.
Do you have PM configured? If you have, the node should be automatically 
restarted after a few minutes,
and the VM will restart on the second host.

If not, and if you are sure the host is indeed powered off and the VM is not 
running there,
you can right-click the failed host and click "mark host as rebooted";
that should fix the problem (the VM will move to Down and should be auto-started on 
the second host).

 
 Thank you for help.
 
 Jakub Bittner
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Migration issue Asking For Help

2013-03-05 Thread xianghuadu
hi mark
   can I communicate with you by email in Chinese?




xianghuadu

From: Mark Wu
Date: 2013-03-05 18:33
To: xianghuadu
CC: users
Subject: Re: [Users] Migration issue Asking For Help
On Tue 05 Mar 2013 05:28:17 PM CST, xianghuadu wrote:
 snip

Re: [Users] clone vm from snapshot problem in 3.2

2013-03-05 Thread Alissa Bonas


- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: Alissa Bonas abo...@redhat.com
 Cc: users users@ovirt.org, Liron Aravot lara...@redhat.com
 Sent: Monday, March 4, 2013 4:33:40 PM
 Subject: Re: [Users] clone vm from snapshot problem in 3.2
 
 On Mon, Mar 4, 2013 at 1:19 PM, Alissa Bonas wrote:
 
  
   1)
   OK, if I clone a powered on VM from a snapshot with two disks
   and
   here
   you can find clone in action with webadmin gui and iotop on node
   that
   shows they are cloning in parallel.. well!
   https://docs.google.com/file/d/0BwoPbcrMv8mvZ3lzY0l1MDc5OVE/edit?usp=sharing
  
   The VM is a slackware 14 32bit with virtio disk that I obtained
   from
   a
   virt-v2v from CentOS 6.3+Qemu/KVM
   The problem is that the cloned VM recognizes the disks in
   reversed
   order
  
    See these images, where sl1432 is the master and slclone is the clone.
   
    The disk layout in the details pane looks equal, with the boot disk being the one
    that appears second, but the master boots OK while the clone does not.
    The disks are swapped.
  
   Master VM disk details:
   https://docs.google.com/file/d/0BwoPbcrMv8mvSWNVNFI4bHg4Umc/edit?usp=sharing
  
   Clone VM disks details:
   https://docs.google.com/file/d/0BwoPbcrMv8mvM1N0bVcyNlFPS1U/edit?usp=sharing
  
   Page with the two consoles where you can see that vda of master
   becomes vdb of clone and vice-versa:
   https://docs.google.com/file/d/0BwoPbcrMv8mveFpESEs5V1dUTFE/edit?usp=sharing
  
   Can I swap again in some way? In VMware for example you can see
   and
   edit SCSI IDs of disks...
 
  Can you please provide the engine logs where the boot
  success/failure of master and clone can be seen?
 
 OK.
 here it is:
 https://docs.google.com/file/d/0BwoPbcrMv8mva2Z2dUNJTWlCWHM/edit?usp=sharing
 
 Starting point 15:16 both powered off.
 First message with 15:17 is boot of sl1432b that is the master and
 boots ok.
 
 First message with 15:21 is boot of slclone that is the clone.
 
 The slackware OS uses lilo as boot loader and on master it is
 configured this way at the moment:
 
 root@sl1432b:~# grep -v ^# /etc/lilo.conf
 append=" vt.default_utf8=0"
 boot = /dev/vda
 
   bitmap = /boot/slack.bmp
   bmp-colors = 255,0,255,0,255,0
   bmp-table = 60,6,1,16
   bmp-timer = 65,27,0,255
 
 
 prompt
 timeout = 50
 change-rules
   reset
 vga = normal
 
 disk=/dev/vda bios=0x80 max-partitions=7
 
 image = /boot/vmlinuz
   root = /dev/vda2
   initrd = /boot/initrd.gz
   label = Linux
   read-only
 
 
 
 The disk=... entry was added to be able to support booting from a device
 of type vda in Slackware, which doesn't support it out of the box.

Hi,

Thanks for additional information.
The engine log actually shows that slclone started ok:
VM slclone c6c56d41-d70d-4b9b-a1cb-8b0c097b89a0 moved from PoweringUp --> Up
Can you explain what problem you are experiencing in that VM?
Also, could you provide the vdsm log from the same timeframe?

 Gianluca
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Dan Kenigsberg
On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
 snip

Thank you for reporting this issue and verifying its fix.

I'm not completely sure that we should keep maintaining the ovirt-3.1
branch upstream - but a build destined for el6.4 must have it.

If you believe we should release a fix version for 3.1, please verify
that http://gerrit.ovirt.org/12723 has no ill effects.

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 - engine configuration problem

2013-03-05 Thread Piotr Szubiakowski



On 28/02/2013 18:10, Piotr Szubiakowski wrote:




- Original Message -

From: Piotr Szubiakowski piotr.szubiakow...@nask.pl
To: Eli Mesika emes...@redhat.com
Cc: users@ovirt.org
Sent: Thursday, February 28, 2013 5:56:30 PM
Subject: Re: [Users] oVirt 3.1 - engine configuration problem



Yes, it can be.
Only set
ENGINEEARLib to the place where the EAR is (under the JBOSS_HOME
deployments dir) and
CAEngineKey to point to the certificate key location.

the other entries are not related


Hi,
I set these variables using psql command:

psql ovirtdb postgres -c "update vdc_options set option_value =
'/opt/jboss/jboss-as-7.1.1.Final/standalone/deployments/engine.ear'
where option_name = 'ENGINEEARLib';"
psql ovirtdb postgres -c "update vdc_options set option_value =
'/opt/jboss/CA/ca/private/ca.pem' where option_name = 'CAEngineKey';"

in database looks like there is everything fine:

ovirtdb=# select option_name, option_value from vdc_options where
option_name IN ('CAEngineKey', 'ENGINEEARLib');
  option_name  |                           option_value
---------------+--------------------------------------------------------------------
 CAEngineKey   | /opt/jboss/CA/ca/private/ca.pem
 ENGINEEARLib  | /opt/jboss/jboss-as-7.1.1.Final/standalone/deployments/engine.ear
(2 rows)

However the problem still occurs. I set debug log level. The jboss

Did you restart the oVirt service?


Yes I restarted the oVirt service.


you ignored the version field - set it to 'general'.


Hi,
I set the option to 'general' and the problem still occurs. I decided to 
install oVirt from RPM on Fedora and compare the configuration. Many 
thanks for the help :-)


Regards,
Piotr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt 3.1 + iscsi + snapshot

2013-03-05 Thread Alex Leonhardt
Hiya,

I am testing live snapshotting at the moment, and my VM is being paused every
time I try to do a snapshot - it won't recover either, once the snapshot is
complete.

This is what I see on the webadmin console :

VM icinga-clone has paused due to storage permissions problem.

Any hints ?

When I stop the VM, then start again, all is good.

Thanks
Alex

-- 

| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Configure spice plugin for wan

2013-03-05 Thread Vinzenz Feenstra

Hi,

On 03/04/2013 06:59 PM, Gianluca Cecchi wrote:


One problem was figuring out where to put the .ini file.
For now I put it where the python script is called from, and the service starts now.
But no IP and no applications are populated, even after setting the service 
to auto and restarting the win7 VM.

Where should I look now?


Do you have the virtio-serial drivers installed?
If not please do so.
Otherwise you might try to restart vdsmd on the host.


Thanks
Gianluca



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Regards,

Vinzenz Feenstra | Senior Software Engineer
Red Hat Engineering Virtualization R&D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.1 + iscsi + snapshot

2013-03-05 Thread Alissa Bonas


- Original Message -
 From: Alex Leonhardt alex.t...@gmail.com
 To: oVirt Mailing List users@ovirt.org
 Sent: Tuesday, March 5, 2013 6:12:08 PM
 Subject: [Users] ovirt 3.1 + iscsi + snapshot
 
 
 
 
 
 
 Hiya,
 
 Am testing live snapshot-ing at the moment, and my VM is being paused
 every time I try to do a snapshot - it wont recover either, once the
 snapshot is complete.
 
 This is what I see on the webadmin console :
 
 VM icinga-clone has paused due to storage permissions problem.
 
 Any hints ?
 
 When I stop the VM, then start again, all is good.

Hi,
Please attach engine+vdsm+libvirt logs with the timeframe when you perform the 
live snapshot and the failure.
thanks

 
 Thanks
 Alex
 
 
 
 
 --
 
 
 
 | RHCE | Senior Systems Engineer | www.vcore.co |
 | www.vsearchcloud.com |
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Unable to get managed connection for java:/ENGINEDataSource

2013-03-05 Thread Juan Hernandez

On 03/05/2013 05:24 PM, Dennis Böck wrote:

Dear oVirt-Users,

since I had a typo in my engine-setup I did an engine-cleanup, a reboot, and
started again.
Unfortunately now the engine doesn't come up any more. Obviously there are 
problems accessing the database:

2013-03-05 17:13:08,078 ERROR [org.ovirt.engine.core.bll.Backend] (MSC service 
thread 1-32) Error in getting DB connection. The database is inaccessible. 
Original exception is: DataAccessResourceFailureException: Error retreiving 
database metadata; nested exception is 
org.springframework.jdbc.support.MetaDataAccessException: Could not get 
Connection for extracting meta data; nested exception is 
org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC 
Connection; nested exception is java.sql.SQLException: 
javax.resource.ResourceException: IJ000453: Unable to get managed connection 
for java:/ENGINEDataSource

I already tried:
yum erase ovirt-engine
yum erase postgresql-server
and installed it again, but it didn't help.

Does anyone have an idea?

Best regards and thanks in advance
Dennis


In the /etc/sysconfig/ovirt-engine file you should have the details of 
the connection to the database, in particular you should have something 
like this:


  ENGINE_DB_URL=jdbc:postgresql://localhost:5432/engine

Are you using localhost as in this example? If that is the case then 
check that the database is listening in right address and port:


  # netstat --listen --tcp --numeric --program | grep postgres
  tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      10424/postgres


Then also check that connection to the database is working:

  psql --host localhost --port 5432 --user engine --password engine
  Password for user engine:
  psql (9.2.3)
  Type "help" for help.

  engine=>

You may also run a simple query to check that the tables are there:

  engine=> select count(*) from vdc_options;
   count
  -------
     346
  (1 row)

Hopefully, if there is an error, the output of these commands will give 
you a hint.
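
If the connection test fails, a couple of further checks may help (a sketch, assuming the default PostgreSQL layout on EL6):

  # Is the database server running at all?
  service postgresql status

  # Is it configured to listen where the engine expects it?
  grep listen_addresses /var/lib/pgsql/data/postgresql.conf

  # Do the authentication rules allow the engine user to connect?
  grep -v '^#' /var/lib/pgsql/data/pg_hba.conf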


--
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 
3ºD, 28016 Madrid, Spain

Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-05 Thread Rob Zwissler
On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
 Rob,

 It seems that a bug in vdsm code is hiding the real issue.
 Could you do a

 sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py

 restart vdsmd, and retry?

 Bala, would you send a patch fixing the ParseError issue (and adding a
 unit test that would have caught it on time)?


 Regards,
 Dan.

Hi Dan, thanks for the quick response.  I did that, and here's what I
get now from the vdsm.log:

MainProcess|Thread-51::DEBUG::2013-03-05
10:03:40,723::misc::84::Storage.Misc.excCmd::(lambda)
'/usr/sbin/gluster --mode=script volume info --xml' (cwd None)
Thread-52::DEBUG::2013-03-05
10:03:40,731::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state init -
state preparing
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::41::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-52::INFO::2013-03-05
10:03:40,732::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'4af726ea-e502-4e79-a47c-6c8558ca96ad':
{'delay': '0.00584101676941', 'lastCheck': '0.2', 'code': 0, 'valid':
True}, 'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay':
'0.0503160953522', 'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::1151::TaskManager.Task::(prepare)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::finished:
{'4af726ea-e502-4e79-a47c-6c8558ca96ad': {'delay': '0.00584101676941',
'lastCheck': '0.2', 'code': 0, 'valid': True},
'fc0d44ec-528f-4bf9-8913-fa7043daf43b': {'delay': '0.0503160953522',
'lastCheck': '0.2', 'code': 0, 'valid': True}}
Thread-52::DEBUG::2013-03-05
10:03:40,732::task::568::TaskManager.Task::(_updateState)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::moving from state
preparing - state finished
Thread-52::DEBUG::2013-03-05
10:03:40,732::resourceManager::830::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::resourceManager::864::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-52::DEBUG::2013-03-05
10:03:40,733::task::957::TaskManager.Task::(_decref)
Task=`aa1990a1-8016-4337-a8cd-1b62976032a4`::ref 0 aborting False
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,742::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,743::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`8555382a-b3fa-4a4b-a61e-a80da47478a5`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,744::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`a2617d92-6145-4ba2-b40f-d793f037e031`::Disk vda latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda stats not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk hdc latency not
available
Thread-53::DEBUG::2013-03-05
10:03:40,745::libvirtvm::308::vm.Vm::(_getDiskLatency)
vmId=`c63f8d87-e6bf-49fd-9642-90aefd1aff84`::Disk vda latency not
available
GuestMonitor-xor-q-nis02::DEBUG::2013-03-05
10:03:40,750::libvirtvm::269::vm.Vm::(_getDiskStats)
vmId=`2c59dfa7-442c-46fb-8102-298db1ebc3bf`::Disk hdc stats not
available

[Users] When does oVirt auto-migrate, and what does HA do?

2013-03-05 Thread Rob Zwissler
In what scenarios does oVirt auto-migrate VMs?  I'm aware that it
currently migrates VMs when putting a host into maintenance, or when
manually selecting migration via the web interface, but when else will
VMs be migrated?  Is there any automatic compensation for resource
imbalances between hosts?  I could find no documentation on this
subject, if I missed it I apologize!

In a related question, exactly what does enabling HA (Highly
Available) mode do?  The only documentation I could find on this is at
http://www.ovirt.org/OVirt_3.0_Feature_Guide#High_availability but it
is a bit vague, and being from 3.0, possibly out of date.  Can someone
briefly describe the HA migration algorithm?

Thanks,

Rob
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] When does oVirt auto-migrate, and what does HA do?

2013-03-05 Thread Eli Mesika


- Original Message -
 From: Rob Zwissler r...@zwissler.org
 To: users@ovirt.org
 Sent: Tuesday, March 5, 2013 10:09:43 PM
 Subject: [Users] When does oVirt auto-migrate, and what does HA do?
 
 In what scenarios does oVirt auto-migrate VMs?  I'm aware that it
 currently migrates VMs when putting a host into maintenance, or when
 manually selecting migration via the web interface, but when else
 will
 hosts be migrated?  Is there any automatic compensation for resource
 imbalances between hosts?  I could find no documentation on this
 subject, if I missed it I apologize!
The following is taken from the upcoming 3.2 docs:

Automatic Virtual Machine Migration
Red Hat Enterprise Virtualization Manager automatically initiates live 
migration of all virtual machines running on a host when the host is moved into 
maintenance mode. The destination host for each virtual machine is assessed as 
the virtual machine is migrated, in order to spread the load across the cluster.
The Manager automatically initiates live migration of virtual machines in order 
to maintain load balancing or power saving levels in line with cluster policy. 
While no cluster policy is defined by default, it is recommended that you 
specify the cluster policy which best suits the needs of your environment. You 
can also disable automatic, or even manual, live migration of specific virtual 
machines where required. 


 
 In a related question, exactly what does enabling HA (Highly
 Available) mode do?  The only documentation I could find on this is
 at
 http://www.ovirt.org/OVirt_3.0_Feature_Guide#High_availability but it
 is a bit vague, and being from 3.0, possibly out of date.  Can
 someone
 briefly describe the HA migration algorithm?

 High availability is recommended for virtual machines running critical 
workloads.
High availability can ensure that virtual machines are restarted in the 
following scenarios:

When a host becomes non-operational due to hardware failure.
When a host is put into maintenance mode for scheduled downtime.
When a host becomes unavailable because it has lost communication with an 
external storage resource.
When a virtual machine fails due to an operating system crash. 

 High availability means that a virtual machine will be automatically restarted 
if its process is interrupted. This happens if the virtual machine is 
terminated by methods other than powering off from within the guest or sending 
the shutdown command from the Manager. When these events occur, the highly 
available virtual machine is automatically restarted, either on its original 
host or another host in the cluster.
High availability is possible because the Red Hat Enterprise Virtualization 
Manager constantly monitors the hosts and storage, and automatically detects 
hardware failure. If host failure is detected, any virtual machine configured 
to be highly available is automatically restarted on another host in the 
cluster. In addition, all virtual machines are monitored, so if the virtual 
machine's operating system crashes, a signal is sent to automatically restart 
the virtual machine.
With high availability, interruption to service is minimal because virtual 
machines are restarted within seconds with no user intervention required. High 
availability keeps your resources balanced by restarting guests on a host with 
low current resource utilization, or based on any workload balancing or power 
saving policies that you configure. This ensures that there is sufficient 
capacity to restart virtual machines at all times. 


 
 Thanks,
 
 Rob
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Configure spice plugin for wan

2013-03-05 Thread Gianluca Cecchi
I was also able to connect to the infra with the Windows 7 VM.
After restarting vdsmd I was able to see the IP and applications for the VM.
Still no WAN options in the user portal.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Configure spice plugin for wan

2013-03-05 Thread Gianluca Cecchi
On Wed, Mar 6, 2013 at 1:48 AM, Gianluca Cecchi wrote:
 I was also able to connect to the infra with the windows 7 VM.
 After restarting vdsmd I was indeed able to see the IP and applications for the
 VM in the web admin GUI.
 Still no WAN options in the user portal.

Logs for vdsm of this infra with Windows 7 VM here:
https://docs.google.com/file/d/0BwoPbcrMv8mvTGk1d0x5N2NQZ2c/edit?usp=sharing

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] When does oVirt auto-migrate, and what does HA do?

2013-03-05 Thread Ofri Masad


- Original Message -
 From: Eli Mesika emes...@redhat.com
 To: Rob Zwissler r...@zwissler.org
 Cc: users@ovirt.org
 Sent: Wednesday, March 6, 2013 12:00:47 AM
 Subject: Re: [Users] When does oVirt auto-migrate, and what does HA do?
 
 
 
 - Original Message -
  From: Rob Zwissler r...@zwissler.org
  To: users@ovirt.org
  Sent: Tuesday, March 5, 2013 10:09:43 PM
  Subject: [Users] When does oVirt auto-migrate, and what does HA do?
  
  In what scenarios does oVirt auto-migrate VMs?  I'm aware that it
  currently migrates VMs when putting a host into maintenance, or when
  manually selecting migration via the web interface, but when else
  will
  hosts be migrated?  Is there any automatic compensation for
  resource
  imbalances between hosts?  I could find no documentation on this
  subject, if I missed it I apologize!
 The following is taken from the upcoming 3.2 docs:
 
 Automatic Virtual Machine Migration
 Red Hat Enterprise Virtualization Manager automatically initiates
 live migration of all virtual machines running on a host when the
 host is moved into maintenance mode. The destination host for each
 virtual machine is assessed as the virtual machine is migrated, in
 order to spread the load across the cluster.
 The Manager automatically initiates live migration of virtual
 machines in order to maintain load balancing or power saving levels
 in line with cluster policy. While no cluster policy is defined by
 default, it is recommended that you specify the cluster policy which
 best suits the needs of your environment. You can also disable
 automatic, or even manual, live migration of specific virtual
 machines where required.
 

To set the cluster's automatic migration policy (load balancing / power saving),
open the administrator portal and go to the 'Cluster' tab.
In the 'General' sub tab click the 'Edit Policy' button (or click 'edit' on the
selected cluster).

To prevent a specific VM from being automatically migrated, go
to the 'Virtual Machines' tab and click 'edit' on the selected machine.
In the pop-up window go to the 'Host' tab and set the migration policy for this
specific machine.
  

 
  
  In a related question, exactly what does enabling HA (Highly
  Available) mode do?  The only documentation I could find on this is
  at
  http://www.ovirt.org/OVirt_3.0_Feature_Guide#High_availability but
  it
  is a bit vague, and being from 3.0, possibly out of date.  Can
  someone
  briefly describe the HA migration algorithm?
 
  High availability is recommended for virtual machines running
  critical workloads.
 High availability can ensure that virtual machines are restarted in
 the following scenarios:
 
 When a host becomes non-operational due to hardware failure.
 When a host is put into maintenance mode for scheduled downtime.
 When a host becomes unavailable because it has lost communication
 with an external storage resource.
 When a virtual machine fails due to an operating system crash.
 
  High availability means that a virtual machine will be automatically
  restarted if its process is interrupted. This happens if the
  virtual machine is terminated by methods other than powering off
  from within the guest or sending the shutdown command from the
  Manager. When these events occur, the highly available virtual
  machine is automatically restarted, either on its original host or
  another host in the cluster.
 High availability is possible because the Red Hat Enterprise
 Virtualization Manager constantly monitors the hosts and storage,
 and automatically detects hardware failure. If host failure is
 detected, any virtual machine configured to be highly available is
 automatically restarted on another host in the cluster. In addition,
 all virtual machines are monitored, so if the virtual machine's
 operating system crashes, a signal is sent to automatically restart
 the virtual machine.
 With high availability, interruption to service is minimal because
 virtual machines are restarted within seconds with no user
 intervention required. High availability keeps your resources
 balanced by restarting guests on a host with low current resource
 utilization, or based on any workload balancing or power saving
 policies that you configure. This ensures that there is sufficient
 capacity to restart virtual machines at all times.
 
 
  
  Thanks,
  
  Rob
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt + Spice + VDI

2013-03-05 Thread Shu Ming

Mohsen Saeedi:

Hi
I want to know: are we forced to install one Windows instance per user? Can
SPICE provide multiple remote connections to a single Windows XP machine?
I want to install one Windows XP as a virtual desktop and then share it
with more than one user. Is that possible now, or will it be in the future?

I think you are talking about multiple sessions on one Windows desktop
server. I believe you must use the RDP protocol to access those
sessions. SPICE is for one Windows desktop per user.




Thanks


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
---
Shu Ming
Open Virtualization Engineering; CSTL, IBM Corp.
Tel: 86-10-82451626  Tieline: 9051626 E-mail: shum...@cn.ibm.com or 
shum...@linux.vnet.ibm.com
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, 
Beijing 100193, PRC

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.2 on CentOS with Gluster 3.3

2013-03-05 Thread Dan Kenigsberg
On Tue, Mar 05, 2013 at 10:08:48AM -0800, Rob Zwissler wrote:
 On Mon, Mar 4, 2013 at 11:46 PM, Dan Kenigsberg dan...@redhat.com wrote:
  Rob,
 
  It seems that a bug in vdsm code is hiding the real issue.
  Could you do a
 
  sed -i 's/ParseError/ElementTree.ParseError/' /usr/share/vdsm/gluster/cli.py
 
  restart vdsmd, and retry?
 
  Bala, would you send a patch fixing the ParseError issue (and adding a
  unit test that would have caught it on time)?

 Traceback (most recent call last):
   File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
     rv = func(*args, **kwargs)
   File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
     return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
   File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
     return callMethod()
   File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
     **kwargs)
   File "<string>", line 2, in glusterVolumeInfo
   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
     raise convert_to_error(kind, result)
 AttributeError: class ElementTree has no attribute 'ParseError'

My guess has led us nowhere, since etree.ParseError is simply missing
from Python 2.6; it appears only in Python 2.7!
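
A quick way to confirm that on the host (a sketch using only the standard library):

python -c 'from xml.etree import ElementTree as ET; print(hasattr(ET, "ParseError"))'
# prints False on Python 2.6, where a parse failure surfaces as
# xml.parsers.expat.ExpatError instead; True on Python 2.7 and later.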

That's sad, but something *else* is problematic, since we got to this
error-handling code.

Could you make another try and temporarily replace ParseError with
Exception?

sed -i s/etree.ParseError/Exception/ /usr/share/vdsm/gluster/cli.py

(this sed is relative to the original code).

Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users