[Users] Migration of engine

2013-06-13 Thread Tom Brown
Hi

We use 3.1 on CentOS 6 and need to migrate the engine due to hardware 
issues. Looking at

http://wiki.ovirt.org/Backup_engine_db

Are there any other files apart from the db that might be needed, or will all 
be well once the db is migrated into a freshly installed engine with the same 
IP and hostname as the original?

many thanks

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] AD authentication for ovirt manager

2013-04-23 Thread Tom Brown



 Hello Jonathan,
 
 I believe you can use the Red Hat Documentation for this.
 
 https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html/Evaluation_Guide/Evaluation_Guide-VDI.html#Evaluation_Guide-Add_Active_Directory
 
 One of the gotchas that I ran into is that you need to specify the Active 
 Directory as your DNS provider in your resolv.conf file (not sure if it was 
 coincidence or not; but I ran into some issues that went away when I did this)

Has anyone had success doing this with 389?

cheers



Re: [Users] rebooting physical network that ovirt is attached to

2013-03-24 Thread Tom Brown
 
 
 I believe this also answers my other thread, that it is ok to reboot the
 management server at any time? (I have another thread about how sluggish
 mine is running lately, and I'd like to get it rebooted soon before I go
 on vacation.)
 

don't reboot the whole server - just restart JBoss

cheers


Re: [Users] DL380 G5 - Fails to Activate

2013-01-25 Thread Tom Brown

 I have a couple of old DL380 G5's and i am putting them into their own 
 cluster for testing various things out.
 The install of 3.1 from dreyou goes fine onto them but when they try to 
 activate i get the following
 
 Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the 
 cluster's minimum CPU level. Missing CPU features : model_Conroe, nx
 
 KVM appears to run just fine on these hosts and their CPUs are
 
 Intel(R) Xeon(R) CPU5140  @ 2.33GHz
 
 Is it possible to add these in to a 3.1 cluster ??
 
 and now i have managed to find a similar post
 
 # vdsClient -s 0 getVdsCaps | grep -i flags
   cpuFlags = 
 fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow
 
 # virsh -r capabilities
 <capabilities>
 
   <host>
     <uuid>134bd567-da9f-43f9-8a2b-c259ed34f938</uuid>
     <cpu>
       <arch>x86_64</arch>
       <model>kvm32</model>
       <vendor>Intel</vendor>
       <topology sockets='1' cores='2' threads='1'/>
       <feature name='lahf_lm'/>
       <feature name='lm'/>
       <feature name='syscall'/>
       <feature name='dca'/>
       <feature name='pdcm'/>
       <feature name='xtpr'/>
       <feature name='cx16'/>
       <feature name='ssse3'/>
       <feature name='tm2'/>
       <feature name='est'/>
       <feature name='vmx'/>
       <feature name='ds_cpl'/>
       <feature name='monitor'/>
       <feature name='dtes64'/>
       <feature name='pbe'/>
       <feature name='tm'/>
       <feature name='ht'/>
       <feature name='ss'/>
       <feature name='acpi'/>
       <feature name='ds'/>
       <feature name='vme'/>
     </cpu>
     <power_management>
       <suspend_disk/>
     </power_management>
     <migration_features>
       <live/>
       <uri_transports>
         <uri_transport>tcp</uri_transport>
       </uri_transports>
     </migration_features>
     <topology>
       <cells num='1'>
         <cell id='0'>
           <cpus num='2'>
             <cpu id='0'/>
             <cpu id='1'/>
           </cpus>
         </cell>
       </cells>
     </topology>
   </host>
 
   <guest>
     <os_type>hvm</os_type>
     <arch name='i686'>
       <wordsize>32</wordsize>
       <emulator>/usr/libexec/qemu-kvm</emulator>
       <machine>rhel6.3.0</machine>
       <machine canonical='rhel6.3.0'>pc</machine>
       <machine>rhel6.2.0</machine>
       <machine>rhel6.1.0</machine>
       <machine>rhel6.0.0</machine>
       <machine>rhel5.5.0</machine>
       <machine>rhel5.4.4</machine>
       <machine>rhel5.4.0</machine>
       <domain type='qemu'>
       </domain>
       <domain type='kvm'>
         <emulator>/usr/libexec/qemu-kvm</emulator>
       </domain>
     </arch>
     <features>
       <cpuselection/>
       <deviceboot/>
       <pae/>
       <nonpae/>
       <acpi default='on' toggle='yes'/>
       <apic default='on' toggle='no'/>
     </features>
   </guest>
 
   <guest>
     <os_type>hvm</os_type>
     <arch name='x86_64'>
       <wordsize>64</wordsize>
       <emulator>/usr/libexec/qemu-kvm</emulator>
       <machine>rhel6.3.0</machine>
       <machine canonical='rhel6.3.0'>pc</machine>
       <machine>rhel6.2.0</machine>
       <machine>rhel6.1.0</machine>
       <machine>rhel6.0.0</machine>
       <machine>rhel5.5.0</machine>
       <machine>rhel5.4.4</machine>
       <machine>rhel5.4.0</machine>
       <domain type='qemu'>
       </domain>
       <domain type='kvm'>
         <emulator>/usr/libexec/qemu-kvm</emulator>
       </domain>
     </arch>
     <features>
       <cpuselection/>
       <deviceboot/>
       <acpi default='on' toggle='yes'/>
       <apic default='on' toggle='no'/>
     </features>
   </guest>
 
 </capabilities>

Hi - any clues here or am i out of luck with these hosts?

thanks




Re: [Users] DL380 G5 - Fails to Activate

2013-01-25 Thread Tom Brown
 
 
 I have a couple of old DL380 G5's and i am putting them into their own
 cluster for testing various things out.
 The install of 3.1 from dreyou goes fine onto them but when they try to
 activate i get the following
 
 Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the
 cluster's minimum CPU level. Missing CPU features : model_Conroe, nx
 
 KVM appears to run just fine on these host and their cpu's are
 
 Intel(R) Xeon(R) CPU5140  @ 2.33GHz
 
 Is it possible to add these in to a 3.1 cluster ??
 
 
 [quoted vdsClient / virsh capabilities output snipped - identical to the output in the thread above]
 
 
 Hi,
 how about the kvm-ok tool result? Is it responding:
 
 INFO: /dev/kvm exists
 KVM acceleration can be used

for posterity - 

http://www.linkedin.com/groups/Rhev-3-Beta-Proliant-DL380-2536011.S.93285316

This solved it for me - 

cheers



[Users] DL380 G5 - Fails to Activate

2013-01-24 Thread Tom Brown
Hi

I have a couple of old DL380 G5's and i am putting them into their own cluster 
for testing various things out.
The install of 3.1 from dreyou goes fine onto them but when they try to 
activate i get the following

Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the 
cluster's minimum CPU level. Missing CPU features : model_Conroe, nx

KVM appears to run just fine on these hosts and their CPUs are

Intel(R) Xeon(R) CPU5140  @ 2.33GHz

Is it possible to add these in to a 3.1 cluster ??

thanks
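As an aside (a sketch, not oVirt's actual code): the engine's complaint is essentially a set difference between the cluster's required CPU features and the host's reported cpuFlags. Using the flags from the vdsClient output quoted in the replies above:

```python
# Reproduce the "Missing CPU features" check as a set difference.
# Host flags copied from the vdsClient getVdsCaps output; the
# required set matches the features named in the engine's error.
host_flags = set(
    "fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,"
    "pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,"
    "constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,"
    "monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,"
    "tpr_shadow".split(",")
)
required = {"model_Conroe", "nx"}  # the cluster's minimum CPU level
missing = sorted(required - host_flags)
print(missing)  # ['model_Conroe', 'nx'] - exactly what the engine reports
```

Since neither flag appears in the host's list, the host drops to Non-Operational regardless of whether plain KVM runs.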


Re: [Users] DL380 G5 - Fails to Activate

2013-01-24 Thread Tom Brown

 
 
 
 I have a couple of old DL380 G5's and i am putting them into their own 
 cluster for testing various things out.
 The install of 3.1 from dreyou goes fine onto them but when they try to 
 activate i get the following
 
 Host xxx.xxx.net.uk moved to Non-Operational state as host does not meet the 
 cluster's minimum CPU level. Missing CPU features : model_Conroe, nx
 
 KVM appears to run just fine on these host and their cpu's are
 
 Intel(R) Xeon(R) CPU5140  @ 2.33GHz
 
 Is it possible to add these in to a 3.1 cluster ??

and now i have managed to find a similar post

# vdsClient -s 0 getVdsCaps | grep -i flags
cpuFlags = 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow

# virsh -r capabilities
[capabilities output snipped - identical to the output quoted in the replies above]




Re: [Users] KVM version not showing in Ovirt Manager

2013-01-23 Thread Tom Brown



 Tom,
 you should have qemu-kvm on Fedora, CentOS and such
 qemu-kvm-rhev on RHEL hosts are supposed to work with downstream RHEV 
 product. How did you get them there?
 You can modify that if you want in vdsm/caps.py in _getKeyPackages() function

http://www.dreyou.org/ovirt/vdsm/Packages/
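To illustrate the caps.py behaviour mentioned above: vdsm reports versions only for a fixed list of package names, so a host whose hypervisor RPM is named qemu-kvm-rhev has no entry under the key the manager displays. A sketch (the key names here are assumptions for illustration; the real list lives in vdsm/caps.py `_getKeyPackages()`):

```python
# Why node003 shows no KVM version: the caps lookup is by exact
# package name. Key names below are illustrative assumptions.
def key_packages(installed):
    keys = ("qemu-kvm", "qemu-img", "libvirt", "vdsm")
    return {k: installed[k] for k in keys if k in installed}

node002 = {"qemu-kvm": "0.12.1.2-2.295.el6_3.8"}      # stock CentOS rpm
node003 = {"qemu-kvm-rhev": "0.12.1.2-2.295.el6.10"}  # dreyou -rhev rpm

print(key_packages(node002))  # {'qemu-kvm': '0.12.1.2-2.295.el6_3.8'}
print(key_packages(node003))  # {} -> blank KVM version in the manager
```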



Re: [Users] KVM version not showing in Ovirt Manager

2013-01-23 Thread Tom Brown
 
 so…I guess - either do not build it with -rhev (regular upstream/fedora 
 packages are without the suffix) or build vdsm with the corresponding 
 modification in caps.py
 
 

yes - i am not sure why those packages are in dreyou's repo, they weren't there 
a couple of weeks ago when the last HV was built as that one just pulled the 
stock CentOS ones.

I may just leave it and wait until 3.2 comes along with official RHEL/CentOS 
packages

thanks



[Users] KVM version not showing in Ovirt Manager

2013-01-22 Thread Tom Brown
Hi

I have just added another HV to a cluster and it's up and running fine. I can 
run VM's on it and migrate from other HV's onto it. I do note, however, that in 
the manager there is no KVM version listed as installed, whereas on other HV's 
in the cluster a version is present.

I see that the KVM version is slightly different on this new host but, as I said, 
apart from this visual issue everything appears to be running fine. These HV's 
are CentOS 6.3 using dreyou 3.1

Node where KVM version not showing in the manager

node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64

Node where KVM version is showing in the manager

node002 ~]# rpm -qa | grep kvm
qemu-kvm-tools-0.12.1.2-2.295.el6_3.8.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.8.x86_64

thanks




Re: [Users] KVM version not showing in Ovirt Manager

2013-01-22 Thread Tom Brown



 please issue the following commands on both hv's:
 
 vdsClinet -s 0 getVdsCaps && vdsClient -s 0 getVdsStats
 
 I would like to make sure vdsm is indeed reporting them to the engine.
 

i appear to not have that command in my PATH on either node - 

Which package provides it?

node002 ~]# yum whatprovides */vdsClinet
Loaded plugins: fastestmirror, priorities, rhnplugin, security
This system is receiving updates from RHN Classic or RHN Satellite.
Loading mirror speeds from cached hostfile
 * epel: mirrors.coreix.net
centos-6-x86_64-01012013/filelists  

   |  11 MB 00:00 
epel/filelists_db   

   | 6.9 MB 00:01 
spacewalk-client/filelists  

   | 5.7 kB 00:00 
vdsm/filelists  

   | 9.8 kB 00:00 
No Matches found


Re: [Users] KVM version not showing in Ovirt Manager

2013-01-22 Thread Tom Brown



 I think that it's a typo and the right command is: vdsClient.
 The second command doesn't have the typo :).
 

Working node

node002 ~]# vdsClient -s 0 getVdsCaps && vdsClient -s 0 getVdsStats > /tmp/node002.log
HBAInventory = {'iSCSI': [{'InitiatorName': 
'iqn.1994-05.com.redhat:31b77320b5e6'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:31b77320b5e6
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': 
'', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': 
{}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 
'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 
'off', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 4
cpuFlags = 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn
cpuModel = Intel(R) Xeon(R) CPU   W3520  @ 2.67GHz
cpuSockets = 1
cpuSpeed = 2666.908
emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 
'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 10.192.42.207
lastClientIface = ovirtmgmt
management_ip = 
memSize = 7853
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': 
'10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 
'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': 
'1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': 
'10.192.42.1', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}}
nics = {'eth0': {'addr': '', 'cfg': {'DEVICE': 'eth0', 'BRIDGE': 
'ovirtmgmt', 'BOOTPROTO': 'dhcp', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'mtu': 
'1500', 'netmask': '', 'hwaddr': 'd4:85:64:09:34:08', 'speed': 1000}}
operatingSystem = {'release': '3.el6.centos.9', 'version': '6', 'name': 
'RHEL'}
packages2 = {'kernel': {'release': '279.14.1.el6.x86_64', 'buildtime': 
1352245389.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 
'buildtime': 1340375889, 'version': '0.10.1'}, 'vdsm': {'release': 
'0.77.20.el6', 'buildtime': 1351246440, 'version': '4.10.1'}, 'qemu-kvm': 
{'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}, 
'libvirt': {'release': '21.el6_3.6', 'buildtime': 1353577785, 'version': 
'0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 
'version': '0.12.1.2'}}
reservedMem = 321
software_revision = 0.77
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']

Non Working node

node003 ~]# vdsClient -s 0 getVdsCaps && vdsClient -s 0 getVdsStats
HBAInventory = {'iSCSI': [{'InitiatorName': 
'iqn.1994-05.com.redhat:9a7b944e2160'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:9a7b944e2160
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': 
'', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': 
{}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 
'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 
'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '10.192.42.144', 'cfg': {'DELAY': '0', 
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 
'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 
'off', 'ports': ['vnet1', 'eth0', 'vnet0']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 4
cpuFlags = 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn
cpuModel = Intel(R) Xeon(R) CPU   W3520  @ 2.67GHz
cpuSockets = 1
cpuSpeed = 2666.752
emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 
'rhel6.0.0', 'rhel5.5.0', 

[Users] Adding additional network(s)

2013-01-22 Thread Tom Brown
Hi

I am setting up another DC, that is managed by my sole management
node, and this DC will have a requirement that the VM's will need an
additional storage NIC. This NIC is for NFS/CIFS traffic and is
independent of the oVirt VM's disks.

I have cabled the additional physical NIC in the HV's, as this network
is non-routed storage, and I am unsure where to go from here. What's the next
step needed to add the NIC to the DC? I presume adding the NIC
to the VM's is then straightforward.

thanks


Re: [Users] egine-iso-uploader error

2013-01-17 Thread Tom Brown

On 17 Jan 2013, at 11:31, Juan Jose wrote:

 Hello everybody,
 
 I have been able to solve my NFS problems and now I have configured ISO 
 resource and a data resource in datacenter but when I try to execute the 
 command engine-iso-uploader I can see below error:
 
 [root@ovirt-engine ~]# engine-iso-uploader -v list
 Please provide the REST API password for the admin@internal oVirt Engine user 
 (CTRL+D to abort): 
 ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL connection.
 INFO: Use the -h option to see usage.
 DEBUG: Configuration:
 DEBUG: command: list
 DEBUG: Traceback (most recent call last):
 DEBUG:   File "/bin/engine-iso-uploader", line 931, in <module>
 DEBUG: isoup = ISOUploader(conf)
 DEBUG:   File "/bin/engine-iso-uploader", line 331, in __init__
 DEBUG: self.list_all_ISO_storage_domains()
 DEBUG:   File "/bin/engine-iso-uploader", line 381, in list_all_ISO_storage_domains
 DEBUG: if not self._initialize_api():
 DEBUG:   File "/bin/engine-iso-uploader", line 358, in _initialize_api
 DEBUG: password=self.configuration.get("passwd"))
 DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/api.py", line 78, in __init__
 DEBUG: debug=debug
 DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 47, in __init__
 DEBUG: debug=debug))
 DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 38, in __init__
 DEBUG: timeout=timeout)
 DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 102, in __createConnection
 DEBUG: raise NoCertificatesError
 DEBUG: NoCertificatesError: [ERROR]::ca_file (CA certificate) must be 
 specified for SSL connection.
 
 I have one Fedora 17 oVirt engine 3.1 installed with a Fedora 17 host. 
 Can someone show me what the problem is and how to solve it, please?
 
 Many thanks in advance,
 
 Juanjo.

LMGTFY

http://www.mail-archive.com/users@ovirt.org/msg05399.html



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] API usage - 3.1

2013-01-15 Thread Tom Brown
 
 
 thankyou - i am however finding it hard to attach an iso image to a VM - I 
 have the console etc sorted however what would be required to do the same 
 with an iso image?
 
 The below is working apart from the iso
 
 thanks
 
 vm_template = <vm>
   <name>%s</name>
   <cluster>
     <name>Default</name>
   </cluster>
   <template>
     <name>Blank</name>
   </template>
   <type>server</type>
   <memory>536870912</memory>
   <display>
     <type>vnc</type>
   </display>
   <os>
     <boot dev='hd'/>
     <boot dev='cdrom'/>
   </os>
   <cdrom>
     <cdrom id='cobbler-base.iso'/>
   </cdrom>
 </vm>
 

i have also tried 

<cdrom>
  <file id='rhel-server-6.0-x86_64-dvd.iso'/>
</cdrom>

as per the docs but no joy.
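If it helps debugging, the cdrom stanza can at least be checked for well-formedness before it is sent; a small sketch (the file id is the one from the docs example above, and no API call is made):

```python
# Parse the stanza to confirm it is well-formed and the id survives.
import xml.etree.ElementTree as ET

cdrom = "<cdrom><file id='rhel-server-6.0-x86_64-dvd.iso'/></cdrom>"
el = ET.fromstring(cdrom)
print(el.find("file").get("id"))  # rhel-server-6.0-x86_64-dvd.iso
```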


Re: [Users] Adding Iscsi storage to Ovirt All-in-one config

2013-01-14 Thread Tom Brown

 true. until we support mixed storage.

is that an item on the roadmap?



Re: [Users] Adding Iscsi storage to Ovirt All-in-one config

2013-01-14 Thread Tom Brown
 
 
 true. until we support mixed storage.
 
 is that an item on the roadmap?
 
 
 yes. usually dubbed SDM[1], but has several milestones needed before we get 
 there.
 [1] granularity of storage domain, rather than SPM - pool level granularity.
 

Great to hear - any clues on a rough, non-committed date?


[Users] API usage - 3.1

2013-01-11 Thread Tom Brown
Trying to get going adding VM's via the API, and so far I have managed to get 
quite far. I am, however, facing this:

vm_template = <vm>
  <name>%s</name>
  <cluster>
    <name>Default</name>
  </cluster>
  <template>
    <name>Blank</name>
  </template>
  <vm_type>server</vm_type>
  <memory>536870912</memory>
  <os>
    <boot dev='hd'/>
  </os>
</vm>

The VM is created but the type ends up being a desktop and not a server - 

What did i do wrong?
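One hedged observation, not a confirmed fix: the working template in the reply thread above uses a type element rather than vm_type, so it may be worth comparing the two. A sketch that fills the template and inspects what was actually set:

```python
# Fill the VM template and inspect the element that carries "server".
# The template here mirrors the working one quoted earlier in the
# archive (with <type> rather than <vm_type>).
import xml.etree.ElementTree as ET

vm_template = (
    "<vm><name>%s</name>"
    "<cluster><name>Default</name></cluster>"
    "<template><name>Blank</name></template>"
    "<type>server</type>"
    "<memory>536870912</memory>"
    "<os><boot dev='hd'/></os></vm>"
)
body = ET.fromstring(vm_template % "test-vm")
print(body.find("name").text)  # test-vm
print(body.find("type").text)  # server
```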

thanks


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Tom Brown
-7c7f48b60adf`::Failed to migrate
 Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 245, in run
    self._startUnderlyingMigration()
  File "/usr/share/vdsm/libvirtvm.py", line 474, in _startUnderlyingMigration
    None, maxBandwidth)
  File "/usr/share/vdsm/libvirtvm.py", line 510, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1103, in migrateToURI2
    if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
 libvirtError: internal error Process exited while reading console log output: 
 
 any chance you attach libvirtd.log and qemu log (/var/log/libvirt/qemu/{}.log)?
 
 Danken - any insights?
 
 - Original Message -
 From: Tom Brown t...@ng23.net
 To: Roy Golan rgo...@redhat.com
 Cc: Haim Ateya hat...@redhat.com, users@ovirt.org
 Sent: Tuesday, January 8, 2013 11:50:26 AM
 Subject: Re: [Users] oVirt 3.1 - VM Migration Issue
 
 
 can you attach the same snip from the src VDSM 10.192.42.196 as
 well?
 
 The log is pretty chatty, therefore I did another migration attempt
 and snipped the new log from both sides.
 
 see attached
 
 


Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-09 Thread Tom Brown


 libvirtError: internal error Process exited while reading console log outpu
 could this be related to selinux? can you try disabling it and see if 
 migration succeeds?

It was indeed the case! My src node was set to disabled and my destination node 
was enforcing; this was due to the destination being the first HV built and 
therefore provisioned slightly differently - my kickstart server is a VM in the 
pool.

It's interesting that a VM can be provisioned onto a node that is set to 
enforcing and yet cannot be migrated to it.

thanks



Re: [Users] What do you want to see in oVirt next?

2013-01-04 Thread Tom Brown

 - Original Message -
 From: Tom Brown t...@ng23.net
 To: users@oVirt.org users@ovirt.org
 Sent: Thursday, January 3, 2013 7:47:00 PM
 Subject: Re: [Users] What do you want to see in oVirt next?
 
 I'd like to see a spice client for Mac - i know of various ports but
 as of now on 10.8 i have to use VNC which is not ideal. I know spice
 != ovirt however they are very linked at this time.
 
 What do you mean? a client or a browser plugin?
 

i suppose ideally both - a browser plugin would be great as then interaction 
between the GUI and the console would be seamless with the ability to use a 
standalone client if it were required.

thanks



[Users] ISO import issue

2013-01-04 Thread Tom Brown
Hi

I have in the past imported iso's without issue however since returning from 
the holiday break i tried today and got this

[root@ovirt-manager ~]# engine-iso-uploader -v -i iso upload openfiler.iso 
Please provide the REST API password for the admin@internal oVirt Engine user 
(CTRL+D to abort): 
ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL connection.
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: upload
DEBUG: Traceback (most recent call last):
DEBUG:   File "/usr/bin/engine-iso-uploader", line 931, in <module>
DEBUG: isoup = ISOUploader(conf)
DEBUG:   File "/usr/bin/engine-iso-uploader", line 333, in __init__
DEBUG: self.upload_to_storage_domain()
DEBUG:   File "/usr/bin/engine-iso-uploader", line 677, in upload_to_storage_domain
DEBUG: (id, address, path) = self.get_host_and_path_from_ISO_domain(self.configuration.get('iso_domain'))
DEBUG:   File "/usr/bin/engine-iso-uploader", line 424, in get_host_and_path_from_ISO_domain
DEBUG: if not self._initialize_api():
DEBUG:   File "/usr/bin/engine-iso-uploader", line 358, in _initialize_api
DEBUG: password=self.configuration.get("passwd"))
DEBUG:   File "/usr/lib/python2.6/site-packages/ovirtsdk/api.py", line 78, in __init__
DEBUG: debug=debug
DEBUG:   File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 47, in __init__
DEBUG: debug=debug))
DEBUG:   File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line 38, in __init__
DEBUG: timeout=timeout)
DEBUG:   File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line 102, in __createConnection
DEBUG: raise NoCertificatesError
DEBUG: NoCertificatesError: [ERROR]::ca_file (CA certificate) must be specified 
for SSL connection.

Nothing has changed from my side, so I wonder if there are any ideas as to why 
this has just started happening?

thanks

[root@ovirt-manager ovirt-engine]# rpm -qa | grep ovirt
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.2.0.2-1.el6.noarch
ovirt-release-el6-4-2.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-cli-3.2.0.5-1.el6.noarch



Re: [Users] ISO import issue

2013-01-04 Thread Tom Brown

 
 
 
 [quoted message snipped - identical to the original post above]

solved - it was an ovirt-engine-sdk version mismatch; it seems installing 
ovirt-engine-cli updated that package and I hadn't noticed
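The skew is visible in the package list quoted above: the engine packages are 
3.1.0 while ovirt-engine-sdk and ovirt-engine-cli are 3.2.0. A quick way to 
spot that kind of mismatch, sketched here against the quoted package names (on 
a live engine you would pipe `rpm -qa` into the grep instead of the printf):

```shell
# Filter the core engine/SDK/CLI packages so a version skew stands out.
# Input below is the package list quoted above; on a real engine,
# replace the printf with `rpm -qa`.
OUT=$(printf '%s\n' \
    'ovirt-engine-3.1.0-3.19.el6.noarch' \
    'ovirt-engine-sdk-3.2.0.2-1.el6.noarch' \
    'ovirt-engine-cli-3.2.0.5-1.el6.noarch' |
  grep -E '^ovirt-engine(-sdk|-cli)?-[0-9]' | sort)
printf '%s\n' "$OUT"
```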

thanks



[Users] oVirt 3.1 - VM Migration Issue

2013-01-03 Thread Tom Brown

Hi

I seem to have an issue with a single VM and migration, other VM's can migrate 
OK - When migrating from the GUI it appears to just hang but in the engine.log 
i see the following

2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector] 
(ajp--0.0.0.0-8009-59) Checking for a specific VDS only - 
id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89, name:ovirt-node.domain-name, 
host_name(ip):10.192.42.165
2013-01-03 14:03:10,411 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (pool-3-thread-48) 
[4d32917d] Running command: MigrateVmToServerCommand internal: false. Entities 
affected :  ID: 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector] 
(pool-3-thread-48) [4d32917d] Checking for a specific VDS only - 
id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89, name:ovirt-node.domain-name, 
host_name(ip):10.192.42.165
2013-01-03 14:03:11,028 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-48) 
[4d32917d] START, MigrateVDSCommand(vdsId = 
1a52b722-43a1-11e2-af96-3cd92b4c8e89, 
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196, 
dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89, dstHost=10.192.42.165:54321, 
migrationMethod=ONLINE), log id: 5011789b
2013-01-03 14:03:11,030 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered 
(vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196, 
dstHost=10.192.42.165:54321,  method=online
2013-01-03 14:03:11,031 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId = 
1a52b722-43a1-11e2-af96-3cd92b4c8e89, 
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196, 
dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89, dstHost=10.192.42.165:54321, 
migrationMethod=ONLINE), log id: 7cd53864
2013-01-03 14:03:11,041 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log id: 7cd53864
2013-01-03 14:03:11,086 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (pool-3-thread-48) 
[4d32917d] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 5011789b
2013-01-03 14:03:11,606 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-29) vds::refreshVmList vm id 
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds ovirt-node.domain-name 
ignoring it in the refresh till migration is done
2013-01-03 14:03:12,836 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-36) VM test002.domain-name 
9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom -- Up
2013-01-03 14:03:12,837 INFO  
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-36) adding VM 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to 
re-run list
2013-01-03 14:03:12,852 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
(QuartzScheduler_Worker-36) Rerun vm 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. 
Called from vds ovirt-node002.domain-name
2013-01-03 14:03:12,855 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId = 
1a52b722-43a1-11e2-af96-3cd92b4c8e89, 
vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
2013-01-03 14:03:12,864 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-48) Failed in MigrateStatusVDS method
2013-01-03 14:03:12,865 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-48) Error code migrateErr and error message VDSGenericException: 
VDSErrorException: Failed to MigrateStatusVDS, error = Fatal error during 
migration
2013-01-03 14:03:12,865 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-48) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand return value 
 Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus   Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 12
mMessage  Fatal error during migration


2013-01-03 14:03:12,866 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
2013-01-03 14:03:12,867 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] 
(pool-3-thread-48) Command MigrateStatusVDS execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
MigrateStatusVDS, error = Fatal error during migration
2013-01-03 14:03:12,867 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] 
(pool-3-thread-48) FINISH, MigrateStatusVDSCommand, log id: 4721a1f3

Does anyone have any idea what this might be? I am using 3.1 from dreyou, as 
these are CentOS 6 hosts.

Re: [Users] oVirt 3.1 - VM Migration Issue

2013-01-03 Thread Tom Brown


 interesting, please search for the migrationCreate command on the destination 
 host and search for ERROR afterwards, what do you see?
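The suggested search, sketched against a few hypothetical vdsm log lines (on 
the real destination host you would grep /var/log/vdsm/vdsm.log directly):

```shell
# Hypothetical sample standing in for /var/log/vdsm/vdsm.log on the
# destination host; the two grep lines below are the actual check.
LOG=$(mktemp)
printf '%s\n' \
  'Thread-13::INFO::vmCreate' \
  'Thread-14::INFO::migrationCreate' \
  'Thread-15::ERROR::Fatal error during migration' > "$LOG"
grep -n 'migrationCreate' "$LOG"   # locate the incoming migration
grep -n 'ERROR' "$LOG"             # then look for errors after it
```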
 
 - Original Message -
 From: Tom Brown t...@ng23.net
 To: users@ovirt.org
 Sent: Thursday, January 3, 2013 4:12:05 PM
 Subject: [Users] oVirt 3.1 - VM Migration Issue
 
 
 Hi
 
 I seem to have an issue with a single VM and migration, other VM's
 can migrate OK - When migrating from the GUI it appears to just hang
 but in the engine.log i see the following
 
 2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector]
 (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
 id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
 name:ovirt-node.domain-name, host_name(ip):10.192.42.165
 2013-01-03 14:03:10,411 INFO
 [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
 (pool-3-thread-48) [4d32917d] Running command:
 MigrateVmToServerCommand internal: false. Entities affected :  ID:
 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
 2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector]
 (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
 id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
 name:ovirt-node.domain-name, host_name(ip):10.192.42.165
 2013-01-03 14:03:11,028 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
 vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
 dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
 dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
 5011789b
 2013-01-03 14:03:11,030 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
 (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
 srcHost=10.192.42.196, dstHost=10.192.42.165:54321,  method=online
 2013-01-03 14:03:11,031 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
 vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
 dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
 dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
 7cd53864
 2013-01-03 14:03:11,041 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
 (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
 id: 7cd53864
 2013-01-03 14:03:11,086 INFO
 [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
 (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
 MigratingFrom, log id: 5011789b
 2013-01-03 14:03:11,606 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (QuartzScheduler_Worker-29) vds::refreshVmList vm id
 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
 ovirt-node.domain-name ignoring it in the refresh till migration is
 done
 2013-01-03 14:03:12,836 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (QuartzScheduler_Worker-36) VM test002.domain-name
 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom -- Up
 2013-01-03 14:03:12,837 INFO
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (QuartzScheduler_Worker-36) adding VM
 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
 2013-01-03 14:03:12,852 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
 (QuartzScheduler_Worker-36) Rerun vm
 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
 ovirt-node002.domain-name
 2013-01-03 14:03:12,855 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
 (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
 vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
 2013-01-03 14:03:12,864 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (pool-3-thread-48) Failed in MigrateStatusVDS method
 2013-01-03 14:03:12,865 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (pool-3-thread-48) Error code migrateErr and error message
 VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
 error = Fatal error during migration
 2013-01-03 14:03:12,865 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (pool-3-thread-48) Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
 return value
 Class Name:
 org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
 mStatus   Class Name:
 org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
 mCode 12
 mMessage  Fatal error during migration
 
 
 2013-01-03 14:03:12,866 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
 2013-01-03 14:03:12,867 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
 Command MigrateStatusVDS execution failed. Exception:
 VDSErrorException: VDSGenericException: VDSErrorException

Re: [Users] Newbie Host Install issue

2013-01-03 Thread Tom Brown


 I am running fedora 17 and was able to get ovirt running on the machine but I 
 am unable to add that same machine as a 'host'  
 
 I am sure I am missing something simple...
 
 
 2013-01-03 08:05:15,821 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Assigning unique 
 id 00:24:21:e2:1d:36 to Host. (Stage: Get the unique vds id)
 2013-01-03 08:05:15,825 ERROR 
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) /bin/bash: /usr/sbin/dmidecode: No such file or directory
 
 2013-01-03 08:05:15,825 ERROR [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Error: /bin/bash: 
 /usr/sbin/dmidecode: No such file or directory
 . (Stage: Upload Installation script to Host)
 2013-01-03 08:05:15,826 INFO  [org.ovirt.engine.core.bll.InstallerMessages] 
 (pool-3-thread-15) [435d4bf] VDS message: /bin/bash: /usr/sbin/dmidecode: No 
 such file or directory
 2013-01-03 08:05:15,826 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) RunSSHCommand returns true
 2013-01-03 08:05:15,826 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Executing 
 installation stage. (Stage: Upload Installation script to Host)
 2013-01-03 08:05:15,827 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) Uploading file 
 /usr/share/ovirt-engine/scripts/vds_installer.py to 
 /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py on 192.168.1.64
 2013-01-03 08:05:15,828 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) Uploading file 
 /usr/share/ovirt-engine/scripts/vds_installer.py to 
 /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py on 192.168.1.64
 2013-01-03 08:05:19,572 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. successfully done 
 sftp operation ( Stage: Upload Installation script to Host)
 2013-01-03 08:05:19,573 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) return true
 2013-01-03 08:05:19,575 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) Uploading file /tmp/firewall.conf7265875011780210442.tmp 
 to /tmp/firewall.conf.094faf65-68ea-4b9d-a8ce-ed731b33437a on 192.168.1.64
 2013-01-03 08:05:19,576 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) Uploading file /tmp/firewall.conf7265875011780210442.tmp 
 to /tmp/firewall.conf.094faf65-68ea-4b9d-a8ce-ed731b33437a on 192.168.1.64
 2013-01-03 08:05:23,392 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. successfully done 
 sftp operation ( Stage: Upload Installation script to Host)
 2013-01-03 08:05:23,393 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) return true
 2013-01-03 08:05:23,394 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Executing 
 installation stage. (Stage: Running first installation script on Host)
 2013-01-03 08:05:23,395 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Sending SSH 
 Command chmod +x /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py; 
 /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py -c 
 'ssl=true;management_port=54321' -O 'rut3.com' -t 2013-01-03T16:05:15 -f 
 /tmp/firewall.conf.094faf65-68ea-4b9d-a8ce-ed731b33437a -p 80 -b   
 http://virt03.att.net:80/Components/vds/ 
 http://virt03.att.net:80/Components/vds/ 192.168.1.64 
 094faf65-68ea-4b9d-a8ce-ed731b33437a False. (Stage: Running first 
 installation script on Host)
 2013-01-03 08:05:23,397 INFO  
 [org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
 (pool-3-thread-15) Invoking chmod +x 
 /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py; 
 /tmp/vds_installer_094faf65-68ea-4b9d-a8ce-ed731b33437a.py -c 
 'ssl=true;management_port=54321' -O 'rut3.com' -t 2013-01-03T16:05:15 -f 
 /tmp/firewall.conf.094faf65-68ea-4b9d-a8ce-ed731b33437a -p 80 -b   
 http://virt03.att.net:80/Components/vds/ 
 http://virt03.att.net:80/Components/vds/ 192.168.1.64 
 094faf65-68ea-4b9d-a8ce-ed731b33437a False on 192.168.1.64
 2013-01-03 08:05:24,400 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
 (pool-3-thread-15) [435d4bf] Installation of 192.168.1.64. Received message: 
 BSTRAP component='INSTALLER' status='OK' message='Test platform succeeded'/
 BSTRAP component='INSTALLER LIB' status='OK' message='Install library 
 already exists'/
 BSTRAP component='INSTALLER' status='OK' message='vds_bootstrap.py download 
 succeeded'/
 BSTRAP component='RHN_REGISTRATION' status='OK' message='Host properly 
 registered with RHN/Satellite.'/
 . FYI. (Stage: Running 
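The install above fails at the Get the unique vds id stage because 
/usr/sbin/dmidecode is missing on the host being added; the engine runs it 
over SSH to read the host's hardware id. A sketch of the assumed fix on 
Fedora/CentOS before retrying the host install:

```shell
# dmidecode provides the hardware UUID the engine reads during host
# install; installing it (assumed fix) and re-adding the host from the
# engine should get past this stage.
yum install -y dmidecode
```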

Re: [Users] What do you want to see in oVirt next?

2013-01-03 Thread Tom Brown

 
 For me, I'd like to see official rpms for RHEL6/CentOS6. According to
 the traffic on this list quite a lot are using Dreyou's packages.
 
 I'm going to second this strongly! Official support would be very much 
 appreciated. Bonus points for supporting a migration from the dreyou 
 packages. No offense to dreyou, of course, just rather be better supported by 
 the official line on Centos 6.x.
 

and one more for good measure!

 Better support/integration of windows based SPICE clients would also be much 
 appreciated, I have many end users on Windows, and it's been a chore to keep 
 it working so far. This includes the client drivers for windows VMs to 
 support the SPICE display for multiple displays. More of a client side thing, 
 I know, but a desired feature in my environment.
 

I'd like to see a spice client for Mac - i know of various ports but as of now 
on 10.8 i have to use VNC, which is not ideal. I know spice != ovirt, however 
they are very linked at this time.

 Thanks for the continued progress and support as well!
 

indeed!
