[Users] Windows install / activation issues.

2012-06-10 Thread Robert Middleswarth
I am slowly doing my stage two testing now.  I have been finding very 
few issues, which is pretty good since 3.1 is just going into feature 
freeze, but I did find one that is kind of a show stopper and weird for 
me.  I installed Windows 2003 using my Dell CD on my Dell hardware.  On 
both ESXi and XenServer, Windows activates fine because the 
installer sees the Dell BIOS in some way, but under oVirt it doesn't.  
From my research it looks like the feature is called SLP (System Locked 
Preinstallation).  Is there any way to pass the Dell SLP info to the 
installer, as is done under both Xen and VMware ESXi?  I assume it would 
be a libvirt option of some kind; I just can't find it.


Thanks
Robert
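For what it's worth, libvirt can expose the host's SMBIOS information to a
guest, which is roughly how this kind of OEM-string pass-through is done
elsewhere; whether it satisfies Windows 2003's SLP check is untested, and
getting the XML into an oVirt-managed VM would likely need a vdsm hook.
A minimal domain-XML sketch (the hook mechanism and placement are
assumptions, not a verified recipe):

```xml
<!-- Hypothetical fragment: <smbios mode='host'/> asks libvirt/QEMU to
     copy the host's SMBIOS data (vendor strings included) into the
     guest. Not verified against Windows 2003 SLP activation. -->
<domain type='kvm'>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <smbios mode='host'/>
  </os>
</domain>
```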

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Getting ovirt to update VMs to "Up" state sooner

2012-06-10 Thread Andrey F.
On Sun, Jun 10, 2012 at 12:55 AM, Dan Kenigsberg  wrote:
> On Thu, Jun 07, 2012 at 12:26:30PM +0300, Yaniv Kaul wrote:
>> On 06/07/2012 05:23 AM, Andrey F. wrote:
>> >Hi everybody,
>> >
>> >We are trying to optimize our continuous integration system around
>> >our ovirt-management software (i.e. we call into the ovirt apis to
>> >create VMs, do funny stuff with the VMs, etc.). For our tests, we
>> >deployed a VM that boots extremely quickly (2 seconds), hoping
>> >to speed up our tests. Unfortunately, ovirt is not updating the
>> >state of the VM to "Up" as fast as we'd like. VDSM shows the
>> >following in the logs:
>> >
>> >Thread-420807::DEBUG::2012-06-06
>> >22:14:47,662::clientIF::54::vds::(wrapper) [10.225.52.7]::call
>> >*getVmStats *with ('e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb',) {}
>> >Thread-420807::DEBUG::2012-06-06
>> >22:14:47,663::libvirtvm::222::vm.Vm::(_getDiskStats)
>> >vmId=`e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb`::*Disk stats not
>> >available
>> >*Thread-420807::DEBUG::2012-06-06
>> >22:14:47,663::libvirtvm::251::vm.Vm::(_getDiskLatency)
>> >vmId=`e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb`::*Disk latency not
>> >available
>> >*Thread-420807::DEBUG::2012-06-06
>> >22:14:47,663::clientIF::59::vds::(wrapper) return getVmStats with
>> >{'status': {'message': 'Done', 'code': 0}, 'statsList':
>> >[{'status': 'Up', 'username': 'Unknown', 'memUsage': '0',
>
> As you can see, Vdsm reports 'Up' by now. It should have reported
> 'Powering up' for 60 seconds (unless an internal guest agent reports
> that it is up before that).
>
> While the VM is not reported as 'Up', what is its reported state?
>
>>
>> Do you have the guest agent installed on the guest?
>> Y.
>>

Sorry for the slow response, guys. The state is "Powering up" as Dan
described. It is taking me a while to install the guest agent on the
VM because it is Tiny Core Linux. Thanks for the tip. I'll post back
if the agent doesn't work.
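For a CI setup like the one described above, the usual approach is to poll
the oVirt REST API for the VM's state until it leaves 'powering_up'. A
small sketch of the state-extraction part (the exact XML layout of the
`<vm>` document is an assumption here, so the code searches for the
`<state>` element anywhere in the tree):

```python
import xml.etree.ElementTree as ET

def vm_state(vm_xml):
    """Extract the VM state from an oVirt REST API <vm> document.

    The <state> element's nesting varies between API versions, so we
    look for it anywhere in the tree rather than at a fixed path.
    """
    root = ET.fromstring(vm_xml)
    state = root.find(".//state")
    return state.text if state is not None else None

def is_up(vm_xml):
    # Per Dan's explanation, vdsm reports 'powering_up' for up to 60s
    # unless the guest agent checks in earlier.
    return vm_state(vm_xml) == "up"
```

A polling loop would GET `/api/vms/{id}` (with whatever HTTP client and
credentials the setup uses) and sleep/retry until `is_up()` returns True.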


Re: [Users] VDSM errors on fedora17

2012-06-10 Thread Dan Kenigsberg
On Thu, Jun 07, 2012 at 05:35:06PM +0300, Itamar Heim wrote:
> On 06/07/2012 10:10 AM, Haim Ateya wrote:
> >
> >
> >- Original Message -
> >>From: ov...@qip.ru
> >>To: users@ovirt.org
> >>Sent: Thursday, June 7, 2012 9:22:35 AM
> >>Subject: [Users] VDSM errors on fedora17
> >>
> >>
> >>VDSM from http://www.ovirt.org/releases/beta/fedora/17/
> >>(ovirt-engine-dbscripts-3.1.0_0001-0.gitf093e0.fc17.noarch.rpm)
> >>
> >>VDSM started and working but
> >>
> >>1. after start there was error in vdsm .log
> >>
> >>MainThread:: ERROR ::2012-06-07 10:05:23,883::clientIF::151::vds::(_
> >>prepareBindings ) Unable to load the rest server module. Please make
> >>sure it is installed.
> >
> >please install: vdsm-rest-4.9.6-2.gite952471.fc17.noarch.rpm
> 
> should this be an error, or only an INFO that the REST bindings are
> not installed?

I think INFO should have been enough.

Dan.
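The distinction being endorsed here, treating an absent optional module as
expected rather than an error, can be sketched generically (this is an
illustration, not vdsm's actual `_prepareBindings` code):

```python
import importlib
import logging

log = logging.getLogger("vds")

def load_optional_bindings(module_name):
    """Load an optional server binding module.

    Absence is a supported configuration, so it is reported at INFO
    rather than ERROR, and the caller simply gets None.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError:
        log.info("Optional module %s is not installed; continuing "
                 "without it.", module_name)
        return None
```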


Re: [Users] Getting ovirt to update VMs to "Up" state sooner

2012-06-10 Thread Dan Kenigsberg
On Thu, Jun 07, 2012 at 12:26:30PM +0300, Yaniv Kaul wrote:
> On 06/07/2012 05:23 AM, Andrey F. wrote:
> >Hi everybody,
> >
> >We are trying to optimize our continuous integration system around
> >our ovirt-management software (i.e. we call into the ovirt apis to
> >create VMs, do funny stuff with the VMs, etc.). For our tests, we
> >deployed a VM that boots extremely quickly (2 seconds), hoping
> >to speed up our tests. Unfortunately, ovirt is not updating the
> >state of the VM to "Up" as fast as we'd like. VDSM shows the
> >following in the logs:
> >
> >Thread-420807::DEBUG::2012-06-06
> >22:14:47,662::clientIF::54::vds::(wrapper) [10.225.52.7]::call
> >*getVmStats *with ('e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb',) {}
> >Thread-420807::DEBUG::2012-06-06
> >22:14:47,663::libvirtvm::222::vm.Vm::(_getDiskStats)
> >vmId=`e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb`::*Disk stats not
> >available
> >*Thread-420807::DEBUG::2012-06-06
> >22:14:47,663::libvirtvm::251::vm.Vm::(_getDiskLatency)
> >vmId=`e3b250c9-a5ca-42da-bd8f-245a3ef4bbfb`::*Disk latency not
> >available
> >*Thread-420807::DEBUG::2012-06-06
> >22:14:47,663::clientIF::59::vds::(wrapper) return getVmStats with
> >{'status': {'message': 'Done', 'code': 0}, 'statsList':
> >[{'status': 'Up', 'username': 'Unknown', 'memUsage': '0',

As you can see, Vdsm reports 'Up' by now. It should have reported
'Powering up' for 60 seconds (unless an internal guest agent reports
that it is up before that).

While the VM is not reported as 'Up', what is its reported state?

> 
> Do you have the guest agent installed on the guest?
> Y.
> 


Re: [Users] Host CPU type is not compatible with Cluster Properties

2012-06-10 Thread Itamar Heim

On 06/10/2012 10:01 AM, Mohsen Saeedi wrote:

It's Intel Xeon CPU 5130.

Itamar, as you said, I ran that command and pasted the command output below:

[root@kvm01 ~]# vdsClient -s 0 getVdsCaps grep -i flags


sorry, i meant:
vdsClient -s 0 getVdsCaps | grep -i flags

the pipe would have filtered this down to only this line:
>  cpuFlags =
> 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow



anyway:
1. it shows no models on this host recognized by libvirt.
2. as a first step, try to reboot your machine, enable the XD (Execute 
Disable) flag in the BIOS, do a *cold* boot, and check again for

vdsClient -s 0 getVdsCaps | grep -i flags

which should show some model in the list.
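As an illustration of what to look for in that line (assuming vdsm's
convention of appending each libvirt-recognized CPU model to `cpuFlags`
as a pseudo-flag named `model_<Name>`):

```python
def recognized_models(cpu_flags_line):
    """Return the model_* entries from a 'cpuFlags = ...' line.

    vdsm appends each CPU model libvirt recognizes on the host as a
    pseudo-flag such as model_Conroe; an empty result is the symptom
    discussed in this thread.
    """
    _, _, value = cpu_flags_line.partition("=")
    flags = [f.strip() for f in value.split(",") if f.strip()]
    return [f for f in flags if f.startswith("model_")]
```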


 HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:9c1d1149e962'}], 'FC': []}
 ISCSIInitiatorName = iqn.1994-05.com.redhat:9c1d1149e962
 bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0':
{'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves':
[], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
 clusterLevels = ['3.0', '3.1']
 cpuCores = 4
 cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow
 cpuModel = Intel(R) Xeon(R) CPU5130  @ 2.00GHz
 cpuSockets = 2
 cpuSpeed = 1999.793
 emulatedMachines = ['rhel6.2.0', 'pc', 'rhel6.1.0', 'rhel6.0.0',
'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
 guestOverhead = 65
 hooks = {'before_vm_start': {'50_vhostmd': {'md5':
'2aa9ac48ef07de3c94e3428975e9df1a'}, '10_simpleqemu': {'md5':
'2c88a35172c02f2125fafe39c1c95fa9'}}, 'after_vm_destroy': {'50_vhostmd':
{'md5': '47f8d385859e4c3c96113d8ff446b261'}}, 'before_vm_dehibernate':
{'50_vhostmd': {'md5': '2aa9ac48ef07de3c94e3428975e9df1a'}},
'before_vm_migrate_destination': {'50_vhostmd': {'md5':
'2aa9ac48ef07de3c94e3428975e9df1a'}}}
 kvmEnabled = true
 lastClient = 217.219.236.9
 lastClientIface = ovirtmgmt
 management_ip =
 memSize = 7868
 networks = {'ovirtmgmt': {'addr': '217.219.236.9', 'cfg': {'DNS2':
'4.2.2.1', 'DNS1': '8.8.8.8', 'IPADDR': '217.219.236.9', 'GATEWAY':
'217.219.236.20', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK':
'255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'ovirtmgmt', 'TYPE':
'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0',
'stp': 'off', 'bridged': True, 'gateway': '217.219.236.20', 'ports':
['eth2']}}
 nics = {'eth2': {'hwaddr': '00:1b:78:02:5f:a6', 'netmask': '',
'speed': 1000, 'addr': '', 'mtu': '1500'}, 'eth1': {'hwaddr':
'00:23:7d:28:db:ba', 'netmask': '', 'speed': 0, 'addr': '', 'mtu':
'1500'}, 'eth0': {'hwaddr': '00:23:7d:28:db:b8', 'netmask':
'255.255.255.0', 'speed': 1000, 'addr': '192.168.1.241', 'mtu': '1500'}}
 operatingSystem = {'release': '2.el6.centos.7', 'version': '6',
'name': 'oVirt Node'}
 packages2 = {'kernel': {'release': '220.17.1.el6.x86_64',
'buildtime': 1337110297.0, 'version': '2.6.32'}, 'spice-server':
{'release': '5.el6', 'buildtime': '1323304307', 'version': '0.8.2'},
'vdsm': {'release': '0.274.git937a4b7.el6', 'buildtime': '1339052732',
'version': '4.9.6'}, 'qemu-kvm': {'release': '2.209.el6_2.5',
'buildtime': '1336984850', 'version': '0.12.1.2'}, 'libvirt':
{'release': '23.el6_2.8', 'buildtime': '1334928354', 'version':
'0.9.4'}, 'qemu-img': {'release': '2.209.el6_2.5', 'buildtime':
'1336984850', 'version': '0.12.1.2'}}
 reservedMem = 321
 software_revision = 0.274
 software_version = 4.9
 supportedProtocols = ['2.2', '2.3']
 supportedRHEVMs = ['3.0', '3.1']
 uuid = 34313638-3934-435A-4A37-313230394E30_00:1b:78:02:5f:a6
 version_name = Snow Man
 vlans = {}
 vmTypes = ['kvm']

Thanks


Haim Ateya wrote on Sat, 09 Jun 2012 11:36:46 -0400 (EDT):


- Original Message -

From: "Itamar Heim"
To: "Mohsen Saeedi"
Cc:users@ovirt.org
Sent: Saturday, June 9, 2012 5:35:45 PM
Subject: Re: [Users] Host CPU type is not compatible with Cluster Properties

On 06/09/2012 04:02 PM, Mohsen Saeedi wrote:

Hi

I have a problem with the CPU type in oVirt. I have an HP server with an
Intel(R) Xeon(R) CPU 5130 @ 2.00GHz. When I try to add this host as an
oVirt host in the default cluster, I get the error message:

Host CPU type is not compatible with Cluster Properties

Re: [Users] Host CPU type is not compatible with Cluster Properties

2012-06-10 Thread Mohsen Saeedi

It's Intel Xeon CPU 5130.
Itamar, as you said, I ran that command and pasted the command output below:

[root@kvm01 ~]# vdsClient -s 0 getVdsCaps grep -i flags
    HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:9c1d1149e962'}], 'FC': []}
    ISCSIInitiatorName = iqn.1994-05.com.redhat:9c1d1149e962
    bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'},
'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr':
'', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
'hwaddr': '00:00:00:00:00:00'}, 'bond2': {'addr': '', 'cfg': {},
'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu':
'1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}
    clusterLevels = ['3.0', '3.1']
    cpuCores = 4
    cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,tm2,ssse3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow
    cpuModel = Intel(R) Xeon(R) CPU    5130  @ 2.00GHz
    cpuSockets = 2
    cpuSpeed = 1999.793
    emulatedMachines = ['rhel6.2.0', 'pc', 'rhel6.1.0',
'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
    guestOverhead = 65
    hooks = {'before_vm_start': {'50_vhostmd': {'md5':
'2aa9ac48ef07de3c94e3428975e9df1a'}, '10_simpleqemu': {'md5':
'2c88a35172c02f2125fafe39c1c95fa9'}}, 'after_vm_destroy':
{'50_vhostmd': {'md5': '47f8d385859e4c3c96113d8ff446b261'}},
'before_vm_dehibernate': {'50_vhostmd': {'md5':
'2aa9ac48ef07de3c94e3428975e9df1a'}},
'before_vm_migrate_destination': {'50_vhostmd': {'md5':
'2aa9ac48ef07de3c94e3428975e9df1a'}}}
    kvmEnabled = true
    lastClient = 217.219.236.9
    lastClientIface = ovirtmgmt
    management_ip = 
    memSize = 7868
    networks = {'ovirtmgmt': {'addr': '217.219.236.9', 'cfg':
{'DNS2': '4.2.2.1', 'DNS1': '8.8.8.8', 'IPADDR':
'217.219.236.9', 'GATEWAY': '217.219.236.20', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO':
'none', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off',
'bridged': True, 'gateway': '217.219.236.20', 'ports':
['eth2']}}
    nics = {'eth2': {'hwaddr': '00:1b:78:02:5f:a6', 'netmask':
'', 'speed': 1000, 'addr': '', 'mtu': '1500'}, 'eth1':
{'hwaddr': '00:23:7d:28:db:ba', 'netmask': '', 'speed': 0,
'addr': '', 'mtu': '1500'}, 'eth0': {'hwaddr':
'00:23:7d:28:db:b8', 'netmask': '255.255.255.0', 'speed': 1000,
'addr': '192.168.1.241', 'mtu': '1500'}}
    operatingSystem = {'release': '2.el6.centos.7', 'version':
'6', 'name': 'oVirt Node'}
    packages2 = {'kernel': {'release': '220.17.1.el6.x86_64',
'buildtime': 1337110297.0, 'version': '2.6.32'}, 'spice-server':
{'release': '5.el6', 'buildtime': '1323304307', 'version':
'0.8.2'}, 'vdsm': {'release': '0.274.git937a4b7.el6',
'buildtime': '1339052732', 'version': '4.9.6'}, 'qemu-kvm':
{'release': '2.209.el6_2.5', 'buildtime': '1336984850',
'version': '0.12.1.2'}, 'libvirt': {'release': '23.el6_2.8',
'buildtime': '1334928354', 'version': '0.9.4'}, 'qemu-img':
{'release': '2.209.el6_2.5', 'buildtime': '1336984850',
'version': '0.12.1.2'}}
    reservedMem = 321
    software_revision = 0.274
    software_version = 4.9
    supportedProtocols = ['2.2', '2.3']
    supportedRHEVMs = ['3.0', '3.1']
    uuid =
34313638-3934-435A-4A37-313230394E30_00:1b:78:02:5f:a6
    version_name = Snow Man
    vlans = {}
    vmTypes = ['kvm']
Thanks


Haim Ateya wrote on Sat, 09 Jun 2012 11:36:46 -0400 (EDT):

- Original Message -
From: "Itamar Heim" 
To: "Mohsen Saeedi" 
Cc: users@ovirt.org
Sent: Saturday, June 9, 2012 5:35:45 PM
Subject: Re: [Users] Host CPU type is not compatible with Cluster Properties

On 06/09/2012 04:02 PM, Mohsen Saeedi wrote:

Hi

I have a problem with the CPU type in oVirt. I have an HP server with an
Intel(R) Xeon(R) CPU 5130 @ 2.00GHz. When I try to add this host as an
oVirt host in the default cluster, I get the error message:

Host CPU type is not compatible with Cluster Properties

I know this CPU supports VT-x and it's enabled in the BIOS.