Re: [ovirt-users] Dumb question: exclamation mark next to VM?

2016-02-05 Thread Patrick Hurrelmann
On 05.02.2016 08:56, Nicolas Ecarnot wrote:
> Le 04/02/2016 22:35, Colin Coe a écrit :
>> Is the oVirt agent up to date?
>
> yum -y upgrade
> ... [blah blah blah]
> ... reboot
> and then :
>
> # cat /etc/centos-release
> CentOS Linux release 7.2.1511 (Core)
>
> # rpm -qa|grep -i agent
> ovirt-guest-agent-common-1.0.11-1.el7.noarch
> qemu-guest-agent-2.3.0-4.el7.x86_64
>
> Exclamation mark is still there.
>
Hi,

this is probably https://bugzilla.redhat.com/show_bug.cgi?id=1281871.
I'm facing the same on 3.6.1. Maybe time to vote for this bug?

Regards
Patrick

--

Lobster SCM GmbH, Hindenburgstraße 15, D-82343 Pöcking
HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Host loses all network configuration on update to oVirt 3.5.4

2015-09-07 Thread Patrick Hurrelmann
On 07.09.2015 14:44, Patrick Hurrelmann wrote:
> On 07.09.2015 13:54, Dan Kenigsberg wrote:
>> On Mon, Sep 07, 2015 at 11:47:48AM +0200, Patrick Hurrelmann wrote:
>>> On 06.09.2015 11:30, Dan Kenigsberg wrote:
>>>> On Fri, Sep 04, 2015 at 10:26:39AM +0200, Patrick Hurrelmann wrote:
>>>>> Hi all,
>>>>>
>>>>> I just updated my existing oVirt 3.5.3 installation (iSCSI
>>>>> hosted-engine on CentOS 7.1). The engine update went fine. Updating
>>>>> the hosts succeeds until the first reboot. After a reboot the host
>>>>> does not come up again. It is missing all network configuration. All
>>>>> network cfgs in /etc/sysconfig/network-scripts are missing except
>>>>> ifcfg-lo. The host boots up without working networking. Using IPMI
>>>>> and config backups, I was able to restore the lost network configs.
>>>>> Once these are restored and the host is rebooted again, all seems to
>>>>> be back to good. This has now happened to 2 updated hosts (this
>>>>> installation has a total of 4 hosts, so 2 more to debug/try). I'm
>>>>> happy to assist in further debugging.
>>>>>
>>>>> Before updating the second host, I gathered some information. All
>>>>> these hosts have 3 physical nics. One is used for the ovirtmgmt
>>>>> bridge and the other 2 are used for iSCSI storage vlans.
>>>>>
>>>>> ifcfgs before update:
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-em1
>>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>>> DEVICE=em1
>>>>> HWADDR=d0:67:e5:f0:e5:c6
>>>>> BRIDGE=ovirtmgmt
>>>>> ONBOOT=yes
>>>>> NM_CONTROLLED=no
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-lo
>>>>> DEVICE=lo
>>>>> IPADDR=127.0.0.1
>>>>> NETMASK=255.0.0.0
>>>>> NETWORK=127.0.0.0
>>>>> # If you're having problems with gated making 127.0.0.0/8 a martian,
>>>>> # you can change this to something else (255.255.255.255, for example)
>>>>> BROADCAST=127.255.255.255
>>>>> ONBOOT=yes
>>>>> NAME=loopback
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>>> DEVICE=ovirtmgmt
>>>>> TYPE=Bridge
>>>>> DELAY=0
>>>>> STP=off
>>>>> ONBOOT=yes
>>>>> IPADDR=1.2.3.16
>>>>> NETMASK=255.255.255.0
>>>>> GATEWAY=1.2.3.11
>>>>> BOOTPROTO=none
>>>>> DEFROUTE=yes
>>>>> NM_CONTROLLED=no
>>>>> HOTPLUG=no
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-p4p1
>>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>>> DEVICE=p4p1
>>>>> HWADDR=68:05:ca:01:bc:0c
>>>>> ONBOOT=no
>>>>> IPADDR=4.5.7.102
>>>>> NETMASK=255.255.255.0
>>>>> BOOTPROTO=none
>>>>> MTU=9000
>>>>> DEFROUTE=no
>>>>> NM_CONTROLLED=no
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-p3p1
>>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>>> DEVICE=p3p1
>>>>> HWADDR=68:05:ca:18:86:45
>>>>> ONBOOT=no
>>>>> IPADDR=4.5.6.102
>>>>> NETMASK=255.255.255.0
>>>>> BOOTPROTO=none
>>>>> MTU=9000
>>>>> DEFROUTE=no
>>>>> NM_CONTROLLED=no
>>>>>
>>>>> /etc/sysconfig/network-scripts/ifcfg-lo
>>>>>
>>>>>
>>>>> ip link before update:
>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode 
>>>>> DEFAULT
>>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>> 2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN 
>>>>> mode DEFAULT
>>>>> link/ether 46:50:22:7a:f3:9d brd ff:ff:ff:ff:ff:ff
>>>>> 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master 
>>>>> ovirtmgmt state UP mode DEFAULT qlen 1000
>>>>> link/ether d0:67:e5:f0:e5:c6 brd ff:ff:ff:f

Re: [ovirt-users] Host loses all network configuration on update to oVirt 3.5.4

2015-09-07 Thread Patrick Hurrelmann
On 07.09.2015 13:54, Dan Kenigsberg wrote:
> On Mon, Sep 07, 2015 at 11:47:48AM +0200, Patrick Hurrelmann wrote:
>> On 06.09.2015 11:30, Dan Kenigsberg wrote:
>>> On Fri, Sep 04, 2015 at 10:26:39AM +0200, Patrick Hurrelmann wrote:
>>>> Hi all,
>>>>
>>>> I just updated my existing oVirt 3.5.3 installation (iSCSI hosted-engine
>>>> on CentOS 7.1). The engine update went fine. Updating the hosts succeeds
>>>> until the first reboot. After a reboot the host does not come up again.
>>>> It is missing all network configuration. All network cfgs in
>>>> /etc/sysconfig/network-scripts are missing except ifcfg-lo. The host
>>>> boots up without working networking. Using IPMI and config backups, I
>>>> was able to restore the lost network configs. Once these are restored
>>>> and the host is rebooted again, all seems to be back to good. This has
>>>> now happened to 2 updated hosts (this installation has a total of 4
>>>> hosts, so 2 more to debug/try). I'm happy to assist in further debugging.
>>>>
>>>> Before updating the second host, I gathered some information. All these
>>>> hosts have 3 physical nics. One is used for the ovirtmgmt bridge and the
>>>> other 2 are used for iSCSI storage vlans.
>>>>
>>>> ifcfgs before update:
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-em1
>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>> DEVICE=em1
>>>> HWADDR=d0:67:e5:f0:e5:c6
>>>> BRIDGE=ovirtmgmt
>>>> ONBOOT=yes
>>>> NM_CONTROLLED=no
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-lo
>>>> DEVICE=lo
>>>> IPADDR=127.0.0.1
>>>> NETMASK=255.0.0.0
>>>> NETWORK=127.0.0.0
>>>> # If you're having problems with gated making 127.0.0.0/8 a martian,
>>>> # you can change this to something else (255.255.255.255, for example)
>>>> BROADCAST=127.255.255.255
>>>> ONBOOT=yes
>>>> NAME=loopback
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>> DEVICE=ovirtmgmt
>>>> TYPE=Bridge
>>>> DELAY=0
>>>> STP=off
>>>> ONBOOT=yes
>>>> IPADDR=1.2.3.16
>>>> NETMASK=255.255.255.0
>>>> GATEWAY=1.2.3.11
>>>> BOOTPROTO=none
>>>> DEFROUTE=yes
>>>> NM_CONTROLLED=no
>>>> HOTPLUG=no
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-p4p1
>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>> DEVICE=p4p1
>>>> HWADDR=68:05:ca:01:bc:0c
>>>> ONBOOT=no
>>>> IPADDR=4.5.7.102
>>>> NETMASK=255.255.255.0
>>>> BOOTPROTO=none
>>>> MTU=9000
>>>> DEFROUTE=no
>>>> NM_CONTROLLED=no
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-p3p1
>>>> # Generated by VDSM version 4.16.20-0.el7.centos
>>>> DEVICE=p3p1
>>>> HWADDR=68:05:ca:18:86:45
>>>> ONBOOT=no
>>>> IPADDR=4.5.6.102
>>>> NETMASK=255.255.255.0
>>>> BOOTPROTO=none
>>>> MTU=9000
>>>> DEFROUTE=no
>>>> NM_CONTROLLED=no
>>>>
>>>> /etc/sysconfig/network-scripts/ifcfg-lo
>>>>
>>>>
>>>> ip link before update:
>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode 
>>>> DEFAULT
>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>> 2: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN mode 
>>>> DEFAULT
>>>> link/ether 46:50:22:7a:f3:9d brd ff:ff:ff:ff:ff:ff
>>>> 3: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master 
>>>> ovirtmgmt state UP mode DEFAULT qlen 1000
>>>> link/ether d0:67:e5:f0:e5:c6 brd ff:ff:ff:ff:ff:ff
>>>> 4: p3p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state 
>>>> UP mode DEFAULT qlen 1000
>>>> link/ether 68:05:ca:18:86:45 brd ff:ff:ff:ff:ff:ff
>>>> 5: p4p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast state 
>>>> UP mode DEFAULT qlen 1000
>>>> link/ether 68:05:ca:01:bc:0c brd ff:ff:ff:ff:ff:ff
>>>> 7: ovirtmgmt: <BROADCAST,MULTICAST,U

Re: [ovirt-users] Hosted engine: sending ioctl 5401 to a partition!

2014-11-27 Thread Patrick Hurrelmann
On 21.11.2014 22:28, Chris Adams wrote:
 I have set up oVirt with hosted engine, on an iSCSI volume.  On both
 nodes, the kernel logs the following about every 10 seconds:
 
 Nov 21 15:27:49 node8 kernel: ovirt-ha-broker: sending ioctl 5401 to a 
 partition!
 
 Is this a known bug, something that I need to address, etc.?

I'm seeing the same on EL7 hosts with hosted-engine on iSCSI. (on 3.5
and 3.5.1 snapshot)

Regards
Patrick

-- 
Lobster SCM GmbH, Hindenburgstraße 15, D-82343 Pöcking

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Help - Cannot run VM. Invalid time zone for given OS type.

2014-01-03 Thread Patrick Hurrelmann
On 02.01.2014 20:12, Dan Ferris wrote:
 Hi,

 Has anyone run across this error:

 Cannot run VM. Invalid time zone for given OS type.

 The OS type for these VMs is set to Linux Other.  They were all exported
 from an Ovirt 3.2 cluster and are being reimported into an Ovirt 3.3
 cluster.  None of these VMs will boot.  We also can't delete them
 because delete protection is enabled and we can't edit the VM to turn
 off delete protection.

 Does anyone know what this error means exactly and how to fix it?

 Thanks,

 Dan

Hi Dan,

I had the very same problem myself. The fix for it is quite easy, but
requires manual editing of one database table.

In table vm_static, find your non-starting vms (they probably all have
an empty string set as the timezone in column time_zone) and update that
column to NULL. There was a recent change in the timezone code and it
now fails when the timezone is an empty string, but works fine if it is
NULL.
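The UPDATE described above can be sketched like this; an in-memory SQLite table stands in for the engine's PostgreSQL database, and only the table name vm_static and column time_zone come from the post (the sample VM rows are made up):

```python
import sqlite3

# Stand-in for the engine database: a tiny vm_static table with one broken
# VM (empty-string timezone) and one healthy VM.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_static (vm_name TEXT, time_zone TEXT)")
conn.executemany("INSERT INTO vm_static VALUES (?, ?)",
                 [("broken-vm", ""), ("healthy-vm", "Etc/GMT")])

# The actual fix: turn empty-string timezones into NULL.
conn.execute("UPDATE vm_static SET time_zone = NULL WHERE time_zone = ''")

rows = dict(conn.execute("SELECT vm_name, time_zone FROM vm_static"))
print(rows)
```

Against the real engine the same statement would be run through psql; the demo only shows that rows with other timezones are untouched.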

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Guest Agent

2013-11-25 Thread Patrick Hurrelmann
On 25.11.2013 12:13, Gianluca Cecchi wrote:
 On Mon, Nov 25, 2013 at 11:59 AM, Vinzenz Feenstra wrote:

 This should be fixed now :-)
 https://admin.fedoraproject.org/updates/ovirt-guest-agent-1.0.8-5.el5

 
 Hi, I now get this on a CentOS 5.10 x86_64 system
 
 [g.cecchi@c510 ~]$ sudo /sbin/service ovirt-guest-agent start
 Starting ovirt-guest-agent: /bin/chown: `ovirtagent:ovirtagent': invalid group
[  OK  ]
 [g.cecchi@c510 ~]$
 [g.cecchi@c510 ~]$ sudo /sbin/service ovirt-guest-agent status
 ovirt-guest-agent dead but pid file exists
 
 
 in /var/log/ovirt-guest-agent.log
 
 MainThread::INFO::2013-11-22
 17:15:30,579::ovirt-guest-agent::37::root::Starting oVirt guest agent
 MainThread::ERROR::2013-11-22
 17:15:31,251::ovirt-guest-agent::117::root::Unhandled exception in
 oVirt guest agent!
 Traceback (most recent call last):
   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 111, in ?
     agent.run(daemon, pidfile)
   File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 47, in run
     f = file(pidfile, "w")
 IOError: [Errno 13] Permission denied: '/var/run/ovirt-guest-agent.pid'
 
 Gianluca

If you had rhev-guest-agent installed before, then manually remove the
user rhevagent and group rhevagent before installing ovirt-guest-agent.
ovirt-guest-agent reuses the same uid and gid, but fails to add them
during installation when the rhev user and group still exist.
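A quick check along these lines can tell whether the stale accounts are still present before installing. The helper is hypothetical (not part of either agent package); only the account name rhevagent comes from the post:

```python
import grp
import pwd

def stale_accounts(user="rhevagent", group="rhevagent"):
    """Return the leftover user/group entries that still exist locally."""
    leftovers = []
    try:
        pwd.getpwnam(user)
        leftovers.append("user:" + user)
    except KeyError:
        pass
    try:
        grp.getgrnam(group)
        leftovers.append("group:" + group)
    except KeyError:
        pass
    return leftovers

# Anything reported here should be removed (userdel/groupdel) before
# installing ovirt-guest-agent, so the package can recreate the accounts.
print(stale_accounts())
```

On a host that never had rhev-guest-agent this prints an empty list.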

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Low quality of el6 vdsm rpms

2013-11-12 Thread Patrick Hurrelmann
Hi all,

sorry for this rant, but...

I have now tried several times to test the beta 3.3.1 rpms, but most of
the time they can't even be installed. One time they required a future
selinux-policy, although the needed selinux fix was delivered in a much
lower version. Now the rpms have broken requirements: they require
"hostname" instead of "/bin/hostname". This broken requirement is not
included in the vdsm 3.3 branch, so I wonder where it comes from?
Anyway, I proceeded and tried to build vdsm myself once again.
Currently the build fails with (but worked fine some days ago):

/usr/bin/pep8 --exclude=config.py,constants.py --filename '*.py,*.py.in' \
client lib/cpopen/*.py lib/vdsm/*.py lib/vdsm/*.py.in tests
vds_bootstrap vdsm-tool vdsm/*.py vdsm/*.py.in vdsm/netconf
vdsm/sos/vdsm.py.in vdsm/storage vdsm/vdsm vdsm_api vdsm_hooks vdsm_reg
vdsm/storage/imageRepository/formatConverter.py:280:29: E128
continuation line under-indented for visual indent


- How can the quality of the vdsm builds be increased? It is frustrating
to spend time on testing and then the hosts cannot even be installed due
to broken vdsm rpms.
- How are the builds prepared? Is there a Jenkins job that prepares
stable rpms in addition to the nightly job? Or is this totally
handcrafted?
- How can it be that the rpm spec differs between the 3.3 branch and
released rpms? What is the source/branch for el6 vdsm rpms? Maybe I'm
just tracking the wrong source tree...

Thx and Regards
Patrick


-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Low quality of el6 vdsm rpms

2013-11-12 Thread Patrick Hurrelmann
On 12.11.2013 11:07, Assaf Muller wrote:
 Regarding the pep8 breakage - Try updating your pep8.
 

Hi,

thanks for the hint, but according to
http://www.ovirt.org/Vdsm_Developers the latest python-pep8
(python-pep8-1.3.3-3.el6) for el6 is already installed.

And further digging shows that probably
http://gerrit.ovirt.org/#/c/21055/ was not yet merged to 3.3.

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Low quality of el6 vdsm rpms

2013-11-12 Thread Patrick Hurrelmann
On 12.11.2013 11:31, Sandro Bonazzola wrote:
 Il 12/11/2013 10:34, Patrick Hurrelmann ha scritto:
 Hi all,

 sorry for this rant, but...

 I now tried several times to test the beta 3.3.1 rpms, but most of the
 time they can't even be installed.
 
 I'm glad to read you're testing 3.3.1. May I ask you to add yourself to
 http://www.ovirt.org/Testing/Ovirt_3.3.1_testing ?

Will do. I just finished migrating an old 3.1 el6 installation to a
fresh 3.3.1.

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Low quality of el6 vdsm rpms

2013-11-12 Thread Patrick Hurrelmann
On 12.11.2013 19:15, Mike Burns wrote:
 On 11/12/2013 03:51 PM, Douglas Schilling Landgraf wrote:

 Indeed, that's bad. It has been included from a patch only in the Fedora
 koji rawhide build. The other points here have already been answered by
 other developers. Anyway, we have updated the build.

 We would appreciate if you could continue the tests with the new test
 build:

 F19: http://koji.fedoraproject.org/koji/taskinfo?taskID=6172359
 F20: http://koji.fedoraproject.org/koji/taskinfo?taskID=6172521
 EL6: http://koji.fedoraproject.org/koji/taskinfo?taskID=6172612

 This last update includes the following patches:
 - The require hostname fix
 - upgrade-fix-v3ResetMetaVolSize-argument
 - lvm-Do-not-use-udev-cache-for-obtaining-device-list
 - Fix-ballooning-rules-for-computing-the-minimum-avail
 - Avoid-M2Crypto-races
 - spec-declare-we-provide-an-existing-python-cpopen
 - configuring-selinux-allowing-qemu-kvm-to-generate-co

 @Mike, can you please update the testing candidate repo?

 
 Packages updated in beta repo
 

Thanks for the new builds. I will try them tomorrow.

Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Low quality of el6 vdsm rpms

2013-11-12 Thread Patrick Hurrelmann
On 12.11.2013 15:33, Dan Kenigsberg wrote:
 I suspect you are not interested in excuses for each of the failures,
 let us look forwards. My conclusions are:
 - Do not require non-yet-existing rpms. If we require a feature that is
   not yet in Fedora/Centos, we must wait. This is already in effect, see
   for example http://gerrit.ovirt.org/#/c/20248/ and
   http://gerrit.ovirt.org/19545
 
 - There's a Jenkins job to enforce the former requirement on spec
   requirements. David, Sandro, any idea why it is not running these days?
 
 - Keep the docs updated. Our Jenkins slaves have pep8-1.4.6, so we
   should update
   http://www.ovirt.org/Vdsm_Developers#Installing_required_packages
   accordingly - and more importantly, make that version available.
 
   Sandro, who built the python-pep8-1.4.6 that sits on the el6 Jenkins
   slave? Could you make it publicly available? (I can volunteer
   http://danken.fedorapeople.org again)

Yes, exactly. It wasn't my intention to blame anyone. I just wanted to
express how hard testing can be and to point out some areas for future
work ;) I'm looking forward to the recent QA plans.

There has been an impressive overall improvement in the project over the
last year, but there is still room for improvement. Thanks all.

Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-11 Thread Patrick Hurrelmann
 Alright, just verified it. A vm started on a 6.3 host can be
 successfully migrated to the new 6.4 host and then back to any other 6.3
 host. It just won't migrate a vm started on 6.4 to any host running 6.3.
 
 This surprises me. Engine should have used the same emulatedMachine
 value, independent of the initial host. Could you share the vdsm.log
 lines mentioning emulatedMachine in both cases?
 
 Dan.

Hi Dan,

sorry for coming back this late. I checked it, and the default
emulatedMachine is "pc"; the definition of machine "pc" differs between
6.3 and 6.4.

virsh 6.3:
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.3.0</machine>
      <machine canonical='rhel6.3.0'>pc</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>


virsh 6.4:
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.4.0</machine>
      <machine canonical='rhel6.4.0'>pc</machine>
      <machine>rhel6.3.0</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

Regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-07 Thread Patrick Hurrelmann
On 05.03.2013 13:49, Dan Kenigsberg wrote:
 On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
 On 05.03.2013 11:14, Dan Kenigsberg wrote:
 snip

 My version of vdsm as stated by Dreyou:
 v 4.10.0-0.46 (.15), built from
 b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)

 I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged to
 3.1 Branch?

 I applied that patch locally and restarted vdsmd but this does not
 change anything. Supported cpu is still as low as Conroe instead of
 Nehalem. Or is there more to do than patching libvirtvm.py?

 What is libvirt's opinion about your cpu compatibility?

  virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')

 If you do not get "Host CPU is a superset of CPU described in bla", then
 the problem is within libvirt.

 Dan.

 Hi Dan,

 virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
 Host CPU is a superset of CPU described in /dev/fd/63

 So libvirt obviously is fine. Something different would have surprised
 me, as virsh capabilities seemed correct anyway.

 So maybe, just maybe, libvirt has changed their cpu_map, a map that
 ovirt-3.1 had a bug reading.

 Would you care to apply http://gerrit.ovirt.org/5035 to see if this is
 it?

 Dan.

 Hi Dan,

 success! Applying that patch made the cpu recognition work again. The
 cpu type in admin portal shows again as Nehalem. Output from getVdsCaps:

cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,
   mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,
   ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,
   arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
   aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,
   ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,
   dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,
   model_Conroe,model_coreduo,model_core2duo,model_Penryn,
   model_n270
cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2393.769


 I compared libvirt's cpu_map.xml on both CentOS 6.3 and CentOS 6.4 and
 indeed they do differ in large portions. So this patch should probably
 be merged to 3.1 branch? I will contact Dreyou and request that this
 patch will also be included in his builds. I guess otherwise there will
 be quite some fallout after people start picking CentOS 6.4 for oVirt 3.1.

 Thanks again and best regards
 
 Thank you for reporting this issue and verifying its fix.
 
 I'm not completely sure that we should keep maintaining the ovirt-3.1
 branch upstream - but a build destined for el6.4 must have it.
 
 If you believe we should release a fix version for 3.1, please verify
 that http://gerrit.ovirt.org/12723 has no ill effects.
 
 Dan.

I now did additional tests, and the new CentOS 6.4 host failed to start or
migrate any vm. It always boils down to:

Thread-43::ERROR::2013-03-07
15:02:51,950::task::853::TaskManager.Task::(_setError)
Task=`52a9f96f-3dfd-4bcf-8d7a-db14e650b4c1`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2551, in getVolumeSize
    apparentsize = str(volume.Volume.getVSize(sdUUID, imgUUID, volUUID, bs=1))
  File "/usr/share/vdsm/storage/volume.py", line 283, in getVSize
    return mysd.getVolumeClass().getVSize(mysd, imgUUID, volUUID, bs)
  File "/usr/share/vdsm/storage/blockVolume.py", line 101, in getVSize
    return int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs)
  File "/usr/share/vdsm/storage/lvm.py", line 772, in getLV
    lv = _lvminfo.getLv(vgName, lvName)
  File "/usr/share/vdsm/storage/lvm.py", line 567, in getLv
    lvs = self._reloadlvs(vgName)
  File "/usr/share/vdsm/storage/lvm.py", line 419, in _reloadlvs
    self._lvs.pop((vgName, lvName), None)
  File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/share/vdsm/storage/misc.py", line 1219, in acquireContext
    yield self
  File "/usr/share/vdsm/storage/lvm.py", line 404, in _reloadlvs
    lv = makeLV(*fields)
  File "/usr/share/vdsm/storage/lvm.py", line 218, in makeLV
    attrs = _attr2NamedTuple(args[LV._fields.index(attr)], LV_ATTR_BITS, LV_ATTR)
  File "/usr/share/vdsm/storage/lvm.py", line 188, in _attr2NamedTuple
    attrs = Attrs(*values)
TypeError: __new__() takes exactly 9 arguments (10 given)

and followed by:

Thread-43::ERROR::2013-03-07
15:02:51,987::dispatcher::69::Storage.Dispatcher.Protect::(run)
__new__() takes exactly 9 arguments (10 given)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/dispatcher.py", line 61, in run
    result = ctask.prepare(self.func, *args, **kwargs)
  File "/usr/share/vdsm
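The TypeError in these tracebacks is consistent with a field-count mismatch when vdsm builds a namedtuple from lvm's lv_attr string (the lvm2 shipped with EL6.4 reports more attribute bits than older releases). A minimal reproduction under that assumption; the field names and counts here are illustrative, not vdsm's actual ones:

```python
from collections import namedtuple

# A fixed-width attribute tuple, as vdsm builds from lvm's lv_attr bits.
Attrs = namedtuple("Attrs", ("voltype", "permission", "allocation",
                             "minor", "state", "devopen", "target", "zero"))

old_bits = "-wi-ao--"   # as many characters as the tuple has fields
new_bits = "-wi-ao---"  # one extra bit, as a newer lvm2 might report

attrs = Attrs(*old_bits)  # works with the expected width
try:
    Attrs(*new_bits)      # extra bit -> same shape of TypeError as the log
except TypeError as exc:
    print("reproduced:", exc)
```

The fix on vdsm's side would be to parse only the bits it knows about (or widen the tuple) rather than unpacking the raw string.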

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-07 Thread Patrick Hurrelmann
On 07.03.2013 16:18, Dan Kenigsberg wrote:
 On Thu, Mar 07, 2013 at 03:59:27PM +0100, Patrick Hurrelmann wrote:
 On 05.03.2013 13:49, Dan Kenigsberg wrote:
 On Tue, Mar 05, 2013 at 12:32:31PM +0100, Patrick Hurrelmann wrote:
 On 05.03.2013 11:14, Dan Kenigsberg wrote:
 snip

 My version of vdsm as stated by Dreyou:
 v 4.10.0-0.46 (.15), built from
 b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)

 I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged 
 to
 3.1 Branch?

 I applied that patch locally and restarted vdsmd but this does not
 change anything. Supported cpu is still as low as Conroe instead of
 Nehalem. Or is there more to do than patching libvirtvm.py?

 What is libvirt's opinion about your cpu compatibility?

  virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')

 If you do not get "Host CPU is a superset of CPU described in bla", then
 the problem is within libvirt.

 Dan.

 Hi Dan,

 virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
 Host CPU is a superset of CPU described in /dev/fd/63

 So libvirt obviously is fine. Something different would have surprised
 me, as virsh capabilities seemed correct anyway.

 So maybe, just maybe, libvirt has changed their cpu_map, a map that
 ovirt-3.1 had a bug reading.

 Would you care to apply http://gerrit.ovirt.org/5035 to see if this is
 it?

 Dan.

 Hi Dan,

 success! Applying that patch made the cpu recognition work again. The
 cpu type in admin portal shows again as Nehalem. Output from getVdsCaps:

cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,
   mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,
   ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,
   arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
   aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,
   ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,
   dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,
   model_Conroe,model_coreduo,model_core2duo,model_Penryn,
   model_n270
cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
cpuSockets = 1
cpuSpeed = 2393.769


 I compared libvirt's cpu_map.xml on both CentOS 6.3 and CentOS 6.4 and
 indeed they do differ in large portions. So this patch should probably
 be merged to 3.1 branch? I will contact Dreyou and request that this
 patch will also be included in his builds. I guess otherwise there will
 be quite some fallout after people start picking CentOS 6.4 for oVirt 3.1.

 Thanks again and best regards

 Thank you for reporting this issue and verifying its fix.

 I'm not completely sure that we should keep maintaining the ovirt-3.1
 branch upstream - but a build destined for el6.4 must have it.

 If you believe we should release a fix version for 3.1, please verify
 that http://gerrit.ovirt.org/12723 has no ill effects.

 Dan.

 I now did additional tests, and the new CentOS 6.4 host failed to start or
 migrate any vm. It always boils down to:

 Thread-43::ERROR::2013-03-07
 15:02:51,950::task::853::TaskManager.Task::(_setError)
 Task=`52a9f96f-3dfd-4bcf-8d7a-db14e650b4c1`::Unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 861, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 2551, in getVolumeSize
     apparentsize = str(volume.Volume.getVSize(sdUUID, imgUUID, volUUID, bs=1))
   File "/usr/share/vdsm/storage/volume.py", line 283, in getVSize
     return mysd.getVolumeClass().getVSize(mysd, imgUUID, volUUID, bs)
   File "/usr/share/vdsm/storage/blockVolume.py", line 101, in getVSize
     return int(int(lvm.getLV(sdobj.sdUUID, volUUID).size) / bs)
   File "/usr/share/vdsm/storage/lvm.py", line 772, in getLV
     lv = _lvminfo.getLv(vgName, lvName)
   File "/usr/share/vdsm/storage/lvm.py", line 567, in getLv
     lvs = self._reloadlvs(vgName)
   File "/usr/share/vdsm/storage/lvm.py", line 419, in _reloadlvs
     self._lvs.pop((vgName, lvName), None)
   File "/usr/lib64/python2.6/contextlib.py", line 34, in __exit__
     self.gen.throw(type, value, traceback)
   File "/usr/share/vdsm/storage/misc.py", line 1219, in acquireContext
     yield self
   File "/usr/share/vdsm/storage/lvm.py", line 404, in _reloadlvs
     lv = makeLV(*fields)
   File "/usr/share/vdsm/storage/lvm.py", line 218, in makeLV
     attrs = _attr2NamedTuple(args[LV._fields.index(attr)], LV_ATTR_BITS, LV_ATTR)
   File "/usr/share/vdsm/storage/lvm.py", line 188, in _attr2NamedTuple
     attrs = Attrs(*values)
 TypeError: __new__() takes exactly 9 arguments (10 given)

 and followed by:

 Thread-43::ERROR::2013-03-07
 15:02:51,987::dispatcher::69::Storage.Dispatcher.Protect::(run)
 __new__() takes exactly 9 arguments (10 given)
 Traceback (most recent call

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 04.03.2013 21:52, Itamar Heim wrote:
 On 04/03/2013 19:03, Patrick Hurrelmann wrote:
 Hi list,

 I tested the upcoming CentOS 6.4 release with my lab installation of
 oVirt 3.1 and it fails to play well.

 Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type
 Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are
 both version 3.1. oVirt 3.1 was installed via Dreyou's repo.

 In CentOS 6.3 all is fine and the following rpms are installed:

 libvirt.x86_640.9.10-21.el6_3.8
 libvirt-client.x86_64 0.9.10-21.el6_3.8
 libvirt-lock-sanlock.x86_64   0.9.10-21.el6_3.8
 libvirt-python.x86_64 0.9.10-21.el6_3.8
 vdsm.x86_64   4.10.0-0.46.15.el6
 vdsm-cli.noarch   4.10.0-0.46.15.el6
 vdsm-python.x86_644.10.0-0.46.15.el6
 vdsm-xmlrpc.noarch4.10.0-0.46.15.el6
 qemu-kvm.x86_64   2:0.12.1.2-2.295.el6_3.10


 uname -a
 Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6
 03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

 virsh cpu capabilities on 6.3:
  <cpu>
    <arch>x86_64</arch>
    <model>Nehalem</model>
    <vendor>Intel</vendor>
    <topology sockets='1' cores='4' threads='1'/>
    <feature name='rdtscp'/>
    <feature name='pdcm'/>
    <feature name='xtpr'/>
    <feature name='tm2'/>
    <feature name='est'/>
    <feature name='smx'/>
    <feature name='vmx'/>
    <feature name='ds_cpl'/>
    <feature name='monitor'/>
    <feature name='dtes64'/>
    <feature name='pbe'/>
    <feature name='tm'/>
    <feature name='ht'/>
    <feature name='ss'/>
    <feature name='acpi'/>
    <feature name='ds'/>
    <feature name='vme'/>
  </cpu>

 and corresponding cpu features from vdsClient:

 cpuCores = 4
 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
flexpriority,ept,vpid,model_Conroe,model_Penryn,
model_Nehalem
 cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
 cpuSockets = 1
 cpuSpeed = 2394.132


 So the system was updated to 6.4 using the continuous release repo.

 Installed rpms after update to 6.4 (6.3 + CR):

 libvirt.x86_64                0.10.2-18.el6
 libvirt-client.x86_64         0.10.2-18.el6
 libvirt-lock-sanlock.x86_64   0.10.2-18.el6
 libvirt-python.x86_64         0.10.2-18.el6
 vdsm.x86_64                   4.10.0-0.46.15.el6
 vdsm-cli.noarch               4.10.0-0.46.15.el6
 vdsm-python.x86_64            4.10.0-0.46.15.el6
 vdsm-xmlrpc.noarch            4.10.0-0.46.15.el6
 qemu-kvm.x86_64               2:0.12.1.2-2.355.el6_4_4.1


 uname -a
 Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27
 06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

 virsh capabilities on 6.4:
 <cpu>
   <arch>x86_64</arch>
   <model>Nehalem</model>
   <vendor>Intel</vendor>
   <topology sockets='1' cores='4' threads='1'/>
   <feature name='rdtscp'/>
   <feature name='pdcm'/>
   <feature name='xtpr'/>
   <feature name='tm2'/>
   <feature name='est'/>
   <feature name='smx'/>
   <feature name='vmx'/>
   <feature name='ds_cpl'/>
   <feature name='monitor'/>
   <feature name='dtes64'/>
   <feature name='pbe'/>
   <feature name='tm'/>
   <feature name='ht'/>
   <feature name='ss'/>
   <feature name='acpi'/>
   <feature name='ds'/>
   <feature name='vme'/>
 </cpu>

 and corresponding cpu features from vdsClient:

 cpuCores = 4
 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
flexpriority,ept,vpid,model_coreduo,model_Conroe
 cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
 cpuSockets = 1
 cpuSpeed = 2394.098

 Full outputs of virsh capabilities and vdsCaps are attached. The only
 difference I can see is that 6.4 exposes one additional cpu flags (sep)
 and this seems to break the cpu recognition of vdsm.

 Anyone has some hints on how to resolve or debug this further? What more
 information can I provide to help?

 Best regards
 Patrick



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

 
 seems like a vdsm issue - can you check

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 05.03.2013 10:54, Dan Kenigsberg wrote:
 On Tue, Mar 05, 2013 at 10:21:16AM +0100, Patrick Hurrelmann wrote:
 On 04.03.2013 21:52, Itamar Heim wrote:
 On 04/03/2013 19:03, Patrick Hurrelmann wrote:
 snip

Re: [Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-05 Thread Patrick Hurrelmann
On 05.03.2013 11:14, Dan Kenigsberg wrote:
snip

 My version of vdsm as stated by Dreyou:
 v 4.10.0-0.46 (.15), builded from
 b59c8430b2a511bcea3bc1a954eee4ca1c0f4861 (branch ovirt-3.1)

 I can't see that Ia241b09c96fa16441ba9421f61a2f9a417f0d978 was merged to
 3.1 Branch?

 I applied that patch locally and restarted vdsmd but this does not
 change anything. Supported cpu is still as low as Conroe instead of
 Nehalem. Or is there more to do than patching libvirtvm.py?

 What is libvirt's opinion about your cpu compatibility?

  virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')

 If you do not get "Host CPU is a superset of CPU described in bla", then
 the problem is within libvirt.

 Dan.

 Hi Dan,

 virsh -r cpu-compare <(echo '<cpu match="minimum"><model>Nehalem</model><vendor>Intel</vendor></cpu>')
 Host CPU is a superset of CPU described in /dev/fd/63

 So libvirt obviously is fine. Something different would have surprised
 my as virsh capabilities seemed correct anyway.
 
 So maybe, just maybe, libvirt has changed their cpu_map, a map that
 ovirt-3.1 had a bug reading.
 
 Would you care to apply http://gerrit.ovirt.org/5035 to see if this is
 it?
 
 Dan.

Hi Dan,

success! Applying that patch made the cpu recognition work again. The
cpu type in admin portal shows again as Nehalem. Output from getVdsCaps:

   cpuCores = 4
   cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,
  mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,
  ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,
  arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,
  aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,
  ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,
  dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,
  model_Conroe,model_coreduo,model_core2duo,model_Penryn,
  model_n270
   cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
   cpuSockets = 1
   cpuSpeed = 2393.769


I compared libvirt's cpu_map.xml on both CentOS 6.3 and CentOS 6.4 and
indeed they do differ in large portions. So this patch should probably
be merged to 3.1 branch? I will contact Dreyou and request that this
patch will also be included in his builds. I guess otherwise there will
be quite some fallout after people start picking CentOS 6.4 for oVirt 3.1.
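The drift between the two maps can be narrowed down per model with a one-off diff; a rough bash sketch (the snippet files below are illustrative stand-ins for the real cpu_map.xml, typically shipped at /usr/share/libvirt/cpu_map.xml — not its actual contents):

```shell
#!/bin/bash
# Extract the <feature> lines of a given model from a libvirt cpu_map.xml
# and diff them between two copies (e.g. one saved from 6.3, one from 6.4).
# The heredocs below stand in for the real files.
cat > /tmp/cpu_map_63.xml <<'EOF'
<model name='Nehalem'>
  <feature name='sse4.2'/>
  <feature name='popcnt'/>
</model>
EOF
cat > /tmp/cpu_map_64.xml <<'EOF'
<model name='Nehalem'>
  <feature name='sse4.2'/>
  <feature name='popcnt'/>
  <feature name='rdtscp'/>
</model>
EOF

# Print the feature lines inside <model name='$2'> of file $1.
model_features() {
  sed -n "/<model name='$2'>/,/<\/model>/p" "$1" | grep '<feature'
}

# The diff output shows which feature lines were added or removed
# in the newer map for that model.
diff <(model_features /tmp/cpu_map_63.xml Nehalem) \
     <(model_features /tmp/cpu_map_64.xml Nehalem)
```

Running the same extraction against the real 6.3 and 6.4 copies of cpu_map.xml would pinpoint exactly which model definitions changed between the releases.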

Thanks again and best regards
Patrick

-- 
Lobster LOGsuite GmbH, Münchner Straße 15a, D-82319 Starnberg

HRB 178831, Amtsgericht München
Geschäftsführer: Dr. Martin Fischer, Rolf Henrich
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] 6.4 CR: oVirt 3.1 breaks with missing cpu features after update to CentOS 6.4 (6.3 + CR)

2013-03-04 Thread Patrick Hurrelmann
Hi list,

I tested the upcoming CentOS 6.4 release with my lab installation of
oVirt 3.1 and it fails to play well.

Background: freshly installed CentOS 6.3 host in a Nehalem CPU-type
Cluster with 2 other hosts. Storage is iSCSI. Datacenter and Cluster are
both version 3.1. oVirt 3.1 was installed via Dreyou's repo.

In CentOS 6.3 all is fine and the following rpms are installed:

libvirt.x86_64                0.9.10-21.el6_3.8
libvirt-client.x86_64         0.9.10-21.el6_3.8
libvirt-lock-sanlock.x86_64   0.9.10-21.el6_3.8
libvirt-python.x86_64         0.9.10-21.el6_3.8
vdsm.x86_64                   4.10.0-0.46.15.el6
vdsm-cli.noarch               4.10.0-0.46.15.el6
vdsm-python.x86_64            4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch            4.10.0-0.46.15.el6
qemu-kvm.x86_64               2:0.12.1.2-2.295.el6_3.10


uname -a
Linux vh-test1.mydomain.com 2.6.32-279.22.1.el6.x86_64 #1 SMP Wed Feb 6
03:10:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh cpu capabilities on 6.3:
<cpu>
  <arch>x86_64</arch>
  <model>Nehalem</model>
  <vendor>Intel</vendor>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='rdtscp'/>
  <feature name='pdcm'/>
  <feature name='xtpr'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='smx'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='monitor'/>
  <feature name='dtes64'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
  <feature name='vme'/>
</cpu>

and corresponding cpu features from vdsClient:

   cpuCores = 4
   cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,
  cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
  tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
  pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
  dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
  pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
  flexpriority,ept,vpid,model_Conroe,model_Penryn,
  model_Nehalem
   cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
   cpuSockets = 1
   cpuSpeed = 2394.132


So the system was updated to 6.4 using the continuous release repo.

Installed rpms after update to 6.4 (6.3 + CR):

libvirt.x86_64                0.10.2-18.el6
libvirt-client.x86_64         0.10.2-18.el6
libvirt-lock-sanlock.x86_64   0.10.2-18.el6
libvirt-python.x86_64         0.10.2-18.el6
vdsm.x86_64                   4.10.0-0.46.15.el6
vdsm-cli.noarch               4.10.0-0.46.15.el6
vdsm-python.x86_64            4.10.0-0.46.15.el6
vdsm-xmlrpc.noarch            4.10.0-0.46.15.el6
qemu-kvm.x86_64               2:0.12.1.2-2.355.el6_4_4.1


uname -a
Linux vh-test1.mydomain.com 2.6.32-358.0.1.el6.x86_64 #1 SMP Wed Feb 27
06:06:45 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

virsh capabilities on 6.4:
<cpu>
  <arch>x86_64</arch>
  <model>Nehalem</model>
  <vendor>Intel</vendor>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='rdtscp'/>
  <feature name='pdcm'/>
  <feature name='xtpr'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='smx'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='monitor'/>
  <feature name='dtes64'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
  <feature name='vme'/>
</cpu>

and corresponding cpu features from vdsClient:

   cpuCores = 4
   cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,
  cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,
  tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,
  pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,
  dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,
  pdcm,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,
  flexpriority,ept,vpid,model_coreduo,model_Conroe
   cpuModel = Intel(R) Xeon(R) CPU   X3430  @ 2.40GHz
   cpuSockets = 1
   cpuSpeed = 2394.098

Full outputs of virsh capabilities and vdsCaps are attached. The only
difference I can see is that 6.4 exposes one additional cpu flags (sep)
and this seems to break the cpu recognition of vdsm.
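For what it's worth, the extra flag can be isolated mechanically rather than by eyeballing the two lists; a small bash sketch (the flag lists here are shortened stand-ins for the full vdsClient output, not the real values):

```shell
#!/bin/bash
# Abbreviated cpuFlags lists from vdsClient before (6.3) and after (6.4)
# the update; illustrative stand-ins for the full comma-separated output.
flags_63='fpu,vme,de,pse,acpi,ht'
flags_64='fpu,vme,de,pse,sep,acpi,ht'

# Turn each list into sorted one-flag-per-line form and print only the
# flags present in 6.4 but missing in 6.3 (comm needs sorted input).
comm -13 <(tr ',' '\n' <<<"$flags_63" | sort) \
         <(tr ',' '\n' <<<"$flags_64" | sort)
# prints: sep
```

The same comparison run against the full lists would also surface the dropped model_* pseudo-flags, which is what the engine actually keys off.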

Anyone has some hints on how to resolve or debug this further? What more
information can I provide to help?

Best regards
Patrick
vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName': 
'iqn.2012-09.com.mydomain:vh-test1'}], 'FC': []}
ISCSIInitiatorName = iqn.2012-09.com.mydomain:vh-test1
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': 
'', 'slaves': [], 'hwaddr': 

Re: [Users] Local storage domain fails to attach after host reboot

2013-01-25 Thread Patrick Hurrelmann
On 24.01.2013 18:05, Patrick Hurrelmann wrote:
 Hi list,
 
 after rebooting one host (single host dc with local storage) the local
 storage domain can't be attached again. The host was set to maintenance
 mode and all running vms were shutdown prior the reboot.
 
 Vdsm keeps logging the following errors:
 
 Thread-1266::ERROR::2013-01-24
 17:51:46,042::task::853::TaskManager.Task::(_setError)
 Task=`a0c11f61-8bcf-4f76-9923-43e8b9cc1424`::Unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 861, in _run
 return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
 res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 817, in connectStoragePool
 return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID,
 masterVersion, options)
   File "/usr/share/vdsm/storage/hsm.py", line 859, in _connectStoragePool
 res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 641, in connect
 self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1109, in __rebuild
 self.masterDomain = self.getMasterDomain(msdUUID=msdUUID,
 masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1448, in getMasterDomain
 raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
 StoragePoolMasterNotFound: Cannot find master domain:
 'spUUID=c9b86219-0d51-44c3-a7de-e0fe07e2c9e6,
 msdUUID=00ed91f3-43be-41be-8c05-f3786588a1ad'
 
 and
 
 Thread-1268::ERROR::2013-01-24
 17:51:49,073::task::853::TaskManager.Task::(_setError)
 Task=`95b7f58b-afe0-47bd-9ebd-21d3224f5165`::Unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 861, in _run
 return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
 res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 528, in getSpmStatus
 pool = self.getPool(spUUID)
   File "/usr/share/vdsm/storage/hsm.py", line 265, in getPool
 raise se.StoragePoolUnknown(spUUID)
 StoragePoolUnknown: Unknown pool id, pool not connected:
 ('c9b86219-0d51-44c3-a7de-e0fe07e2c9e6',)
 
 while engine logs:
 
 2013-01-24 17:51:46,050 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-43) [49026692] Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand
 return value
  Class Name:
 org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
 mStatus   Class Name:
 org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
 mCode 304
 mMessage  Cannot find master domain:
 'spUUID=c9b86219-0d51-44c3-a7de-e0fe07e2c9e6,
 msdUUID=00ed91f3-43be-41be-8c05-f3786588a1ad'
 
 
 Vdsm and engine logs are also attached. I set the affected host back to
 maintenance. How can I recover from this and attach the storage domain
 again? If more information is needed, please do not hesitate to request it.
 
 This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:
 
 vdsm.x86_64 4.10.0-0.44.14.el6
 vdsm-cli.noarch 4.10.0-0.44.14.el6
 vdsm-python.x86_64  4.10.0-0.44.14.el6
 vdsm-xmlrpc.noarch  4.10.0-0.44.14.el6
 
 Engine:
 
 ovirt-engine.noarch 3.1.0-3.19.el6
 ovirt-engine-backend.noarch 3.1.0-3.19.el6
 ovirt-engine-cli.noarch 3.1.0.7-1.el6
 ovirt-engine-config.noarch  3.1.0-3.19.el6
 ovirt-engine-dbscripts.noarch   3.1.0-3.19.el6
 ovirt-engine-genericapi.noarch  3.1.0-3.19.el6
 ovirt-engine-jbossas711.x86_64  1-0
 ovirt-engine-notification-service.noarch3.1.0-3.19.el6
 ovirt-engine-restapi.noarch 3.1.0-3.19.el6
 ovirt-engine-sdk.noarch 3.1.0.5-1.el6
 ovirt-engine-setup.noarch   3.1.0-3.19.el6
 ovirt-engine-tools-common.noarch3.1.0-3.19.el6
 ovirt-engine-userportal.noarch  3.1.0-3.19.el6
 ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
 ovirt-image-uploader.noarch 3.1.0-16.el6
 ovirt-iso-uploader.noarch   3.1.0-16.el6
 ovirt-log-collector.noarch  3.1.0-16.el6
 
 
 Thanks and regards
 Patrick

Ok, managed to solve it. I force-removed the datacenter and reinstalled
the host. I added a new local storage domain to it and re-created the VMs
(the disk images were moved and renamed from the old, non-working local
storage).

So this host is up and running again.

Regards
Patrick




[Users] Attaching export domain to dc fails

2013-01-24 Thread Patrick Hurrelmann
Hi list,

in one datacenter I'm facing problems with my export storage. The dc is
of type single host with local storage. On the host I see that the nfs
export domain is still connected, but the engine does not show this and
therefore it cannot be used for exports or detached.

Trying to attach the export domain again fails. The following is
logged in vdsm:

Thread-1902159::ERROR::2013-01-24
17:11:45,474::task::853::TaskManager.Task::(_setError)
Task=`4bc15024-7917-4599-988f-2784ce43fbe7`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in attachStorageDomain
pool.attachSD(sdUUID)
  File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 924, in attachSD
dom.attach(self.spUUID)
  File "/usr/share/vdsm/storage/sd.py", line 442, in attach
raise se.StorageDomainAlreadyAttached(pools[0], self.sdUUID)
StorageDomainAlreadyAttached: Storage domain already attached to pool:
'domain=cd23808b-136a-4b33-a80c-f2581eab022d,
pool=d95c53ca-9cef-4db2-8858-bf4937bd8c14'

It won't let me attach the export domain saying that it is already
attached. Manually umounting the export domain on the host results in
the same error on subsequent attach.

This is on CentOS 6.3 using Dreyou's rpms. Installed versions on host:

vdsm.x86_64 4.10.0-0.44.14.el6
vdsm-cli.noarch 4.10.0-0.44.14.el6
vdsm-python.x86_64  4.10.0-0.44.14.el6
vdsm-xmlrpc.noarch  4.10.0-0.44.14.el6

Engine:

ovirt-engine.noarch 3.1.0-3.19.el6
ovirt-engine-backend.noarch 3.1.0-3.19.el6
ovirt-engine-cli.noarch 3.1.0.7-1.el6
ovirt-engine-config.noarch  3.1.0-3.19.el6
ovirt-engine-dbscripts.noarch   3.1.0-3.19.el6
ovirt-engine-genericapi.noarch  3.1.0-3.19.el6
ovirt-engine-jbossas711.x86_64  1-0
ovirt-engine-notification-service.noarch3.1.0-3.19.el6
ovirt-engine-restapi.noarch 3.1.0-3.19.el6
ovirt-engine-sdk.noarch 3.1.0.5-1.el6
ovirt-engine-setup.noarch   3.1.0-3.19.el6
ovirt-engine-tools-common.noarch3.1.0-3.19.el6
ovirt-engine-userportal.noarch  3.1.0-3.19.el6
ovirt-engine-webadmin-portal.noarch 3.1.0-3.19.el6
ovirt-image-uploader.noarch 3.1.0-16.el6
ovirt-iso-uploader.noarch   3.1.0-16.el6
ovirt-log-collector.noarch  3.1.0-16.el6

How can this be recovered to a sane state? If more information is
needed, please do not hesitate to request it.

Thanks and regards
Patrick

Thread-1902157::DEBUG::2013-01-24 17:11:36,039::BindingXMLRPC::156::vds::(wrapper) [xxx.xxx.xxx.190]
Thread-1902157::DEBUG::2013-01-24 17:11:36,039::task::588::TaskManager.Task::(_updateState) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::moving from state init -> state preparing
Thread-1902157::INFO::2013-01-24 17:11:36,039::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='----', conList=[{'connection': 'xxx.xxx.xxx.191:/data/ovirt-export-fra', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 'id': '5207d7da-2655-4843-b126-3252e38beafa', 'port': ''}], options=None)
Thread-1902157::INFO::2013-01-24 17:11:36,039::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '5207d7da-2655-4843-b126-3252e38beafa'}]}
Thread-1902157::DEBUG::2013-01-24 17:11:36,039::task::1172::TaskManager.Task::(prepare) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::finished: {'statuslist': [{'status': 0, 'id': '5207d7da-2655-4843-b126-3252e38beafa'}]}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::task::588::TaskManager.Task::(_updateState) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::moving from state preparing -> state finished
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1902157::DEBUG::2013-01-24 17:11:36,040::task::978::TaskManager.Task::(_decref) Task=`a686738c-ff6f-43ad-8966-8ec158dc7c2e`::ref 0 aborting False
Thread-1902158::DEBUG::2013-01-24 17:11:36,057::BindingXMLRPC::156::vds::(wrapper) [xxx.xxx.xxx.190]
Thread-1902158::DEBUG::2013-01-24 

Re: [Users] Attaching export domain to dc fails

2013-01-24 Thread Patrick Hurrelmann
On 24.01.2013 18:08, Dafna Ron wrote:
 Before you do this be sure that the export domain is *really* *not
 attached to* *any* *DC*!
 if you look under the storage main tab it should appear as unattached or
 it should not be in the setup or under a DC in any other setup at all.
 
 1. go to the export domain's metadata located under the domain dom_md
 (example)
 
 72ec1321-a114-451f-bee1-6790cbca1bc6/dom_md/metadata
 
 2. (backup the metadata before you edit it!) 
 vim the metadata and remove the pool's uuid value from POOL_UUID field
 leaving: 'POOL_UUID='
 also remove the SHA_CKSUM (remove entire entry - not just the value)
 
 so for example my metadata was this:
 
 CLASS=Backup
 DESCRIPTION=BlaBla
 IOOPTIMEOUTSEC=1
 LEASERETRIES=3
 LEASETIMESEC=5
 LOCKPOLICY=
 LOCKRENEWALINTERVALSEC=5
 MASTER_VERSION=0
 POOL_UUID=cee3603b-2308-4973-97a8-480f7d6d2132
 REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
 ROLE=Regular
 SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
 TYPE=NFS
 VERSION=0
 _SHA_CKSUM=95bf1c9b8a75b077fe65d782e86b4c4c331a765d
 
 
 it will be this:
 
 CLASS=Backup
 DESCRIPTION=BlaBla
 IOOPTIMEOUTSEC=1
 LEASERETRIES=3
 LEASETIMESEC=5
 LOCKPOLICY=
 LOCKRENEWALINTERVALSEC=5
 MASTER_VERSION=0
 POOL_UUID=
 REMOTE_PATH=BlaBla.com:/volumes/bla/BlaBla
 ROLE=Regular
 SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
 TYPE=NFS
 VERSION=0
 
 
 you should be able to attach the domain after this change.
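The two metadata edits described above can also be applied with sed after taking a backup; a cautious sketch against a throwaway copy (the sample file below mirrors the example metadata in this mail, not a live domain — on a real domain, operate on the actual dom_md/metadata path and only after verifying the domain is unattached):

```shell
#!/bin/bash
# Work on a throwaway copy that mirrors the example metadata above.
md=/tmp/metadata
cat > "$md" <<'EOF'
CLASS=Backup
POOL_UUID=cee3603b-2308-4973-97a8-480f7d6d2132
ROLE=Regular
SDUUID=72ec1321-a114-451f-bee1-6790cbca1bc6
_SHA_CKSUM=95bf1c9b8a75b077fe65d782e86b4c4c331a765d
EOF

cp "$md" "$md.bak"                 # always back up first
sed -i -e 's/^POOL_UUID=.*/POOL_UUID=/' \
       -e '/^_SHA_CKSUM=/d' "$md"  # blank the pool uuid, drop the checksum
cat "$md"
```

Note the `-i` in-place flag is GNU sed behavior (fine on CentOS); on other platforms redirect to a temp file instead.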
 

Uh, that was fast! Thank you very much. Problem solved. The export domain
is back to life :)

This mailing list is really incredible and so valuable.

Regards
Patrick



Re: [Users] What do you want to see in oVirt next?

2013-01-09 Thread Patrick Hurrelmann
On 09.01.2013 15:48, Joern Ott wrote:
 
 
 -Original Message-
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf
 Of Rick Beldin
 Sent: Dienstag, 8. Januar 2013 15:54
 To: Itamar Heim
 Cc: users@ovirt.org
 Subject: Re: [Users] What do you want to see in oVirt next?



 On Tue 08 Jan 2013 04:41:01 AM EST, Itamar Heim wrote:
 On 01/07/2013 06:11 PM, Rick Beldin wrote:
 - cleaner work-flow in creating and associating storage, especially
 NFS


 Some of this is no doubt my newbie-ness to ovirt.  Most of my comments
 below have to do with the manager.

 I started playing with RHEV 3.1 since I am responsible for delivering support
 on it but just yesterday started playing with Fedora 17 and all-in-one for 
 ease
 of setup.  I will try and document the specifics, but the things I recall 
 were
 things like:

 - inability to setup NFS storage on the cluster until I had added a host.  I 
 think
   I missed some key concept here, but my feeling was that I
 would/could/should
   setup the manager (engine) first and then add virtual hosts.
 Instead, there
   seems to be a procedural step.

 - along those lines the UI could do more to guide a user, a first-time wizard
   that would guide you through cluster and datacenter setups that are
 independent
   of the hosts.  Guide Me is a good start, but perhaps it needs some
 expansion.

 - better error reporting from engine back to admin user during admin
 operations.


 - here's an example.  Just installed AIO on Fedora 17.  After going through
   everything, it says 'Install Failed'.   The Event entry has a
 correlation id,
   which can be used to figure out what went wrong.  (I guess).  It seems like
   there could be more information provided to the admin on what to do next
 aside
   install again?  Tooltip on what to do with correlation id? 
 
 Whenever you report a problem here in the list, you get asked for engine and 
 vdsm logs. So, these logs are essential and a way to access (at least the 
 relevant parts) via the GUI would be perfect. Ideally, every task should have 
 a unique ID and this ID should show up in the engine logs as well as the vdsm 
 logs on the nodes in a way that they could easily filtered.
 
 My dream would be a message like Install failed as a clickable link which 
 then opens a log viewer and shows the engine log filtered by this ID as well 
 as the vdsm log from the involved node filtered by this ID.
 
 Kind regards
 Jörn

Very good idea, I like it very much!

Regards
Patrick



Re: [Users] What do you want to see in oVirt next?

2013-01-06 Thread Patrick Hurrelmann
On 03.01.2013 17:25, Patrick Hurrelmann wrote:
 On 03.01.2013 17:08, Itamar Heim wrote:
 Hi Everyone,

 as we wrap oVirt 3.2, I wanted to check with oVirt users on what they 
 find good/useful in oVirt, and what they would like to see 
 improved/added in coming versions?

 Thanks,
 Itamar
 
 For me, I'd like to see official rpms for RHEL6/CentOS6. According to
 the traffic on this list quite a lot are using Dreyou's packages.
 
 But I'm really looking forward to oVirt 3.2 (reading all those commits
 whets my appetite) :)
 
 Regards
 Patrick

And after thinking a bit more about it, this is what I like to see in
addition:
- clustered engine (eliminate this SPOF)
- when FreeIPA is used for authentication, make use of its CA and
generate certificates using ipa-getcert

Regards
Patrick



Re: [Users] What do you want to see in oVirt next?

2013-01-03 Thread Patrick Hurrelmann
On 03.01.2013 17:08, Itamar Heim wrote:
 Hi Everyone,
 
 as we wrap oVirt 3.2, I wanted to check with oVirt users on what they 
 find good/useful in oVirt, and what they would like to see 
 improved/added in coming versions?
 
 Thanks,
 Itamar

For me, I'd like to see official rpms for RHEL6/CentOS6. According to
the traffic on this list quite a lot are using Dreyou's packages.

But I'm really looking forward to oVirt 3.2 (reading all those commits
whets my appetite) :)

Regards
Patrick



Re: [Users] What do you want to see in oVirt next?

2013-01-03 Thread Patrick Hurrelmann
On 03.01.2013 23:13, Moran Goldboim wrote:
 On 01/03/2013 07:42 PM, Darrell Budic wrote:

 On Jan 3, 2013, at 10:25 AM, Patrick Hurrelmann wrote:

 On 03.01.2013 17:08, Itamar Heim wrote:
 Hi Everyone,

 as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
 find good/useful in oVirt, and what they would like to see
 improved/added in coming versions?

 Thanks,
Itamar

 For me, I'd like to see official rpms for RHEL6/CentOS6. According to
 the traffic on this list quite a lot are using Dreyou's packages.

 I'm going to second this strongly! Official support would be very much
 appreciated. Bonus points for supporting a migration from the dreyou
 packages. No offense to dreyou, of course, just rather be better
 supported by the official line on Centos 6.x.
 
 EL6 rpms are planned to be delivered with 3.2 GA version, and nightly
 builds from there on.
 hopefully we can push it to 3.2 beta.
 
 Moran.
 

Great news! Thanks a lot.

Regards
Patrick



Re: [Users] VM stuck in state Not Responding

2012-09-28 Thread Patrick Hurrelmann
 Is there anything I can do to reset that stuck state and bring the VM
 back to live?

 Best regards
 Patrick

 
 try moving all vm's from that host (migrate them to the other hosts), 
 then fence it (or shutdown manually and right click, confirm shutdown) 
 to try and release the vm from it.

In the web interface it is shown with the icon for stopped VMs and an
empty host field, but its status shows Not Responding. So the stuck VM is
not assigned to any host? All 3 hosts and the engine itself have already
been rebooted since the storage crash (the hosts one by one, each going
to maintenance first).

Regards
Patrick

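[Editor's note] The advice quoted in the thread above — migrate every VM off the suspect host, then fence it (or manually confirm shutdown) so the engine can release stuck VMs — boils down to a small piece of ordering logic. A minimal sketch follows; the `Host` class and the callback signatures are hypothetical stand-ins for illustration, not the oVirt SDK:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Host:
    """Hypothetical stand-in for a hypervisor host; not an oVirt API type."""
    name: str
    vms: List[str] = field(default_factory=list)

def evacuate_and_fence(host: Host,
                       migrate: Callable[[str], bool],
                       fence: Callable[[Host], bool]) -> bool:
    """Migrate all VMs away, then fence the host.

    Returns True only when the host was actually fenced; if any VM fails
    to migrate, the host is left untouched (fencing a host that still
    runs VMs would kill them).
    """
    # Keep only the VMs whose migration failed, e.g. for lack of capacity.
    remaining = [vm for vm in host.vms if not migrate(vm)]
    host.vms = remaining
    if remaining:
        return False  # never fence while VMs are still on the host
    return fence(host)
```

The point of the ordering is the guard: fencing comes strictly after a successful evacuation, mirroring "move all VMs from that host, then fence it".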


Re: [Users] SPM not selected after host failed

2012-09-20 Thread Patrick Hurrelmann
On 20.09.2012 16:13, Itamar Heim wrote:
> On 09/20/2012 05:09 PM, Patrick Hurrelmann wrote:
>> On 20.09.2012 16:01, Itamar Heim wrote:
>>>> Power management is configured for both nodes. But this might be the
>>>> problem: we use the integrated IPMI over LAN power management - and
>>>> if I pull the plug on the machine the power management becomes
>>>> unavailable, too.
>>>>
>>>> Could this be the problem?
>>>
>>> yes... no auto recovery if can't verify node was fenced.
>>> for your tests, maybe power off the machine as opposed to
>>> cutting the power?
>>
>> Ugh, this is ugly. I'm evaluating oVirt currently myself and have
>> already suffered from a dead PSU that took down IPMI as well. I really
>> don't want to imagine what happens if the host with the SPM goes down
>> due to a power failure :/ Is there really no other way? I guess multiple
>> fence devices are not possible right now. E.g. first try to fence via
>> IPMI and if that fails pull the plug via an APC MasterSwitch. Any thoughts?
>
> SPM would be down until you manually confirm shutdown in this case.
> SPM doesn't affect running VMs on NFS/posix/local domains, and only
> thinly provisioned VMs on block storage (iSCSI/FC).
>
> question, if no power, would the APC still work?
> why not just use it to fence instead of IPMI?
>
> (and helping us close the gap on support for multiple fence devices
> would be great)
>

Ok, maybe I wasn't precise enough. By "power failure" I actually meant a
broken PSU on the server, and I won't be running any local/NFS storage,
only iSCSI.
But you're right that in such a situation fencing via the APC would be
sufficient. I was mixing up my different environments. My lab only has
IPMI right now, while the live environment will most likely have APC as
well.

Regards
Patrick

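[Editor's note] The multiple-fence-device idea discussed above (try IPMI first, then fall back to a switched PDU such as the APC MasterSwitch) is essentially an ordered fallback chain. A minimal sketch under that assumption follows; the `FenceAgent` protocol and the agent names are illustrative, not oVirt's actual fencing API:

```python
from typing import Callable, Optional, Sequence, Tuple

# An agent returns (acted, confirmed_off). Illustrative protocol only --
# real fence agents (fence_ipmilan, fence_apc, ...) have richer interfaces.
FenceAgent = Callable[[str], Tuple[bool, bool]]

def fence_with_fallback(host: str,
                        agents: Sequence[Tuple[str, FenceAgent]]) -> Optional[str]:
    """Try fence agents in order; return the name of the first one that
    both acted and confirmed the host is powered off.

    Returns None when every agent fails -- at that point the only safe
    option left is a manual "confirm host has been shut down", exactly
    as described in the thread above.
    """
    for name, agent in agents:
        acted, confirmed = agent(host)
        if acted and confirmed:
            return name
    return None
```

With a dead PSU the BMC loses power too, so the IPMI agent fails and the chain falls through to the externally powered PDU — which is why a secondary fence device closes the "no auto recovery" gap.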