Re: [ovirt-users] vdsm problem

2017-01-19 Thread Стаценко Константин Юрьевич
Fixed. This turned out to be a Docker/SELinux problem, in case anyone is interested…
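
For anyone hitting the same symptom: the init log quoted further down reports
"Modules sebool are not configured" and points at vdsm-tool. A minimal sketch of
the recovery, assuming sebool really is the only unconfigured module:

vdsm-tool configure --module sebool   # or: vdsm-tool configure --force
systemctl restart vdsmd

The --force variant stops and restarts the affected services itself so the new
configuration is picked up, as the init script output below explains.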

From: Ilya Fedotov [mailto:kosh...@gmail.com]
Sent: Friday, January 20, 2017 9:14 AM
To: Стаценко Константин Юрьевич 
Cc: users 
Subject: Re: [ovirt-users] vdsm problem

Dear Konstantin,




 Please read the installation instructions first.
 Where did you see CentOS 7.3?


 with br, Ilya





oVirt 4.0.6 Release Notes

The oVirt Project is pleased to announce the availability of 4.0.6 Release as 
of January 10, 2017.

oVirt is an open source alternative to VMware™ vSphere™, and provides an 
awesome KVM management interface for multi-node virtualization. This release is 
available now for Red Hat Enterprise Linux 7.2, CentOS Linux 7.2 (or similar).

To find out more about features which were added in previous oVirt releases,
check out the previous versions' release notes. For a general overview of
oVirt, read the Quick Start Guide and the About oVirt page.

Updated documentation has been provided by our downstream Red Hat
Virtualization

2017-01-20 9:02 GMT+03:00 Стаценко Константин Юрьевич 
>:
Anyone ?

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of 
Стаценко Константин Юрьевич
Sent: Thursday, January 19, 2017 5:08 PM
To: users >
Subject: [ovirt-users] vdsm problem

Hello!
Today, after installing some updates, vdsmd suddenly died. Running oVirt 4.0.6
on CentOS 7.3.
It cannot start any more:

# journalctl -xe

-- Subject: Unit vdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has begun starting up.
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running mkdirs
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running configure_coredump
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running configure_vdsm_logs
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running wait_for_network
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running run_init_hooks
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running upgraded_version_check
Jan 19 18:03:31 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: Running check_is_configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
sasldblistusers2[20115]: DIGEST-MD5 common mech free
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: Error:
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: One of the modules is not configured to work with 
VDSM.
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: To configure the module use the following:
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: 'vdsm-tool configure [--module module-name]'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: If all modules are not configured try to use:
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: 'vdsm-tool configure --force'
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: (The force flag will stop the module's service and 
start it
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: afterwards automatically to load the new 
configuration.)
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: Current revision of multipath.conf detected, 
preserving
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: libvirt is already configured for vdsm
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: Modules sebool are not configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru 
vdsmd_init_common.sh[20079]: vdsm: stopped during execute check_is_configured 
task (task returned with error code 1).
Jan 19 18:03:32 

Re: [ovirt-users] Monitoring disk I/O

2017-01-19 Thread Ernest Beinrohr

On 19.01.2017 21:42, Michael Watters wrote:

Does ovirt have any way to monitor disk I/O for each VM or disk in a
storage pool?  I am receiving disk latency warnings and would like to
know which VMs are causing the most disk I/O.

We have homebrew per-VM I/O monitoring: libvirt puts each VM in its own
cgroup, which records CPU and I/O statistics. It's a little tricky to keep
following a VM while it migrates, but once that is handled we have CPU and
I/O graphs for each VM.


Basically, on each hypervisor we periodically poll the cgroup info for all of
its VMs:

# one record per VM: HOST:vm:read-ops:write-ops:bytes-read:bytes-written:cpu-ns
for vm in $vms
do
(
echo -n "$HOST:$vm:"
# libvirt escapes '-' as 'x2d' in machine.slice scope names; adjust the VM name to match
vm=${vm/-/x2d}

# read requests (253:* are the device-mapper devices backing the VM disks)
egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced | grep "^253:.*Read" | cut -f3 -d " " | paste -sd+ | bc

echo -n ":"
# write requests
egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_serviced | grep "^253:.*Write" | cut -f3 -d " " | paste -sd+ | bc

echo -n ":"
# bytes read
egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes | grep "^253:.*Read" | cut -f3 -d " " | paste -sd+ | bc

echo -n ":"
# bytes written
egrep -v "$IGNORED_REGEX" /sys/fs/cgroup/blkio/machine.slice/machine-qemu*$vm*/blkio.throttle.io_service_bytes | grep "^253:.*Write" | cut -f3 -d " " | paste -sd+ | bc

echo -n ":"
# cumulative CPU time in nanoseconds
cat /sys/fs/cgroup/cpuacct/machine.slice/*$vm*/cpuacct.usage
) | tr -d '\n'   # collapse the sub-shell output onto a single line
echo ""
done

and then we MRTG it.
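
The script prints one colon-separated record per VM, which is easy to split in
the collector. A hypothetical sample line (host and VM name are made up, the
fields follow the counters read above):

hv01:myvm:182345:99211:7340032000:2147483648:5123456789012
#   host:vm :read-ops:write-ops:bytes-read:bytes-written:cpu-ns

Note that the blkio.throttle counters and cpuacct.usage are cumulative, so the
grapher has to take deltas between polls (MRTG's counter handling does this).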
--
Ernest Beinrohr, AXON PRO
Ing, RHCE, RHCVA, LPIC, VCA

+421-2-62410360 +421-903-482603
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] vdsm problem

2017-01-19 Thread Ilya Fedotov
Dear Konstantin,




 Please read the installation instructions first.
 Where did you see CentOS 7.3?


 with br, Ilya





oVirt 4.0.6 Release Notes

The oVirt Project is pleased to announce the availability of 4.0.6 Release
as of January 10, 2017.

oVirt is an open source alternative to VMware™ vSphere™, and provides an
awesome KVM management interface for multi-node virtualization. This
release is available now for Red Hat Enterprise Linux 7.2, CentOS Linux 7.2
(or similar).

To find out more about features which were added in previous oVirt
releases, check out the previous versions' release notes. For a general
overview of oVirt, read the Quick Start Guide and the About oVirt page.

Updated documentation has been provided by our downstream Red Hat
Virtualization


2017-01-20 9:02 GMT+03:00 Стаценко Константин Юрьевич <
statsenko...@interrao.ru>:

> Anyone ?
>
>
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> Behalf Of *Стаценко Константин Юрьевич
> *Sent:* Thursday, January 19, 2017 5:08 PM
> *To:* users 
> *Subject:* [ovirt-users] vdsm problem
>
>
>
> Hello!
>
> Today, after installing some of the updates, vdsmd suddenly dies. Running
> oVirt 4.0.6 CentOS 7.3.
>
> It cannot start any more:
>
>
>
> *# journalctl -xe *
>
>
>
> -- Subject: Unit vdsmd.service has begun start-up
>
> -- Defined-By: systemd
>
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>
> --
>
> -- Unit vdsmd.service has begun starting up.
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running mkdirs
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running configure_coredump
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running configure_vdsm_logs
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running wait_for_network
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running run_init_hooks
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running upgraded_version_check
>
> Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: Running check_is_configured
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru sasldblistusers2[20115]:
> DIGEST-MD5 common mech free
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> Error:
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: One
> of the modules is not configured to work with VDSM.
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: To
> configure the module use the following:
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> 'vdsm-tool configure [--module module-name]'.
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: If
> all modules are not configured try to use:
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> 'vdsm-tool configure --force'
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: (The
> force flag will stop the module's service and start it
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> afterwards automatically to load the new configuration.)
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> Current revision of multipath.conf detected, preserving
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> libvirt is already configured for vdsm
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> Modules sebool are not configured
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]:
> vdsm: stopped during execute check_is_configured task (task returned with
> error code 1).
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service:
> control process exited, code=exited status=1
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Failed to start
> Virtual Desktop Server Manager.
>
> -- Subject: Unit vdsmd.service has failed
>
> -- Defined-By: systemd
>
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>
> --
>
> -- Unit vdsmd.service has failed.
>
> --
>
> -- The result is failed.
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Dependency failed for
> MOM instance configured for VDSM purposes.
>
> -- Subject: Unit mom-vdsm.service has failed
>
> -- Defined-By: systemd
>
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
>
> --
>
> -- Unit mom-vdsm.service has failed.
>
> --
>
> -- The result is dependency.
>
> Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Job
> mom-vdsm.service/start failed with result 

Re: [ovirt-users] vdsm problem

2017-01-19 Thread Стаценко Константин Юрьевич
Anyone ?

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Стаценко Константин Юрьевич
Sent: Thursday, January 19, 2017 5:08 PM
To: users 
Subject: [ovirt-users] vdsm problem

Hello!
Today, after installing some updates, vdsmd suddenly died. Running oVirt 4.0.6
on CentOS 7.3.
It cannot start any more:

# journalctl -xe

-- Subject: Unit vdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has begun starting up.
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running mkdirs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running configure_coredump
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running configure_vdsm_logs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running wait_for_network
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running run_init_hooks
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running upgraded_version_check
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running check_is_configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru sasldblistusers2[20115]: DIGEST-MD5 
common mech free
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Error:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: One of the 
modules is not configured to work with VDSM.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: To 
configure the module use the following:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool 
configure [--module module-name]'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: If all 
modules are not configured try to use:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool 
configure --force'
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: (The force 
flag will stop the module's service and start it
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: afterwards 
automatically to load the new configuration.)
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Current 
revision of multipath.conf detected, preserving
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: libvirt is 
already configured for vdsm
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Modules 
sebool are not configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
stopped during execute check_is_configured task (task returned with error code 
1).
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service: control 
process exited, code=exited status=1
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Failed to start Virtual 
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has failed.
--
-- The result is failed.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Dependency failed for MOM 
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has failed.
--
-- The result is dependency.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Job mom-vdsm.service/start 
failed with result 'dependency'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Unit vdsmd.service entered 
failed state.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service failed.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Cannot add dependency job 
for unit microcode.service, ignoring: Invalid argument
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service holdoff time 
over, scheduling restart.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Cannot add dependency job 
for unit microcode.service, ignoring: Unit is not loaded properly: Invalid 
argumen
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: start request repeated too 
quickly for vdsmd.service
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Failed to start Virtual 
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has failed.
--
-- The result is failed.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Dependency failed for MOM 
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has failed.
--
-- The result is dependency.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Job 

Re: [ovirt-users] Gluster storage expansion

2017-01-19 Thread knarra

On 01/19/2017 09:15 PM, Goorkate, B.J. wrote:

Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes that are not members of the
replica-3 storage domain?
Or do I need another node to make a second replica-3 Gluster storage domain?
Since you have 5 nodes in your cluster, you could add one more node and
create a second replica-3 Gluster storage domain out of the three nodes that
are not members of the existing replica-3 storage domain; a sketch of the
volume creation follows below.
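
A minimal sketch of what that could look like on the command line, assuming
the two spare nodes plus one new node are called node4, node5 and node6 and
each has a brick at /gluster/brick1 (placeholder names, not taken from this
thread):

gluster volume create data2 replica 3 \
    node4:/gluster/brick1/data2 \
    node5:/gluster/brick1/data2 \
    node6:/gluster/brick1/data2
gluster volume start data2

The new volume can then be added in the engine as a second Gluster storage
domain.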


In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan



--

This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.

Please consider the environment before printing this e-mail.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Monitoring disk I/O

2017-01-19 Thread Michael Watters
Thanks.  I can monitor the VMs using snmp or collectd but what I'd like
is to have I/O use shown in the engine just like memory and CPU use are.


On 1/19/17 4:21 PM, Markus Stockhausen wrote:
> Hi there ...
>
> we are running a simple custom script that collects data of qemu 
> on the nodes via /proc etc. Storing this into RRD databases and
> doing a little LAMP scripting you get the attached result.
>
> Best regards.
>
> Markus
>
> 
> From: users-boun...@ovirt.org [users-boun...@ovirt.org] on behalf of
> Michael Watters [watte...@watters.ws]
> Sent: Thursday, 19 January 2017 21:42
> To: users
> Subject: [ovirt-users] Monitoring disk I/O
>
> Does ovirt have any way to monitor disk I/O for each VM or disk in a
> storage pool?  I am receiving disk latency warnings and would like to
> know which VMs are causing the most disk I/O.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot move pointer to top of console view in noVNC

2017-01-19 Thread George Chlipala
The subject line says it all.  When using noVNC, if I move the mouse to the
top of the console view the pointer stops short.  So if I have a Windows VM
with a maximized window, I cannot click on its close button.

oVirt Engine Version: 4.0.5.5-1.el7.centos
browser: Google Chrome 55.0.2883.87 m

I have also tried using Firefox (50.1.0) and experience the same issue.

Any help in this matter would be greatly appreciated.

George Chlipala
gchl...@uic.edu
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Monitoring disk I/O

2017-01-19 Thread Michael Watters
Does ovirt have any way to monitor disk I/O for each VM or disk in a
storage pool?  I am receiving disk latency warnings and would like to
know which VMs are causing the most disk I/O.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Pavel Gashev
The fix in 4.17.35 is backported from oVirt 4.0. You will not hit it again.

Technically, vdsm 4.17.35 has been released as part of RHEV 3.6.9, so it is
more or less the recommended version if you run 3.6.


From: Beau Sapach 
Sent: Jan 19, 2017 10:58 PM
To: Michael Watters
Cc: Pavel Gashev; users@ovirt.org
Subject: Re: [ovirt-users] Select As SPM Fails

Hmmm, makes sense, thanks for the info!  I'm not enthusiastic about installing
packages outside of the oVirt repos, so I will probably look into an upgrade
regardless.  I noticed that oVirt 4 only lists support for RHEL/CentOS 7.2.
Will a situation such as this crop up again eventually as incremental updates
for the OS continue to push it past the supported version?  I've been running
oVirt for less than a year now, so I'm curious what to expect.

On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters 
> wrote:
You can upgrade vdsm without upgrading to ovirt 4.  I went through the
same issue on our cluster a few weeks ago and the process was pretty
simple.

You'll need to do this on each of your hosts.

yum --enablerepo=extras install -y epel-release git
git clone https://github.com/oVirt/vdsm.git
cd  vdsm
git checkout v4.17.35
yum install -y `cat ./automation/build-artifacts.packages`
./automation/build-artifacts.sh

cd /root/rpmbuild/RPMS/noarch
yum --enablerepo=extras install centos-release-qemu-ev
yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-infra-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-python-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
systemctl restart vdsmd

The qemu-ev repo is needed to avoid dependency errors.


On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> Uh oh, looks like an upgrade to version 4 is the only option then
> unless I'm missing something.
>
> On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev 
> >
> wrote:
> > Beau,
> >
> > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > the vdsm package to 4.17.35.
> >
> >
> > From: > on behalf 
> > of Beau Sapach  > alberta.ca>
> > Date: Wednesday 18 January 2017 at 23:56
> > To: "users@ovirt.org" 
> > >
> > Subject: [ovirt-users] Select As SPM Fails
> >
> > Hello everyone,
> >
> > I'm about to start digging through the mailing list archives in
> > search of a solution but thought I would post to the list as well.
> > I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> > channel storage and with a separate engine VM running outside of
> > the cluster (NOT  hosted-engine).
> >
> > When I try to move the SPM role from one node to the other I get
> > the following in the web interface:
> >
> >
> >
> > When I look into /var/log/ovirt-engine/engine.log I see the
> > following:
> >
> > 2017-01-18 13:35:09,332 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVD
> > SCommand] (default task-26) [6990cfca] Failed in
> > 'HSMGetAllTasksStatusesVDS' method
> > 2017-01-18 13:35:09,340 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (default task-26) [6990cfca] Correlation ID: null, Call Stack:
> > null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical
> > Volume extend failed
> >
> > When I look at the task list on the host currently holding the SPM
> > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> > a long list like this:
> >
> > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> >  verb = downloadImageFromStream
> >  code = 554
> >  state = recovered
> >  tag = spm
> >  result =
> >  message = Logical Volume extend failed
> >  id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> >
> > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > I see messages like this:
> >
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam',
> > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> >
> > As well as the error from the attempted extend of the logical
> > volume:
> >
> > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> > 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > return 

Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Michael Watters
Anything is possible however I haven't had any issues since upgrading to
vdsm 4.17.35.

 

On 01/19/2017 02:58 PM, Beau Sapach wrote:
> Hmmm, makes sense, thanks for the info!  I'm not enthusiastic about
> installing packages outside of the ovirt repos so will probably look
> into an upgrade regardless.  I noticed that ovirt 4 only lists support
> for RHEL/CentOS 7.2, will a situation such as this crop up again
> eventually as incremental updates for the OS continue to push it past
> the supported version?  I've been running oVirt for less than a year
> now so I'm curious what to expect.
>
> On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters
> > wrote:
>
> You can upgrade vdsm without upgrading to ovirt 4.  I went through the
> same issue on our cluster a few weeks ago and the process was pretty
> simple.
>
> You'll need to do this on each of your hosts.
>
> yum --enablerepo=extras install -y epel-release git
> git clone https://github.com/oVirt/vdsm.git
> 
> cd  vdsm
> git checkout v4.17.35
> yum install -y `cat ./automation/build-artifacts.packages`
> ./automation/build-artifacts.sh
>
> cd /root/rpmbuild/RPMS/noarch
> yum --enablerepo=extras install centos-release-qemu-ev
> yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm 
> vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm
> vdsm-infra-4.17.35-1.el7.centos.noarch.rpm
> vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-python-4.17.35-1.el7.centos.noarch.rpm
> vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-python-4.17.35-1.el7.centos.noarch.rpm
> vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
> systemctl restart vdsmd
>
> The qemu-ev repo is needed to avoid dependency errors.
>
>
> On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> > Uh oh, looks like an upgrade to version 4 is the only option
> then
> > unless I'm missing something.
> >
> > On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev  >
> > wrote:
> > > Beau,
> > >  
> > > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > > the vdsm package to 4.17.35.
> > >  
> > >  
> > > From:  > on behalf of Beau Sapach  > > alberta.ca >
> > > Date: Wednesday 18 January 2017 at 23:56
> > > To: "users@ovirt.org "
> >
> > > Subject: [ovirt-users] Select As SPM Fails
> > >  
> > > Hello everyone,
> > >  
> > > I'm about to start digging through the mailing list archives in
> > > search of a solution but thought I would post to the list as
> well. 
> > > I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> > > channel storage and with a separate engine VM running outside of
> > > the cluster (NOT  hosted-engine).
> > >  
> > > When I try to move the SPM role from one node to the other I get
> > > the following in the web interface:
> > >  
> > >
> > >  
> > > When I look into /var/log/ovirt-engine/engine.log I see the
> > > following:
> > >  
> > > 2017-01-18 13:35:09,332 ERROR
> > >
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVD
> > > SCommand] (default task-26) [6990cfca] Failed in
> > > 'HSMGetAllTasksStatusesVDS' method
> > > 2017-01-18 13:35:09,340 ERROR
> > >
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > > or] (default task-26) [6990cfca] Correlation ID: null, Call Stack:
> > > null, Custom Event ID: -1, Message: VDSM v6 command failed:
> Logical
> > > Volume extend failed
> > >  
> > > When I look at the task list on the host currently holding the SPM
> > > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> > > a long list like this:
> > >  
> > > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> > >  verb = downloadImageFromStream
> > >  code = 554
> > >  state = recovered
> > >  tag = spm
> > >  result =
> > >  message = Logical Volume extend failed
> > >  id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> > >  
> > > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > > I see messages like this:
> > >  
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam',
> > > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> > >  
> > 

Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Beau Sapach
Hmmm, makes sense, thanks for the info!  I'm not enthusiastic about
installing packages outside of the oVirt repos, so I will probably look into
an upgrade regardless.  I noticed that oVirt 4 only lists support for
RHEL/CentOS 7.2. Will a situation such as this crop up again eventually as
incremental updates for the OS continue to push it past the supported
version?  I've been running oVirt for less than a year now, so I'm curious
what to expect.

On Thu, Jan 19, 2017 at 10:42 AM, Michael Watters 
wrote:

> You can upgrade vdsm without upgrading to ovirt 4.  I went through the
> same issue on our cluster a few weeks ago and the process was pretty
> simple.
>
> You'll need to do this on each of your hosts.
>
> yum --enablerepo=extras install -y epel-release git
> git clone https://github.com/oVirt/vdsm.git
> cd  vdsm
> git checkout v4.17.35
> yum install -y `cat ./automation/build-artifacts.packages`
> ./automation/build-artifacts.sh
>
> cd /root/rpmbuild/RPMS/noarch
> yum --enablerepo=extras install centos-release-qemu-ev
> yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm
> vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm
> vdsm-infra-4.17.35-1.el7.centos.noarch.rpm 
> vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-python-4.17.35-1.el7.centos.noarch.rpm 
> vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm 
> vdsm-python-4.17.35-1.el7.centos.noarch.rpm
> vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm vdsm-cli-4.17.35-1.el7.centos.
> noarch.rpm
> systemctl restart vdsmd
>
> The qemu-ev repo is needed to avoid dependency errors.
>
>
> On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> > Uh oh, looks like an upgrade to version 4 is the only option then
> > unless I'm missing something.
> >
> > On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev 
> > wrote:
> > > Beau,
> > >
> > > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > > the vdsm package to 4.17.35.
> > >
> > >
> > > From:  on behalf of Beau Sapach  > > alberta.ca>
> > > Date: Wednesday 18 January 2017 at 23:56
> > > To: "users@ovirt.org" 
> > > Subject: [ovirt-users] Select As SPM Fails
> > >
> > > Hello everyone,
> > >
> > > I'm about to start digging through the mailing list archives in
> > > search of a solution but thought I would post to the list as well.
> > > I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> > > channel storage and with a separate engine VM running outside of
> > > the cluster (NOT  hosted-engine).
> > >
> > > When I try to move the SPM role from one node to the other I get
> > > the following in the web interface:
> > >
> > >
> > >
> > > When I look into /var/log/ovirt-engine/engine.log I see the
> > > following:
> > >
> > > 2017-01-18 13:35:09,332 ERROR
> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVD
> > > SCommand] (default task-26) [6990cfca] Failed in
> > > 'HSMGetAllTasksStatusesVDS' method
> > > 2017-01-18 13:35:09,340 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > > or] (default task-26) [6990cfca] Correlation ID: null, Call Stack:
> > > null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical
> > > Volume extend failed
> > >
> > > When I look at the task list on the host currently holding the SPM
> > > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> > > a long list like this:
> > >
> > > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> > >  verb = downloadImageFromStream
> > >  code = 554
> > >  state = recovered
> > >  tag = spm
> > >  result =
> > >  message = Logical Volume extend failed
> > >  id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> > >
> > > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > > I see messages like this:
> > >
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam',
> > > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> > >
> > > As well as the error from the attempted extend of the logical
> > > volume:
> > >
> > > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> > > 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> > > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > > return fn(*args, **kargs)
> > >   File "/usr/share/vdsm/storage/task.py", line 332, in run
> > > return self.cmd(*self.argslist, **self.argsdict)
> > >   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> > > return method(self, *args, **kwargs)
> > >   File "/usr/share/vdsm/storage/sp.py", line 1776, in
> > > downloadImageFromStream
> > > 

[ovirt-users] oVirt Community Newsletter: December 2016

2017-01-19 Thread Brian Proffitt
It's a new year with new opportunities for oVirt to show up its
virtualization features! We're getting ready for DevConf.CZ in Brno next
week, and FOSDEM in Brussels the week after that! We look forward to
meeting European developers and sysadmins to share your experiences!

Here's what happened in December of 2016:

-
Software Releases
-

oVirt 4.0.6 Release is now available
http://bit.ly/2iOI9cY


In the Community


Happy New Documentation!
http://bit.ly/2iOLCrW

oVirt System Tests to the Rescue!—How to Run End-to-End oVirt Tests on Your
Patch
http://bit.ly/2iONDUR

CI Please Build—How to build your oVirt project on-demand
http://bit.ly/2iOTAkD

The Need for Speed—Coming Changes in oVirt's CI Standards
http://bit.ly/2iOPUzf

Extension of iptables Rules on oVirt 4.0 Hosts
http://bit.ly/2iOPARp

New oVirt Project Underway
http://bit.ly/2iOKeW6


Deep Dives and Technical Discussions


KVM/Linux Nested Virtualization Support For ARM
http://bit.ly/2iOILiD

Virtual Machines in Kubernetes? How and what makes sense?
http://bit.ly/2iOWDtj

ANNOUNCE: New libvirt project Go XML parser model
http://bit.ly/2iORd1j

Using OVN with KVM and Libvirt
http://bit.ly/2iOOEwc

New libvirt project Go language bindings
http://bit.ly/2iP0ne5

CI tools testing lab: Making it do useful work
http://bit.ly/2iOVilZ

CI tools testing lab: Integrating Jenkins and adding Zuul UI
http://bit.ly/2iOOBRa

CI tools testing lab: Adding Zuul Merger
http://bit.ly/2iOP1a8

CI tools testing lab: Setting up Zuul Server
http://bit.ly/2iOUZYn

CI tools testing lab: Adding Gerrit
http://bit.ly/2iP0CG1

CI tools testing lab: Initial setup with Jenkins
http://bit.ly/2iOSvtc

---
Downstream News
---

Debugging a kernel in QEMU/libvirt
http://red.ht/2jvB5mf

Five Reasons to Switch from vSphere to Red Hat Virtualization
http://red.ht/2iLW5YU

Red Hat scoops Best Virtualization Product at the V3 Technology Awards 2016
http://red.ht/2hW9N7B


-- 
Brian Proffitt
Principal Community Analyst
Open Source and Standards
@TheTechScribe
574.383.9BKP
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Michael Watters
Looks like Office 365 likes to mangle hyperlinks.  The correct URL for
the git repo should be github.com/oVirt/vdsm.git.


On Thu, 2017-01-19 at 17:42 +, Michael Watters wrote:
> You can upgrade vdsm without upgrading to ovirt 4.  I went through
> the
> same issue on our cluster a few weeks ago and the process was pretty
> simple.
> 
> You'll need to do this on each of your hosts.
> 
> yum --enablerepo=extras install -y epel-release git
> git clone https://na01.safelinks.protection.outlook.com/?url=https%3A
> %2F%2Fgithub.com%2FoVirt%2Fvdsm.git=02%7C01%7Cmichael.watters%40
> dart.biz%7C30675c7dffcf4b79e32408d44094a5aa%7Cd90804aba2264b3da37a256
> f7aba7ff1%7C0%7C0%7C636204454576524118=jWBHvAr0I03%2FerbUReeZkA
> 0A39dc0rinYDn%2Bzxg6N%2B4%3D=0
> cd  vdsm
> git checkout v4.17.35
> yum install -y `cat ./automation/build-artifacts.packages`
> ./automation/build-artifacts.sh
> 
> cd /root/rpmbuild/RPMS/noarch
> yum --enablerepo=extras install centos-release-qemu-ev
> yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm  vdsm-hook-
> vmfex-dev-4.17.35-1.el7.centos.noarch.rpm vdsm-infra-4.17.35-
> 1.el7.centos.noarch.rpm vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm
> vdsm-python-4.17.35-1.el7.centos.noarch.rpm vdsm-xmlrpc-4.17.35-
> 1.el7.centos.noarch.rpm vdsm-yajsonrpc-4.17.35-
> 1.el7.centos.noarch.rpm vdsm-python-4.17.35-1.el7.centos.noarch.rpm
> vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm vdsm-cli-4.17.35-
> 1.el7.centos.noarch.rpm
> systemctl restart vdsmd
> 
> The qemu-ev repo is needed to avoid dependency errors.
> 
> 
> On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> > Uh oh, looks like an upgrade to version 4 is the only option
> > then
> > unless I'm missing something.
> > 
> > On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev 
> > wrote:
> > > Beau,
> > >  
> > > Looks like you have upgraded to CentOS 7.3. Now you have to
> > > update
> > > the vdsm package to 4.17.35.
> > >  
> > >  
> > > From:  on behalf of Beau Sapach  > > @u
> > > alberta.ca>
> > > Date: Wednesday 18 January 2017 at 23:56
> > > To: "users@ovirt.org" 
> > > Subject: [ovirt-users] Select As SPM Fails
> > >  
> > > Hello everyone,
> > >  
> > > I'm about to start digging through the mailing list archives in
> > > search of a solution but thought I would post to the list as
> > > well. 
> > > I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> > > channel storage and with a separate engine VM running outside of
> > > the cluster (NOT  hosted-engine).
> > >  
> > > When I try to move the SPM role from one node to the other I get
> > > the following in the web interface:
> > >  
> > > 
> > >  
> > > When I look into /var/log/ovirt-engine/engine.log I see the
> > > following:
> > >  
> > > 2017-01-18 13:35:09,332 ERROR
> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatuses
> > > VD
> > > SCommand] (default task-26) [6990cfca] Failed in
> > > 'HSMGetAllTasksStatusesVDS' method
> > > 2017-01-18 13:35:09,340 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDire
> > > ct
> > > or] (default task-26) [6990cfca] Correlation ID: null, Call
> > > Stack:
> > > null, Custom Event ID: -1, Message: VDSM v6 command failed:
> > > Logical
> > > Volume extend failed
> > >  
> > > When I look at the task list on the host currently holding the
> > > SPM
> > > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I
> > > see
> > > a long list like this:
> > >  
> > > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> > >          verb = downloadImageFromStream
> > >          code = 554
> > >          state = recovered
> > >          tag = spm
> > >          result =
> > >          message = Logical Volume extend failed
> > >          id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> > >  
> > > When I look at /var/log/vdsm/vdsm.log on the host in question
> > > (v6)
> > > I see messages like this:
> > >  
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam',
> > > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> > >  
> > > As well as the error from the attempted extend of the logical
> > > volume:
> > >  
> > > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> > > 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> > > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > >     return fn(*args, **kargs)
> > >   File "/usr/share/vdsm/storage/task.py", line 332, in run
> > >     return self.cmd(*self.argslist, **self.argsdict)
> > >   File "/usr/share/vdsm/storage/securable.py", line 77, in
> > > wrapper
> > >     return method(self, *args, **kwargs)
> > >   File "/usr/share/vdsm/storage/sp.py", line 1776, in
> > > downloadImageFromStream
> > >     

Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Michael Watters
You can upgrade vdsm without upgrading to ovirt 4.  I went through the
same issue on our cluster a few weeks ago and the process was pretty
simple.

You'll need to do this on each of your hosts.

yum --enablerepo=extras install -y epel-release git
git clone https://github.com/oVirt/vdsm.git
cd  vdsm
git checkout v4.17.35
yum install -y `cat ./automation/build-artifacts.packages`
./automation/build-artifacts.sh

cd /root/rpmbuild/RPMS/noarch
yum --enablerepo=extras install centos-release-qemu-ev
yum localinstall vdsm-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-hook-vmfex-dev-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-infra-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-jsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-python-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-xmlrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-yajsonrpc-4.17.35-1.el7.centos.noarch.rpm \
    vdsm-cli-4.17.35-1.el7.centos.noarch.rpm
systemctl restart vdsmd

The qemu-ev repo is needed to avoid dependency errors.
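
A quick sanity check after the restart (not part of the original
instructions, just a sketch of how one might verify the build landed):

rpm -q vdsm                    # should report vdsm-4.17.35-1.el7.centos
systemctl status vdsmd         # should be active (running)
vdsClient -s 0 getVdsCaps | grep -i version    # optional, asks vdsm itself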


On Thu, 2017-01-19 at 09:16 -0700, Beau Sapach wrote:
> Uh oh, looks like an upgrade to version 4 is the only option then
> unless I'm missing something.
> 
> On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev 
> wrote:
> > Beau,
> >  
> > Looks like you have upgraded to CentOS 7.3. Now you have to update
> > the vdsm package to 4.17.35.
> >  
> >  
> > From:  on behalf of Beau Sapach  > alberta.ca>
> > Date: Wednesday 18 January 2017 at 23:56
> > To: "users@ovirt.org" 
> > Subject: [ovirt-users] Select As SPM Fails
> >  
> > Hello everyone,
> >  
> > I'm about to start digging through the mailing list archives in
> > search of a solution but thought I would post to the list as well. 
> > I'm running oVirt 3.6 on a 2 node CentOS7 cluster backed by fiber
> > channel storage and with a separate engine VM running outside of
> > the cluster (NOT  hosted-engine).
> >  
> > When I try to move the SPM role from one node to the other I get
> > the following in the web interface:
> >  
> > 
> >  
> > When I look into /var/log/ovirt-engine/engine.log I see the
> > following:
> >  
> > 2017-01-18 13:35:09,332 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVD
> > SCommand] (default task-26) [6990cfca] Failed in
> > 'HSMGetAllTasksStatusesVDS' method
> > 2017-01-18 13:35:09,340 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (default task-26) [6990cfca] Correlation ID: null, Call Stack:
> > null, Custom Event ID: -1, Message: VDSM v6 command failed: Logical
> > Volume extend failed
> >  
> > When I look at the task list on the host currently holding the SPM
> > role (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see
> > a long list like this:
> >  
> > dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
> >          verb = downloadImageFromStream
> >          code = 554
> >          state = recovered
> >          tag = spm
> >          result =
> >          message = Logical Volume extend failed
> >          id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
> >  
> > When I look at /var/log/vdsm/vdsm.log on the host in question (v6)
> > I see messages like this:
> >  
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag':
> > u'spm', 'state': 'recovered', 'verb': 'downloadImageFromStreaam',
> > 'result': '', 'message': 'Logical Volume extend failed', 'id':
> > '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}
> >  
> > As well as the error from the attempted extend of the logical
> > volume:
> >  
> > e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> > 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> > Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> >     return fn(*args, **kargs)
> >   File "/usr/share/vdsm/storage/task.py", line 332, in run
> >     return self.cmd(*self.argslist, **self.argsdict)
> >   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
> >     return method(self, *args, **kwargs)
> >   File "/usr/share/vdsm/storage/sp.py", line 1776, in
> > downloadImageFromStream
> >     .copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
> >   File "/usr/share/vdsm/storage/image.py", line 1373, in
> > copyToImage
> >     / volume.BLOCK_SIZE)
> >   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in
> > extend
> >     lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
> >     _resizeLV("lvextend", vgName, lvName, size)
> >   File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
> >     raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" %
> > (size, ))
> > LogicalVolumeExtendError:
> > Logical Volume extend failed: 'vgname=ae05947f-875c-4507-ad51-
> > 62b0d35ef567 lvname=caaef597-eddd-4c24-8df2-a61f35f744f8
> > newsize=1M'

Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Beau Sapach
Uh oh, looks like an upgrade to version 4 is the only option then
unless I'm missing something.

On Thu, Jan 19, 2017 at 1:36 AM, Pavel Gashev  wrote:

> Beau,
>
>
>
> Looks like you have upgraded to CentOS 7.3. Now you have to update the
> vdsm package to 4.17.35.
>
>
>
>
>
> *From: * on behalf of Beau Sapach <
> bsap...@ualberta.ca>
> *Date: *Wednesday 18 January 2017 at 23:56
> *To: *"users@ovirt.org" 
> *Subject: *[ovirt-users] Select As SPM Fails
>
>
>
> Hello everyone,
>
>
>
> I'm about to start digging through the mailing list archives in search of
> a solution but thought I would post to the list as well.  I'm running oVirt
> 3.6 on a 2 node CentOS7 cluster backed by fiber channel storage and with a
> separate engine VM running outside of the cluster (NOT  hosted-engine).
>
>
>
> When I try to move the SPM role from one node to the other I get the
> following in the web interface:
>
>
>
> [image: inline image 1]
>
>
>
> When I look into /var/log/ovirt-engine/engine.log I see the following:
>
>
>
> 2017-01-18 13:35:09,332 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.
> HSMGetAllTasksStatusesVDSCommand] (default task-26) [6990cfca] Failed in
> 'HSMGetAllTasksStatusesVDS' method
>
> 2017-01-18 13:35:09,340 ERROR [org.ovirt.engine.core.dal.
> dbbroker.auditloghandling.AuditLogDirector] (default task-26) [6990cfca]
> Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM
> v6 command failed: Logical Volume extend failed
>
>
>
> When I look at the task list on the host currently holding the SPM role
> (in this case 'v6'), using: vdsClient -s 0 getAllTasks, I see a long list
> like this:
>
>
>
> dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
>
>  verb = downloadImageFromStream
>
>  code = 554
>
>  state = recovered
>
>  tag = spm
>
>  result =
>
>  message = Logical Volume extend failed
>
>  id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b
>
>
>
> When I look at /var/log/vdsm/vdsm.log on the host in question (v6) I see
> messages like this:
>
>
>
> '531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag': u'spm',
> 'state': 'recovered', 'verb': 'downloadImageFromStreaam', 'result': '',
> 'message': 'Logical Volume extend failed', 'id': '531dd533-22b1-47a0-aae8-
> 76c1dd7d9a56'}
>
>
>
> As well as the error from the attempted extend of the logical volume:
>
>
>
> e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18
> 13:24:50,710::task::866::Storage.TaskManager.Task::(_setError)
> Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>
> return fn(*args, **kargs)
>
>   File "/usr/share/vdsm/storage/task.py", line 332, in run
>
> return self.cmd(*self.argslist, **self.argsdict)
>
>   File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
>
> return method(self, *args, **kwargs)
>
>   File "/usr/share/vdsm/storage/sp.py", line 1776, in
> downloadImageFromStream
>
> .copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
>
>   File "/usr/share/vdsm/storage/image.py", line 1373, in copyToImage
>
> / volume.BLOCK_SIZE)
>
>   File "/usr/share/vdsm/storage/blockVolume.py", line 310, in extend
>
> lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
>
>   File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
>
> _resizeLV("lvextend", vgName, lvName, size)
>
>   File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
>
> raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
>
> LogicalVolumeExtendError:
>
> Logical Volume extend failed: 'vgname=ae05947f-875c-4507-ad51-62b0d35ef567
> lvname=caaef597-eddd-4c24-8df2-a61f35f744f8 newsize=1M'
>
> e980df5f-d068-4c84-8aa7-9ce792690562::DEBUG::2017-01-18
> 13:24:50,711::task::885::Storage.TaskManager.Task::(_run)
> Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Task._run:
> e980df5f-d068-4c84-8aa7-9ce792690562 () {} failed - stopping task
>
>
>
> The logical volume in question is an OVF_STORE disk that lives on one of
> the fiber channel backed LUNs.  If I run:
>
>
>
> vdsClient -s 0 ClearTask TASK-UUID-HERE
>
>
>
> for each task that appears in the:
>
>
>
> vdsClient -s 0 getAllTasks
>
>
>
> output then they disappear and I'm able to move the SPM role to the other
> host.
>
>
>
> This problem then crops up again on the new host once the SPM role is
> moved.  What's going on here?  Does anyone have any insight as to how to
> prevent this task from re-appearing?  Or why it's failing in the first
> place?
>
>
>
> Beau
>
>
>
>
>
>
>
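
Since clearing the stuck tasks one by one gets tedious, here is a minimal
sketch that loops over the getAllTasks output quoted above and clears every
task ID it finds. Only run it once you have verified, as in this thread, that
the listed tasks really are stale:

for t in $(vdsClient -s 0 getAllTasks | awk '/^[0-9a-f-]+ :/ {print $1}')
do
    vdsClient -s 0 ClearTask "$t"
done

The awk pattern assumes the "UUID :" header lines shown in the output above.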



-- 
Beau Sapach
*System Administrator | Information Technology Services | University of
Alberta Libraries*
*Phone: 780.492.4181 | Email: beau.sap...@ualberta.ca
*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Gluster storage expansion

2017-01-19 Thread Goorkate, B.J.
Hi all,

I have an oVirt environment with 5 nodes. 3 nodes offer a replica-3 gluster 
storage domain for the virtual
machines.

Is there a way to use storage in the nodes that are not members of the
replica-3 storage domain?
Or do I need another node to make a second replica-3 Gluster storage domain?

In other words: I would like to expand the existing storage domain by adding 
more nodes, rather
than adding disks to the existing gluster nodes. Is that possible?

Thanks!

Regards,

Bertjan 



--

This message may contain confidential information and is intended exclusively
for the addressee. If you receive this message unintentionally, please do not
use the contents but notify the sender immediately by return e-mail. University
Medical Center Utrecht is a legal person by public law and is registered at
the Chamber of Commerce for Midden-Nederland under no. 30244197.

Please consider the environment before printing this e-mail.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate hosted engine to a new storage domain

2017-01-19 Thread Logan Kuhn
Exactly what I needed.  Thank you

On Thu, Jan 19, 2017 at 1:10 AM, Yedidyah Bar David  wrote:

> On Thu, Jan 19, 2017 at 4:01 AM, Logan Kuhn 
> wrote:
> > Hi
> >
> > We are planning on moving to a different storage solution and I'm
> curious,
> > is there a way to migrate the hosted engine's storage domain to the new
> > solution?  It's NFS currently and can be NFS on the new storage as well.
> >
> > From what I've read it looks like it should be possible to
> >
> > Take a full backup of the engine VM
> > Deploy another hosted engine VM with hosted-engine --deploy
> > Install/configure CentOS 7.3
> > Deploy new engine with engine-setup
> > Then restore the backup into the new VM.
> >
> > What I'm not sure of is if that backup will contain enough of it's data
> to
> > restore to a completely different storage domain?
> >
> > Also, the engine database is on a remote server, the data warehouse
> service
> > and all other aspects of the hosted engine reside on the VM.
>
> Please check the list archive:
>
> http://lists.ovirt.org/pipermail/users/2017-January/078739.html
>
> Best,
>
> >
> > Thanks,
> > Logan
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
>
> --
> Didi
>
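
For the backup/restore steps listed above, the tool on the engine side is
engine-backup. A minimal sketch, with file names chosen here as placeholders
(check engine-backup --help on your version, since restore options differ
between releases):

# on the current engine VM
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

# on the freshly installed engine VM: restore first, then run engine-setup
engine-backup --mode=restore --file=engine-backup.tar.gz --log=engine-restore.log

With a remote engine database, as described above, the restore also needs the
credentials of that remote database server (engine-backup prompts for them or
takes --change-db-credentials style options, depending on version).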
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] vdsm problem

2017-01-19 Thread Стаценко Константин Юрьевич
Hello!
Today, after installing some updates, vdsmd suddenly died. Running oVirt 4.0.6
on CentOS 7.3.
It cannot start any more:

# journalctl -xe

-- Subject: Unit vdsmd.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has begun starting up.
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running mkdirs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running configure_coredump
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running configure_vdsm_logs
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running wait_for_network
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running run_init_hooks
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running upgraded_version_check
Jan 19 18:03:31 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
Running check_is_configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru sasldblistusers2[20115]: DIGEST-MD5 
common mech free
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Error:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: One of the 
modules is not configured to work with VDSM.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: To 
configure the module use the following:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool 
configure [--module module-name]'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: If all 
modules are not configured try to use:
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: 'vdsm-tool 
configure --force'
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: (The force 
flag will stop the module's service and start it
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: afterwards 
automatically to load the new configuration.)
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Current 
revision of multipath.conf detected, preserving
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: libvirt is 
already configured for vdsm
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: Modules 
sebool are not configured
Jan 19 18:03:32 msk1-kvm001.interrao.ru vdsmd_init_common.sh[20079]: vdsm: 
stopped during execute check_is_configured task (task returned with error code 
1).
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service: control 
process exited, code=exited status=1
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Failed to start Virtual 
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has failed.
--
-- The result is failed.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Dependency failed for MOM 
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has failed.
--
-- The result is dependency.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Job mom-vdsm.service/start 
failed with result 'dependency'.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Unit vdsmd.service entered 
failed state.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service failed.
Jan 19 18:03:32 msk1-kvm001.interrao.ru systemd[1]: Cannot add dependency job 
for unit microcode.service, ignoring: Invalid argument
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service holdoff time 
over, scheduling restart.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Cannot add dependency job 
for unit microcode.service, ignoring: Unit is not loaded properly: Invalid 
argument
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: start request repeated too 
quickly for vdsmd.service
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Failed to start Virtual 
Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has failed.
--
-- The result is failed.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Dependency failed for MOM 
instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mom-vdsm.service has failed.
--
-- The result is dependency.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Job mom-vdsm.service/start 
failed with result 'dependency'.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: Unit vdsmd.service entered 
failed state.
Jan 19 18:03:33 msk1-kvm001.interrao.ru systemd[1]: vdsmd.service 
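
For reference, a minimal remediation sketch (not from the original report) of
what the init script's hint amounts to, assuming the 'sebool' module reported
above is the only one left unconfigured and that the script runs as root on
the host. The shell equivalent is simply 'vdsm-tool configure --module sebool'
followed by 'systemctl restart vdsmd'.

# Hypothetical remediation sketch: re-run the vdsm-tool configuration step
# that check_is_configured reports as missing, then restart vdsmd so the
# start-up configuration check can pass. Assumes root privileges.
import subprocess

def configure_and_restart(module="sebool"):
    # Equivalent to: vdsm-tool configure --module sebool
    subprocess.check_call(["vdsm-tool", "configure", "--module", module])
    # Restart the service so the new configuration is picked up.
    subprocess.check_call(["systemctl", "restart", "vdsmd"])

if __name__ == "__main__":
    configure_and_restart()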

Re: [ovirt-users] Fail to setup network just after host setup via python SDK

2017-01-19 Thread Ondrej Svoboda
Oops, forgot to "Reply to all".

Hi,

What is the version of your oVirt components? Can you reproduce this
failure?

Was the host in the "Up" state at the time you ran the setupNetworks API
command, or was an action still in progress (one you could see in Events)?
What did your setupNetworks code (written with the SDK) look like?

Could you provide the current /var/log/vdsm/vdsm.log and
/var/log/vdsm/supervdsm.log?
EDIT: You can find these files on your host.

Thanks,
Ondra
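
For reference, a minimal sketch of what such a setupNetworks call can look
like with the Python SDK v4 (ovirtsdk4). The engine URL, credentials, host,
network and NIC names below are placeholders, not taken from this thread, and
the wait-for-Up loop is only a hint about the 409 error discussed here.

# Hypothetical sketch using the oVirt Python SDK v4 (ovirtsdk4); all names,
# credentials and URLs are placeholders.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]
host_service = hosts_service.host_service(host.id)

# Wait until the host has finished installing and is Up; calling
# setup_networks earlier can produce the 409 "Another Setup Networks or
# Host Refresh process in progress" error seen in this thread.
while host_service.get().status != types.HostStatus.UP:
    time.sleep(5)

# Attach the logical network 'mynetwork' to NIC 'eth1' on the host.
host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='mynetwork'),
            host_nic=types.HostNic(name='eth1'),
        ),
    ],
    check_connectivity=True,
)
# Persist the network configuration so it survives a host reboot.
host_service.commit_net_config()
connection.close()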

On Thu, Jan 19, 2017 at 12:41 PM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I was trying to add a host in oVirt, and it succeeded.
> But when I tried to set up the network it threw the error below.
>
> " Fault detail is "[Cannot setup Networks. Another Setup Networks or Host
> Refresh process in progress on the host. Please try later.]". HTTP response
> code is 409."
>
> Please help me to solve this problem.
>
> Thanks,
> ~Rohit
>


[ovirt-users] Fail to setup network just after host setup via python SDK

2017-01-19 Thread TranceWorldLogic .
Hi,

I was trying to add a host in oVirt, and it succeeded.
But when I tried to set up the network it threw the error below.

" Fault detail is "[Cannot setup Networks. Another Setup Networks or Host
Refresh process in progress on the host. Please try later.]". HTTP response
code is 409."

Please help me to solve this problem.

Thanks,
~Rohit


[ovirt-users] hosted-engine hyperconverged glusterFS hosted-storage import fails

2017-01-19 Thread Liebe , André-Sebastian
Hello List,

I ran into trouble after moving our hosted engine from NFS to hyperconverged 
GlusterFS via the backup/restore[1] procedure. The engine logs that it can't import 
and activate the hosted-storage domain, although I can see the storage.
Any hints on how to fix this?

- I created the ha-replica-3 gluster volume prior to hosted-engine-setup, using 
the host's short name.
- Then I ran hosted-engine-setup to install a new hosted engine (by installing 
CentOS 7 and ovirt-engine manually).
- Inside the new hosted engine I restored the last successful backup (which was 
taken in the running state).
- Then I connected to the engine database and removed the old hosted engine by 
hand (as part of this patch would do: https://gerrit.ovirt.org/#/c/64966/), along 
with all known hosts (after first marking all VMs as down, for which I later got 
ETL error messages).
- Then I finished the engine installation by running engine-setup inside the 
hosted engine.
- And finally I completed the hosted-engine-setup.


The new hosted engine came up successfully with all previously known storage, and 
after enabling GlusterFS for the cluster this HA host is part of, I could see the 
volume in the Volumes and Storage tabs. After adding the remaining two hosts, the 
volume was marked as active.

But here is the error message I have been getting repeatedly since then:
> 2017-01-19 08:49:36,652 WARN  
> [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand]
>  (org.ovirt.thread.pool-6-thread-10) [3b955ecd] Validation of action 
> 'ImportHostedEngineStorageDomain' failed for user SYSTEM. Reasons: 
> VAR__ACTION__ADD,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_ALREADY_EXIST


There are also some repeating messages about this ha-replica-3 volume, because 
I used the host's short name at volume creation, which AFAIK I can't change 
without a complete cluster shutdown.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.bll.AddUnmanagedVmsCommand] (DefaultQuartzScheduler3) 
> [7471d7de] Running command: AddUnmanagedVmsCommand internal: true.
> 2017-01-19 08:48:03,134 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, FullListVDSCommand(HostName = , 
> FullListVDSCommandParameters:{runAsync='true', 
> hostId='f62c7d04-9c95-453f-92d5-6dabf9da874a', 
> vds='Host[,f62c7d04-9c95-453f-92d5-6dabf9da874a]', 
> vmIds='[dfea96e8-e94a-407e-af46-3019fd3f2991]'}), log id: 2d0941f9
> 2017-01-19 08:48:03,163 INFO  
> [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, FullListVDSCommand, return: 
> [{guestFQDN=, emulatedMachine=pc, pid=0, guestDiskMapping={}, 
> devices=[Ljava.lang.Object;@4181d938, cpuType=Haswell-noTSX, smp=2, 
> vmType=kvm, memSize=8192, vmName=HostedEngine, username=, exitMessage=XML 
> error: maximum vcpus count must be an integer, 
> vmId=dfea96e8-e94a-407e-af46-3019fd3f2991, displayIp=0, displayPort=-1, 
> guestIPs=, 
> spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir,
>  exitCode=1, nicModel=rtl8139,pv, exitReason=1, status=Down, maxVCpus=None, 
> clientIp=, statusTime=6675071780, display=vnc, displaySecurePort=-1}], log 
> id: 2d0941f9
> 2017-01-19 08:48:03,163 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder] 
> (DefaultQuartzScheduler3) [7471d7de] null architecture type, replacing with 
> x86_64, %s
> 2017-01-19 08:48:17,779 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterServersListVDSCommand(HostName = lvh2, 
> VdsIdVDSCommandParametersBase:{runAsync='true', 
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 57d029dc
> 2017-01-19 08:48:18,177 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] FINISH, GlusterServersListVDSCommand, 
> return: [172.31.1.22/24:CONNECTED, lvh3.lab.gematik.de:CONNECTED, 
> lvh4.lab.gematik.de:CONNECTED], log id: 57d029dc
> 2017-01-19 08:48:18,180 INFO  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] 
> (DefaultQuartzScheduler3) [7471d7de] START, 
> GlusterVolumesListVDSCommand(HostName = lvh2, 
> GlusterVolumesListVDSParameters:{runAsync='true', 
> hostId='23297fc2-db12-4778-a5ff-b74d6fc9554b'}), log id: 5cd11a39
> 2017-01-19 08:48:18,282 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
> (DefaultQuartzScheduler3) [7471d7de] Could not associate brick 
> 'lvh2:/data/gluster/0/brick' of volume '7dc6410d-8f2a-406c-812a-8235fa6f721c' 
> with correct network as no gluster network found in cluster 
> '57ff41c2-0297-039d-039c-0362'
> 2017-01-19 08:48:18,284 WARN  
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] 
> (DefaultQuartzScheduler3) [7471d7de] Could not associate brick 
> 'lvh3:/data/gluster/0/brick' of volume '7dc6410d-8f2a-406c-812a-8235fa6f721c' 

[ovirt-users] fast import to ovirt

2017-01-19 Thread p...@email.cz

Hello,
how can I import a VM from a different oVirt environment? There is no common 
oVirt management between them (oVirt 3.5 -> 4.0).

GlusterFS is used.
Will oVirt accept "rsync" file migrations, meaning will it update the oVirt DB 
automatically?

I'd prefer a quicker method than export, umount on oV1, mount on oV2, import.

regards
paf1


Re: [ovirt-users] Select As SPM Fails

2017-01-19 Thread Pavel Gashev
Beau,

Looks like you have upgraded to CentOS 7.3. Now you have to update the vdsm 
package to 4.17.35.


From:  on behalf of Beau Sapach 
Date: Wednesday 18 January 2017 at 23:56
To: "users@ovirt.org" 
Subject: [ovirt-users] Select As SPM Fails

Hello everyone,

I'm about to start digging through the mailing list archives in search of a 
solution, but thought I would post to the list as well. I'm running oVirt 3.6 
on a 2-node CentOS 7 cluster backed by Fibre Channel storage, with a separate 
engine VM running outside of the cluster (NOT hosted-engine).

When I try to move the SPM role from one node to the other I get the following 
in the web interface:

[Inline image 1]

When I look into /var/log/ovirt-engine/engine.log I see the following:

2017-01-18 13:35:09,332 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(default task-26) [6990cfca] Failed in 'HSMGetAllTasksStatusesVDS' method
2017-01-18 13:35:09,340 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-26) [6990cfca] Correlation ID: null, Call Stack: null, Custom Event ID: 
-1, Message: VDSM v6 command failed: Logical Volume extend failed

When I look at the task list on the host currently holding the SPM role (in 
this case 'v6'), using: vdsClient -s 0 getAllTasks, I see a long list like this:

dc75d3e7-cea7-449b-9a04-76fd8ef0f82b :
 verb = downloadImageFromStream
 code = 554
 state = recovered
 tag = spm
 result =
 message = Logical Volume extend failed
 id = dc75d3e7-cea7-449b-9a04-76fd8ef0f82b

When I look at /var/log/vdsm/vdsm.log on the host in question (v6) I see 
messages like this:

'531dd533-22b1-47a0-aae8-76c1dd7d9a56': {'code': 554, 'tag': u'spm', 'state': 
'recovered', 'verb': 'downloadImageFromStream', 'result': '', 'message': 
'Logical Volume extend failed', 'id': '531dd533-22b1-47a0-aae8-76c1dd7d9a56'}

As well as the error from the attempted extend of the logical volume:

e980df5f-d068-4c84-8aa7-9ce792690562::ERROR::2017-01-18 
13:24:50,710::task::866::Storage.TaskManager.Task::(_setError) 
Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 332, in run
return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1776, in downloadImageFromStream
.copyToImage(methodArgs, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/image.py", line 1373, in copyToImage
/ volume.BLOCK_SIZE)
  File "/usr/share/vdsm/storage/blockVolume.py", line 310, in extend
lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
  File "/usr/share/vdsm/storage/lvm.py", line 1179, in extendLV
_resizeLV("lvextend", vgName, lvName, size)
  File "/usr/share/vdsm/storage/lvm.py", line 1175, in _resizeLV
raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
LogicalVolumeExtendError:
Logical Volume extend failed: 'vgname=ae05947f-875c-4507-ad51-62b0d35ef567 
lvname=caaef597-eddd-4c24-8df2-a61f35f744f8 newsize=1M'
e980df5f-d068-4c84-8aa7-9ce792690562::DEBUG::2017-01-18 
13:24:50,711::task::885::Storage.TaskManager.Task::(_run) 
Task=`e980df5f-d068-4c84-8aa7-9ce792690562`::Task._run: 
e980df5f-d068-4c84-8aa7-9ce792690562 () {} failed - stopping task

The logical volume in question is an OVF_STORE disk that lives on one of the 
Fibre Channel-backed LUNs. If I run:

vdsClient -s 0 ClearTask TASK-UUID-HERE

for each task that appears in the:

vdsClient -s 0 getAllTasks

output then they disappear and I'm able to move the SPM role to the other host.

This problem then crops up again on the new host once the SPM role is moved.  
What's going on here?  Does anyone have any insight as to how to prevent this 
task from re-appearing?  Or why it's failing in the first place?

Beau
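
For reference, a hypothetical helper (not from this thread) that automates the
ClearTask loop described above. It assumes the getAllTasks output layout shown
in this message and only clears tasks reported in the 'recovered' state; blindly
clearing SPM tasks may not be safe, so treat it as a sketch only.

# Hypothetical helper: list vdsm tasks via vdsClient and clear the ones
# reported as 'recovered'. Parsing assumes the "UUID :" / "key = value"
# layout shown above.
import re
import subprocess

def clear_recovered_tasks():
    output = subprocess.check_output(["vdsClient", "-s", "0", "getAllTasks"])
    task_id = None
    for line in output.decode("utf-8", "replace").splitlines():
        # A task block starts with a line like "dc75d3e7-... :"
        header = re.match(r"\s*([0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})\s*:\s*$", line)
        if header:
            task_id = header.group(1)
        elif task_id and "state = recovered" in line:
            # Equivalent to: vdsClient -s 0 ClearTask <task-uuid>
            subprocess.check_call(["vdsClient", "-s", "0", "ClearTask", task_id])
            task_id = None

if __name__ == "__main__":
    clear_recovered_tasks()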





Re: [ovirt-users] ovirt 4.1 - Edit Host

2017-01-19 Thread Michael Burman
Hi Brett

We have a bug to track this issue -
https://bugzilla.redhat.com/show_bug.cgi?id=1402873

Thank you)

On Thu, Jan 19, 2017 at 10:11 AM, Maton, Brett 
wrote:

> If I try to edit the comment field of an active host, the UI refuses with
> the following message.
>
> Cannot edit Host. Host parameters cannot be modified while Host is
> operational
> Please switch host to Maintenance mode first
>
> Whilst I'm sure changing the majority of values for an active host would be
> undesirable, having to put a host into maintenance mode to change a
> comment label is a pain...
>
> Thoughts?
>
> Regards,
> Brett
>
>


-- 
Michael Burman
RedHat Israel, RHV-M Network QE

Mobile: 054-5355725
IRC: mburman


[ovirt-users] ovirt 4.1 - Edit Host

2017-01-19 Thread Maton, Brett
If I try to edit the comment field of an active host, the UI refuses with
the following message.

Cannot edit Host. Host parameters cannot be modified while Host is
operational
Please switch host to Maintenance mode first

Whilst I'm sure changing the majority of values for an active host would be
undesirable, having to put a host into maintenance mode to change a
comment label is a pain...

Thoughts?

Regards,
Brett