Re: [ovirt-users] options for fence_ipmilan?

2017-04-18 Thread Matthias Leopold



On 2017-04-18 at 12:29, Gianluca Cecchi wrote:

On Tue, Apr 18, 2017 at 12:16 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:

hi,

i'm trying to make fencing (via ipmilan) work with an ipmi user that has
only "OPERATOR" privileges. this works on the CLI (-L OPERATOR). when
trying to configure this via the GUI i fail. the interface tells me

Test failed: [WARNING:root:Parse error: Ignoring unknown option
'L=OPERATOR', , ERROR:root:Failed: Unable to obtain correct plug
status or plug is not available, , , Failed: Unable to obtain
correct plug status or plug is not available, , ]

i tried "-L OPERATOR" and "L=OPERATOR", with and without quotes, to no
avail...

can somebody help me?

ovirt 4.1.1


Hi,
this works for me in the options part of the host power management configuration with
4.1.1 and Dell M610 hypervisors:

privlvl=operator,lanplus=on

Gianluca


thx, that solved it (privlvl=OPERATOR in my case)
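
for reference, a sketch of the two equivalent forms that worked here (the
IP address, user and password below are placeholders, not values from this
thread):

# CLI test with a reduced privilege level
fence_ipmilan -a 10.0.0.1 -l fenceuser -p secret -P -L OPERATOR -o status

# equivalent "Options" field in the host power management dialog
privlvl=OPERATOR,lanplus=on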


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] options for fence_ipmilan?

2017-04-18 Thread Matthias Leopold

hi,

i'm trying to make fencing (via ipmilan) work with an ipmi user that has only 
"OPERATOR" privileges. this works on the CLI (-L OPERATOR). when trying to 
configure this via the GUI i fail. the interface tells me


Test failed: [WARNING:root:Parse error: Ignoring unknown option 
'L=OPERATOR', , ERROR:root:Failed: Unable to obtain correct plug status 
or plug is not available, , , Failed: Unable to obtain correct plug 
status or plug is not available, , ]


i tried "-L OPERATOR" and "L=OPERATOR", with and without quotes, to no avail...

can somebody help me?

ovirt 4.1.1

--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] official status of Windows support in oVirt/RHEV ?

2017-07-31 Thread Matthias Leopold

Hi,

can someone please point me to resources about the official status of 
Windows support in oVirt and RHEV? are there differences between oVirt 
and RHEV? i'm not talking about technical details, i know that e.g. 
Windows 2008 or 2016 runs in oVirt, it's about support contracts. i 
looked at 
https://www.redhat.com/cms/managed-files/rh-red-hat-virtualization-datasheet-f6865kc-201704-en.pdf, 
where the terms "Full support", "Support" and "Vendor Support" are used. 
What's the difference?


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Libguestfs] virt-v2v import from KVM without storage-pool ?

2017-07-07 Thread Matthias Leopold

thanks for caring about this.

Ming Xie, are you opening this BZ bug?

thanks
matthias

On 2017-07-07 at 13:31, Tomáš Golembiovský wrote:

Hi,

yes, it is an issue in VDSM. We count on the disks being in a storage pool
(except for block devices).

Can you open a BZ bug for that, please?

Thanks,

 Tomas


On Fri, 7 Jul 2017 02:52:26 -0400 (EDT)
Ming Xie <m...@redhat.com> wrote:


I could reproduce the customer's problem

Packages:
rhv:4.1.3-0.1.el7
vdsm-4.19.20-1.el7ev.x86_64
virt-v2v-1.36.3-6.el7.x86_64
libguestfs-1.36.3-6.el7.x86_64

Steps:
1. Prepare a guest whose disk is not in a storage pool
# virsh dumpxml avocado-vt-vm1
[domain XML stripped by the archive; the guest's disk was defined with a
plain file path, /root/RHEL-7.3-x86_64-latest.qcow2, not as a storage
pool volume]
...
2. Try to import this guest into RHV 4.1 from the KVM host; the import of the 
guest fails (see screenshot) with the following error in vdsm.log

2017-07-07 14:41:22,176+0800 ERROR (jsonrpc/6) [root] Error getting disk size 
(v2v:1089)
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in 
_get_disk_info
 vol = conn.storageVolLookupByPath(disk['alias'])
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4555, in 
storageVolLookupByPath
 if ret is None:raise libvirtError('virStorageVolLookupByPath() failed', 
conn=self)
libvirtError: Storage volume not found: no storage vol with matching path 
'/root/RHEL-7.3-x86_64-latest.qcow2'



3. Try to convert this guest for RHV with virt-v2v on a v2v conversion server; 
after the conversion finishes, the guest can be imported from the export 
domain to a data domain on RHV 4.1
# virt-v2v avocado-vt-vm1 -o rhv -os 10.73.131.93:/home/nfs_export
[   0.0] Opening the source -i libvirt avocado-vt-vm1
[   0.0] Creating an overlay to protect the source from being modified
[   0.4] Initializing the target -o rhv -os 10.73.131.93:/home/nfs_export
[   0.7] Opening the overlay
[   6.1] Inspecting the overlay
[  13.8] Checking for sufficient free disk space in the guest
[  13.8] Estimating space required on target for each disk
[  13.8] Converting Red Hat Enterprise Linux Server 7.3 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  52.2] Mapping filesystem data to avoid copying unused and blank areas
[  52.4] Closing the overlay
[  52.7] Checking if the guest needs BIOS or UEFI to boot
[  52.7] Assigning disks to buses
[  52.7] Copying disk 1/1 to 
/tmp/v2v.Zzc4KD/c9cfeba7-73f8-428a-aa77-9a2a1acf0063/images/c8eb039e-3007-4e08-9580-c49da8b73d55/f76d16ea-5e66-4987-a496-8f378b127986
 (qcow2)
 (100.00/100%)
[ 152.4] Creating output metadata
[ 152.6] Finishing off


Result:
So this problem is caused by VDSM or oVirt, not by virt-v2v
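
Until that is fixed, one possible workaround (an assumption based on the
VDSM behaviour described above, not something verified in this thread)
would be to wrap the directory holding the image in a libvirt dir pool on
the source host, so that the volume lookup succeeds (the pool name is
arbitrary):

# define and start a dir-type pool covering the image directory
virsh pool-define-as images dir --target /root
virsh pool-start images
virsh pool-autostart images
# the disk should now resolve as a storage volume
virsh vol-list images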

Regards
Ming Xie

- Original Message -
From: "Richard W.M. Jones" <rjo...@redhat.com>
To: "Matthias Leopold" <matthias.leop...@meduniwien.ac.at>
Cc: users@ovirt.org, libgues...@redhat.com
Sent: Wednesday, July 5, 2017 9:15:16 PM
Subject: Re: [Libguestfs] virt-v2v import from KVM without storage-pool ?

On Wed, Jul 05, 2017 at 11:14:09AM +0200, Matthias Leopold wrote:

hi,

i'm trying to import a VM in oVirt from a KVM host that doesn't use
storage pools. this fails with the following message in
/var/log/vdsm/vdsm.log:

2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting
disk size (v2v:1089)
Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in
_get_disk_info
 vol = conn.storageVolLookupByPath(disk['alias'])
   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4770,
in storageVolLookupByPath
 if ret is None:raise libvirtError('virStorageVolLookupByPath()
failed', conn=self)
libvirtError: Storage volume not found: no storage vol with matching path

the disks in the origin VM are defined as

[disk XML stripped by the archive; both disks were referenced by plain
file paths, not as storage pool volumes]

is this a virt-v2v or oVirt problem?


Well the stack trace is in the oVirt code, so I guess it's an oVirt
problem.  Adding ovirt-users mailing list.

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/

___
Libguestfs mailing list
libgues...@redhat.com
https://www.redhat.com/mailman/listinfo/libguestfs





--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] trying to use raw disks from foreign KVM directly in oVirt

2017-07-12 Thread Matthias Leopold

hi,

i'm using a KVM system that creates its disks in raw format as LVM 
logical volumes. there's one VG with one LV per disk. the physical 
devices for these LVM entities are iSCSI devices. now my idea was to use 
these disks directly in oVirt as "Direct LUN". i did the following:


- stopped the foreign KVM domain
- deactivated the LV that is the disk
- reconfigured the SAN so the iSCSI device is removed from the foreign KVM 
host and is visible to the oVirt hosts

- created an oVirt "Direct LUN" disk with the iSCSI device
- created a VM in oVirt, attached the "Direct LUN" disk to it and set 
the "bootable" flag

- started the VM
- console displays "boot failed: not a bootable disk" :-(

i tried virtIO, virtIO-SCSI and IDE interfaces for the disk, no change
i ran "scan alignment" for the disk, no change
i tried without the bootable flag, no change

a strange thing is the wrong virtual size the oVirt GUI displays for this 
disk: the GUI says the virtual size is 372GB, "qemu-img info" (on the oVirt 
host) says the virtual size is 47GB (which was the size in the foreign KVM 
system)
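
for what it's worth, a sketch of how one might compare what the host sees
on such a LUN (the multipath device name below is a made-up example):

# size and format as qemu sees it
qemu-img info /dev/mapper/360014051234567890
# block device size in bytes and partition layout
lsblk -b /dev/mapper/360014051234567890
# leftover LVM metadata from the old system would show up as LVM2_member
blkid /dev/mapper/360014051234567890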


what could be wrong? can this work at all?
could LVM metadata from the old system be a problem?

i know the whole operation is a little crazy, but if this worked the 
migration process would be so much easier...


thx
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Best practices for LACP bonds on oVirt

2017-07-04 Thread Matthias Leopold



On 2017-07-04 at 10:01, Simone Tiraboschi wrote:



On Tue, Jul 4, 2017 at 5:50 AM, Vinícius Ferrão wrote:


Thanks, Konstantin.

Just to be clear enough: the first deployment would be made on
classic eth interfaces and later after the deployment of Hosted
Engine I can convert the "ovirtmgmt" network to a LACP Bond, right?

Another question: what about iSCSI Multipath on Self Hosted Engine?
I've looked through the net and only found this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1193961


Appears to be unsupported as of today, but there's a workaround in the
comments. Is it safe to deploy this way? Should I use NFS instead?


It's probably not the most tested path, but once you have an engine you 
should be able to create an iSCSI bond on your hosts from the engine.
Network configuration is persisted across host reboots, and so is the iSCSI 
bond configuration.


A different story is instead having ovirt-ha-agent connect to multiple 
IQNs or multiple targets over your SAN. This is currently not supported 
for the hosted-engine storage domain.

See:
https://bugzilla.redhat.com/show_bug.cgi?id=1149579



Hi Simone,

i think my recent post to this list titled "iSCSI multipathing setup 
troubles" is about the exact same problem, except i'm not talking 
about the hosted-engine storage domain. i would like to configure _any_ 
iSCSI storage domain the way you describe it in 
https://bugzilla.redhat.com/show_bug.cgi?id=1149579#c1. i would like to 
do so using the oVirt "iSCSI Multipathing" GUI after everything else is 
set up. i can't find a way to do this. is this possible now? i think the 
iSCSI Multipathing documentation could be improved by describing an 
example IP setup for this.


thanks a lot
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] iSCSI multipathing setup troubles

2017-07-03 Thread Matthias Leopold

hi,

i'm trying to use iSCSI multipathing for a LUN shared by a Hitachi SAN. 
i can't figure out how this is supposed to work, maybe my setup isn't 
applicable at all...


our storage admin shared the same LUN for me on two targets, which are 
located in two logical networks connected to different switches (i asked 
him to do so). on the oVirt hypervisor side there is only one bonded 
interface for storage traffic, so i configured two VLAN interfaces 
located in these networks on the bond interface.


now i create the storage domain by logging in to one of the targets, 
connecting through its logical network. when i try to create a "second" 
storage domain for the same LUN by logging in to the second target, oVirt 
tells me "LUN is already in use". i understand this, but now i can't 
configure an oVirt "iSCSI Bond" in any way.


how is this supposed to work?
right now the only working setup i can think of would be an iSCSI target 
that uses a redundant bond interface (with only one IP addresss) to 
which my hypervisors connect through different routed networks (using 
either dedicated network cards or vlan interfaces). is that correct?


i feel like i'm missing something, but i couldn't find any examples of 
real-world IP setups for iSCSI multipathing.
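
to illustrate what i mean, a sketch of the kind of layout i have (all
addresses below are made up for the example):

storage VLAN A: host bond0.101 = 10.10.1.11/24, target portal 10.10.1.100:3260
storage VLAN B: host bond0.102 = 10.10.2.11/24, target portal 10.10.2.100:3260

# discovery against each portal, so the same LUN is reachable on two paths
iscsiadm -m discovery -t sendtargets -p 10.10.1.100
iscsiadm -m discovery -t sendtargets -p 10.10.2.100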


thanks for explaining
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt minor upgrades for nodes via GUI or CLI?

2017-04-26 Thread Matthias Leopold



On 2017-04-25 at 07:36, Yedidyah Bar David wrote:

On Mon, Apr 24, 2017 at 5:41 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:



On 2017-04-24 at 16:37, Yedidyah Bar David wrote:


On Mon, Apr 24, 2017 at 5:28 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:


hi,

i'm still testing ovirt 4.1.

i installed engine and 2 nodes in vanilla centos 7.3 hosts with everything
that came from
http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

i regularly checked for updates in the engine host OS with "yum update"
(is there a gui option for this?). it obviously got an ovirt update from
version 4.1.0 to 4.1.1.1 already some time ago.

i regularly checked for updates in the nodes via the ovirt web gui
(installation - check for upgrade). there were package updates available
and installed in the past so i thought that everything was fine.

now i checked with "yum check-update" in the nodes' OS shell and noticed
that ovirt-release41 is still on 4.1.0 and there are 81 packages available
for update (from centos base _and_ ovirt repos including ovirt-release41
itself). ovirt gui tells me 'no updates found'.



I think this function only checks for specific packages, not everything
yum reports.



why didn't these updates get installed? is it because of the
ovirt-release41 update? do i have to do this manually with yum?



ovirt-release41 itself is not one of these packages, and should in
principle
be considered "another package" (just like any other package you installed
on your machine).

Which packages does yum say you have updates for?



first my repos, to be sure:

# yum repolist
...
Repo-ID                                    Repo-Name:                                        Status
base/7/x86_64                              CentOS-7 - Base                                    9.363
centos-opstools-testing/x86_64             CentOS-7 - OpsTools - testing repo                   448
centos-ovirt-common-candidate/x86_64       CentOS-7 - oVirt common                              198
centos-ovirt41-candidate/x86_64            CentOS-7 - oVirt 4.1                                  95
extras/7/x86_64                            CentOS-7 - Extras                                    337
ovirt-4.1/7                                Latest oVirt 4.1 Release                             455
ovirt-4.1-centos-gluster38/x86_64          CentOS-7 - Gluster 3.8                               181
ovirt-4.1-epel/x86_64                      Extra Packages for Enterprise Linux 7 - x86_64    11.550
ovirt-4.1-patternfly1-noarch-epel/x86_64   Copr repo for patternfly1 owned by patternfly          2
updates/7/x86_64                           CentOS-7 - Updates                                 1.575
virtio-win-stable                          virtio-win builds roughly matching what was shipped in latest RHEL  4
repolist: 24.208

# yum check-update
...
NetworkManager.x86_64              1:1.4.0-19.el7_3     updates
...
util-linux.x86_64                  2.23.2-33.el7_3.2    updates


Looks ok to me. Please check the upgrade guide for details:

http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/

It wasn't updated for 4.1 but the principles remain the same.
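
For the engine host, the minor-update procedure from that guide boils down
to roughly this (a condensed sketch, not a substitute for the guide):

# on the engine host
yum update ovirt\*setup\*
engine-setup
# afterwards, remaining OS packages
yum update

Hosts are then updated from the web GUI (Installation -> Check for
Upgrade, then Upgrade).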

Best,



thank you very much. i should have looked at the docs first

regards
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt minor upgrades for nodes via GUI or CLI?

2017-04-24 Thread Matthias Leopold



On 2017-04-24 at 16:37, Yedidyah Bar David wrote:

On Mon, Apr 24, 2017 at 5:28 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:

hi,

i'm still testing ovirt 4.1.

i installed engine and 2 nodes in vanilla centos 7.3 hosts with everything
that came from http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

i regularly checked for updates in the engine host OS with "yum update" (is
there a gui option for this?). it obviously got an ovirt update from version
4.1.0 to 4.1.1.1 already some time ago.

i regularly checked for updates in the nodes via the ovirt web gui
(installation - check for upgrade). there were package updates available
and installed in the past so i thought that everything was fine.

now i checked with "yum check-update" in the nodes' OS shell and noticed that
ovirt-release41 is still on 4.1.0 and there are 81 packages available for
update (from centos base _and_ ovirt repos including ovirt-release41
itself). ovirt gui tells me 'no updates found'.


I think this function only checks for specific packages, not everything
yum reports.



why didn't these updates get installed? is it because of the ovirt-release41
update? do i have to do this manually with yum?


ovirt-release41 itself is not one of these packages, and should in principle
be considered "another package" (just like any other package you installed
on your machine).

Which packages does yum say you have updates for?


first my repos, to be sure:

# yum repolist
...
Repo-ID                                    Repo-Name:                                        Status
base/7/x86_64                              CentOS-7 - Base                                    9.363
centos-opstools-testing/x86_64             CentOS-7 - OpsTools - testing repo                   448
centos-ovirt-common-candidate/x86_64       CentOS-7 - oVirt common                              198
centos-ovirt41-candidate/x86_64            CentOS-7 - oVirt 4.1                                  95
extras/7/x86_64                            CentOS-7 - Extras                                    337
ovirt-4.1/7                                Latest oVirt 4.1 Release                             455
ovirt-4.1-centos-gluster38/x86_64          CentOS-7 - Gluster 3.8                               181
ovirt-4.1-epel/x86_64                      Extra Packages for Enterprise Linux 7 - x86_64    11.550
ovirt-4.1-patternfly1-noarch-epel/x86_64   Copr repo for patternfly1 owned by patternfly          2
updates/7/x86_64                           CentOS-7 - Updates                                 1.575
virtio-win-stable                          virtio-win builds roughly matching what was shipped in latest RHEL  4

repolist: 24.208

# yum check-update
...
NetworkManager.x86_64                1:1.4.0-19.el7_3      updates
NetworkManager-config-server.x86_64  1:1.4.0-19.el7_3      updates
NetworkManager-libnm.x86_64          1:1.4.0-19.el7_3      updates
NetworkManager-team.x86_64           1:1.4.0-19.el7_3      updates
NetworkManager-tui.x86_64            1:1.4.0-19.el7_3      updates
NetworkManager-wifi.x86_64           1:1.4.0-19.el7_3      updates
bind-libs-lite.x86_64                32:9.9.4-38.el7_3.3   updates
bind-license.noarch                  32:9.9.4-38.el7_3.3   updates
ca-certificates.noarch               2017.2.11-70.1.el7_3  updates
dmidecode.x86_64                     1:3.0-2.1.el7_3       updates
fence-agents-all.x86_64              4.0.11-47.el7_3.5     updates
fence-agents-apc.x86_64              4.0.11-47.el7_3.5     updates
fence-agents-apc-snmp.x86_64         4.0.11-47.el7_3.5     updates
fence-agents-bladecenter.x86_64      4.0.11-47.el7_3.5     updates
fence-agents-brocade.x86_64          4.0.11-47.el7_3.5     updates
fence-agents-cisco-mds.x86_64        4.0.11-47.el7_3.5     updates
fence-agents-cisco-ucs.x86_64        4.0.11-47.el7_3.5     updates
fence-agents-common.x86_64           4.0.11-47.el7_3.5     updates
fence-agents-compute.x86_64          4.0.11-47.el7_3.5     updates
fence-agents-drac5.x86_64            4.0.11-47.el7_3.5     updates
fence-agents-eaton-snmp.x86_64       4.0.11-47.el7_3.5     updates
fence-agents-emerson.x86_64          4.0.11-47.el7_3.5     updates
[message truncated in the archive]

Re: [ovirt-users] Internet access for oVirt Nodes?

2017-05-18 Thread Matthias Leopold



On 2017-05-16 at 22:01, Ryan Barry wrote:


On Mon, May 15, 2017 at 1:09 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


thanks, i guess configuring repositories in oVirt Node can only be
achieved when using Foreman/Satellite integration, is that correct?
i've just started to use oVirt Node and i'm beginning to realize
that things are a _little_ bit different compared to a standard
linux host.


Well, yes/no. We whitelist which packages are able to be updated from 
the oVirt repositories and disable the base centos repositories, but you 
can easily change "enabled=0" to "enabled=1" in any of them, or add your 
own repos just like you would with CentOS.
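
For instance, a quick sketch of re-enabling a disabled base repo on a Node
(the repo id below is the stock CentOS one; yum-config-manager comes from
yum-utils):

# flip enabled=0 to enabled=1 in the matching .repo file
yum-config-manager --enable base
# or simply edit /etc/yum.repos.d/CentOS-Base.repo by hand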


In general, I'd recommend not including updates for any packages which 
are part of Node itself, but that decision is yours to make.



this brings me to another update-related question:
right now oVirt Nodes in my test environment can connect to the
internet and there recently was an update available which i applied
through the engine gui, which seemed to finish successfully. i
remember wondering how i could check what actually changed, there
was e.g. no kernel change IIRC. today i discovered that on both
updated hosts /tmp/imgbased.log exists and ends in an error:


Node is still an A/B image, so you'd need to reboot in order to see a 
new kernel, if it's part of a new image.



subprocess.CalledProcessError: Command '['lvcreate', '--thin',
'--virtualsize', u'8506048512B', '--name',
'ovirt-node-ng-4.1.1.1-0.20170406.0', u'HostVG/pool00']' returned
non-zero exit status 5

i have to mention i manually partitioned my oVirt Node host when i
installed it from the installer ISO (because i want to use software
raid).
i used partitioning recommendations from
https://bugzilla.redhat.com/show_bug.cgi?id=1369874
<https://bugzilla.redhat.com/show_bug.cgi?id=1369874> (doubling size
recommendations).


As long as you're thinly provisioned, this should update normally, 
though I have to say that I haven't tried software RAID.



did my oVirt Node update complete successfully?
how can i check this?
why was there an lvcreate error?


I'll try to reproduce this, but attempting the lvcreate by hand may give 
some usable debugging information.



'imgbase layout' says:
ovirt-node-ng-4.1.1.1-0.20170406.0
  +- ovirt-node-ng-4.1.1.1-0.20170406.0+1


If 'imgbase layout' only shows these, then it's likely that it didn't 
update. Node uses LVM directly, so "lvm lvs" may show a new device, but 
from the command above, I'm guessing it wasn't able to create it. I'd 
suspect that it wasn't able to create it because it's the same version, 
and LVM sees a duplicate LV. Can you attach your engine log (or the yum 
log from the host) so we can see what it pulled?




i'll try the update with another oVirt Node where i'll stay with 
standard partitioning and see what happens. i have to understand the 
update process more thoroughly anyway


thx
matthias


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Internet access for oVirt Nodes?

2017-05-15 Thread Matthias Leopold

On 2017-05-15 16:33, Ryan Barry wrote:

On Mon, May 15, 2017 at 5:00 AM, Matthias Leopold wrote:


hi,

do hypervisors that are running oVirt Node (not standard
CentOS/RHEL)
need internet access for updates, or can they be in a private,
non-routed
network (with updates happening via the engine)? it seems the latter is the
case, but i want to be sure

thx
matthias


Engine isn't very particular about updating in this case. As long as
any repository is configured where 'yum check-update
ovirt-node-ng-image-update' is true, upgrades from engine will work.

In general, otopi's miniyum is a bit smarter than base yum, so
'check-update ...' is not always a reliable mechanism to verify this,
but yes, a local repo in a non-routed network which presents the
update will show an update from engine.


thanks, i guess configuring repositories in oVirt Node can only be 
achieved when using Foreman/Satellite integration, is that correct? i've 
just started to use oVirt Node and i'm beginning to realize that things 
are a _little_ bit different compared to a standard linux host.


this brings me to another update-related question:
right now oVirt Nodes in my test environment can connect to the internet 
and there recently was an update available which i applied through the 
engine gui, which seemed to finish successfully. i remember wondering 
how i could check what actually changed, there was e.g. no kernel change 
IIRC. today i discovered that on both updated hosts /tmp/imgbased.log 
exists and ends in an error:


subprocess.CalledProcessError: Command '['lvcreate', '--thin', 
'--virtualsize', u'8506048512B', '--name', 
'ovirt-node-ng-4.1.1.1-0.20170406.0', u'HostVG/pool00']' returned 
non-zero exit status 5


i have to mention i manually partitioned my oVirt Node host when i 
installed it from the installer ISO (because i want to use software 
raid).
i used partitioning recommendations from 
https://bugzilla.redhat.com/show_bug.cgi?id=1369874 (doubling size 
recommendations).


did my oVirt Node update complete successfully?
how can i check this?
why was there an lvcreate error?

'imgbase layout' says:
ovirt-node-ng-4.1.1.1-0.20170406.0
 +- ovirt-node-ng-4.1.1.1-0.20170406.0+1

kernel version is:
3.10.0-514.10.2.el7.x86_64

thanks a lot again
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] SPICE keymap ?

2017-06-22 Thread Matthias Leopold

hi,

i'm looking for a way to change the SPICE keymap for a VM. i couldn't 
find it. i couldn't find a way to change it in the client either (linux 
remote-viewer application). this is probably easy, thanks anyway...


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SPICE keymap ?

2017-06-23 Thread Matthias Leopold

On 2017-06-22 at 13:23, Matthias Leopold wrote:

hi,

i'm looking for a way to change the SPICE keymap for a VM. i couldn't 
find it. i couldn't find a way to change it in the client either (linux 
remote-viewer application). this is probably easy, thanks anyway...


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


as i said, it was easy. it's obviously a matter of guest OS keymap 
configuration (localectl set-keymap in CentOS 7). i dared to ask because 
i found configuration stanzas for qemu like

[qemu/libvirt configuration stanza stripped by the archive]

via google. i don't know if this is obsolete, i was suspicious from the 
beginning that there is no configuration option in oVirt...
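
for the record, a minimal sketch of the guest-side change on CentOS 7
(the keymap name is just an example):

# inside the guest
localectl set-keymap de
localectl status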


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Internet access for oVirt Nodes?

2017-05-20 Thread Matthias Leopold

On 2017-05-16 22:01, Ryan Barry wrote:

On Mon, May 15, 2017 at 1:09 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:


thanks, i guess configuring repositories in oVirt Node can only be
achieved when using Foreman/Satellite integration, is that correct?
i've just started to use oVirt Node and i'm beginning to realize
that things are a _little_ bit different compared to a standard
linux host.


Well, yes/no. We whitelist which packages are able to be updated from
the oVirt repositories and disable the base centos repositories, but
you can easily change "enabled=0" to "enabled=1" in any of them, or
add your own repos just like you would with CentOS.

In general, I'd recommend not including updates for any packages which
are part of Node itself, but that decision is yours to make.


this brings me to another update-related question:
right now oVirt Nodes in my test environment can connect to the
internet and there recently was an update available which i applied
through the engine gui, which seemed to finish successfully. i
remember wondering how i could check what actually changed, there
was e.g. no kernel change IIRC. today i discovered that on both
updated hosts /tmp/imgbased.log exists and ends in an error:


Node is still an A/B image, so you'd need to reboot in order to see a
new kernel, if it's part of a new image.


subprocess.CalledProcessError: Command '['lvcreate', '--thin',
'--virtualsize', u'8506048512B', '--name',
'ovirt-node-ng-4.1.1.1-0.20170406.0', u'HostVG/pool00']' returned
non-zero exit status 5

i have to mention i manually partitioned my oVirt Node host when i
installed it from the installer ISO (because i want to use software
raid).
i used partitioning recommendations from
https://bugzilla.redhat.com/show_bug.cgi?id=1369874 [1] (doubling
size recommendations).


As long as you're thinly provisioned, this should update normally,
though I have to say that I haven't tried software RAID.


did my oVirt Node update complete successfully?
how can i check this?
why was there an lvcreate error?


I'll try to reproduce this, but attempting the lvcreate by hand may
give some usable debugging information.


'imgbase layout' says:
ovirt-node-ng-4.1.1.1-0.20170406.0
+- ovirt-node-ng-4.1.1.1-0.20170406.0+1


If 'imgbase layout' only shows these, then it's likely that it didn't
update. Node uses LVM directly, so "lvm lvs" may show a new device,
but from the command above, I'm guessing it wasn't able to create it.
I'd suspect that it wasn't able to create it because it's the same
version, and LVM sees a duplicate LV. Can you attach your engine log
(or the yum log from the host) so we can see what it pulled?


ok, after _hours_ of debugging and reinstalling i came to the following 
conclusion (which may point to a bug):

when i install oVirt Node from the 
ovirt-node-ng-installer-ovirt-4.1-2017040614.iso, register the Node in 
my engine and check for updates, the engine tells me about an available 
update. when i apply this update everything seems to be ok (and in fact 
everything _is_ ok). what happens as an "update" is that the packages 
ovirt-node-ng-image-4.1.1.1-1.el7.centos.noarch and 
ovirt-node-ng-image-update-4.1.1.1-1.el7.centos.noarch are installed and 
the postinstall script for ovirt-node-ng-image-update is executed, which 
calls "imgbase update". this fails with the above-mentioned lvcreate 
error because it's the same version and the volume is already there 
(like you suspected). why this useless "update" happens is beyond me, 
but because i never before saw a "real" update and i'm using this 
non-standard setup with software raid and manual partitioning, i was so 
anxious that something might be wrong in my setup that i desperately 
looked for an explanation for this "error". what helped me understand was 
a "real" update from version 4.1.1 to 4.1.1.1. i hope all of this might 
be of use to somebody; i spent a lot of time, but now i'm ok...
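
in case it helps others, a short sketch of how one can verify the image
state on a Node after such an "update" (commands from oVirt Node 4.1):

# list the installed layers and which one is in use
imgbase layout
# node health, including whether the current layer boots cleanly
nodectl check
nodectl info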


thx
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] trouble when creating VM snapshots including memory

2017-06-12 Thread Matthias Leopold



On 2017-06-11 at 10:11, Yaniv Kaul wrote:



On Fri, Jun 9, 2017 at 3:39 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


hi,

i'm having trouble creating VM snapshots that include memory in my
oVirt 4.1 test environment. when i do this the VM gets paused and
shortly (20-30s) afterwards i'm seeing messages in engine.log about
both iSCSI storage domains (master storage domain and data storage
where VM resides) experiencing high latency. this quickly worsens
from the engine's view: VM is unresponsive, Host is unresponsive,
engine wants to fence the host (impossible because it's the only
host in the test cluster). in the end there is an EngineException

EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Message timeout which can
be caused by communication issues (Failed with error
VDS_NETWORK_ERROR and code 5022)

the snapshot fails and is left in an inconsistent state. the
situation has to be resolved manually with unlock_entity.sh and
maybe lvm commands. this happened twice in exactly the same manner.
VM snapshots without memory for this VM are not a problem.

VM guest OS is CentOS7 installed from one of the
ovirt-image-repository images. it has the oVirt guest agent running.

what could be wrong?

this is a test environment where lots of parameters aren't optimal
but i never had problems like this before, nothing concerning
network latency. iSCSI is on a FreeNAS box. CPU, RAM, ethernet
(10GBit for storage) on all hosts involved (engine hosted
externally, oVirt Node, storage) should be OK by far.


Are you sure iSCSI traffic is going over the 10gb interfaces?
If it doesn't, it might choke the mgmt interface.
Regardless, how is the performance of the storage? I don't expect it to 
require too much, but saving the memory might require some storage 
performance. Perhaps there's a bottleneck there?

Y.


i shot myself in the foot by also playing around with network QoS and 
forgetting about it ... no wonder the network chokes when i tell it to 
do so. without randomly applied QoS profiles snapshots work perfectly ;-)


thx
matthias



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] trouble when creating VM snapshots including memory

2017-06-12 Thread Matthias Leopold



On 2017-06-09 at 21:48, Karli Sjöberg wrote:



On 9 June 2017 at 21:40, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


hi,

i'm having trouble creating VM snapshots that include memory in my
oVirt
4.1 test environment. when i do this the VM gets paused and shortly
(20-30s) afterwards i'm seeing messages in engine.log about both iSCSI
storage domains (master storage domain and data storage where VM
resides) experiencing high latency. this quickly worsens from the
engine's view: VM is unresponsive, Host is unresponsive, engine wants to
fence the host (impossible because it's the only host in the test
cluster). in the end there is an EngineException

EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
VDSGenericException: VDSNetworkException: Message timeout which can be
caused by communication issues (Failed with error VDS_NETWORK_ERROR and
code 5022)

the snapshot fails and is left in an inconsistent state. the situation
has to be resolved manually with unlock_entity.sh and maybe lvm
commands. this happened twice in exactly the same manner. VM snapshots
without memory for this VM are not a problem.

VM guest OS is CentOS7 installed from one of the ovirt-image-repository
images. it has the oVirt guest agent running.

what could be wrong?


Seems to me that the snapshot operation, where the host needs to save 
all of the VM memory, chokes the storage pipe; the host becomes 
"unresponsive" from the engine's point of view and all goes up shit creek. 
How is the hypervisor connected to the storage, in more detail?


/K


i shot myself in the foot by also playing around with network QoS and 
forgetting about it ... no wonder the network chokes when i tell it to 
do so. without randomly applied QoS profiles snapshots work perfectly ;-)


thx
matthias


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] trouble when creating VM snapshots including memory

2017-06-09 Thread Matthias Leopold

hi,

i'm having trouble creating VM snapshots that include memory in my oVirt 
4.1 test environment. when i do this the VM gets paused and shortly 
(20-30s) afterwards i'm seeing messages in engine.log about both iSCSI 
storage domains (master storage domain and data storage where VM 
resides) experiencing high latency. this quickly worsens from the 
engine's view: VM is unresponsive, Host is unresponsive, engine wants to 
fence the host (impossible because it's the only host in the test 
cluster). in the end there is an EngineException


EngineException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: 
VDSGenericException: VDSNetworkException: Message timeout which can be 
caused by communication issues (Failed with error VDS_NETWORK_ERROR and 
code 5022)


the snapshot fails and is left in an inconsistent state. the situation 
has to be resolved manually with unlock_entity.sh and maybe lvm 
commands. this happened twice in exactly the same manner. VM snapshots 
without memory for this VM are not a problem.


VM guest OS is CentOS7 installed from one of the ovirt-image-repository 
images. it has the oVirt guest agent running.


what could be wrong?

this is a test environment where lots of parameters aren't optimal but i 
never had problems like this before, nothing concerning network latency. 
iSCSI is on a FreeNAS box. CPU, RAM, ethernet (10GBit for storage) on 
all hosts involved (engine hosted externally, oVirt Node, storage) 
should be OK by far.


it looks like some obvious configuration botch or performance bottleneck 
to me. can it be linked to the network roles (management and migration 
network are on a 1 GBit link)?


i'm still new to this, not a lot of KVM experience, too. maybe someone 
recognizes the culprit...


thx
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread Matthias Leopold



On 2017-05-04 at 09:00, knarra wrote:

On 05/04/2017 12:28 PM, knarra wrote:

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2
node gluster cluster to my oVirt 4.1 data center (just for testing, i
know it can't have HA with 2 nodes). provisioning of the storage
hosts did apparently work and my storage cluster seems to be
operational.

i have very little understanding of glusterfs right now, but what i
think i am missing in the interface is a way to configure
volumes/bricks on my gluster cluster/hosts so i can use them for
storage domains (i want to use a "managed gluster volume"), the drop
down "Gluster" in "New domain" is empty. all i could find for storage
specific UI was the "Services" tab for the storage cluster which is
empty.

once gluster hosts are added into the UI, the user will be able to see
volumes created on those hosts and to use them as storage domains. For
this you will need to create a new storage domain with the mount path
as the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is
located on a dedicated server and i used iSCSI storage for the data
master domain. my hosts (for hypervisors and gluster storage) were
installed on top of centos7, not using oVirt Node.

does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.

Installing gluster packages has to be done manually. Once gluster packages
are installed you can create a gluster cluster from the ovirt UI, add
gluster hosts and create volumes on them using the volumes tab.


i'm sorry, but i'm missing all these "Gluster Volumes" UI components 
that are mentioned in 
http://www.ovirt.org/develop/release-management/features/gluster/gluster-support/. 
the tabs i see for my storage cluster are "General", "Logical Networks", 
"Hosts", "Services", "Permissions". as i said "Services" is empty, is 
that a problem?


what's wrong?

thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread Matthias Leopold



On 2017-05-04 at 10:40, knarra wrote:

On 05/04/2017 01:55 PM, Matthias Leopold wrote:



On 2017-05-04 at 10:21, knarra wrote:

On 05/04/2017 01:16 PM, Matthias Leopold wrote:



On 2017-05-04 at 09:00, knarra wrote:

On 05/04/2017 12:28 PM, knarra wrote:

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2
node gluster cluster to my oVirt 4.1 data center (just for
testing, i
know it can't have HA with 2 nodes). provisioning of the storage
hosts did apparently work and my storage cluster seems to be
operational.

i have very little understanding of glusterfs right now, but what i
think i am missing in the interface is a way to configure
volumes/bricks on my gluster cluster/hosts so i can use them for
storage domains (i want to use a "managed gluster volume"), the drop
down "Gluster" in "New domain" is empty. all i could find for
storage
specific UI was the "Services" tab for the storage cluster which is
empty.

once gluster hosts are added into the UI, the user will be able to see
volumes created on those hosts and to use them as storage domains. For
this you will need to create a new storage domain with the mount path
as the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is
located on a dedicated server and i used iSCSI storage for the data
master domain. my hosts (for hypervisors and gluster storage) were
installed on top of centos7, not using oVirt Node.

does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.

Installing gluster packages has to be done manually. Once gluster
packages are installed you can create a gluster cluster from the ovirt
UI, add gluster hosts and create volumes on them using the volumes tab.


i'm sorry, but i'm missing all these "Gluster Volumes" UI components
that are mentioned in
http://www.ovirt.org/develop/release-management/features/gluster/gluster-support/.

the tabs i see for my storage cluster are "General", "Logical
Networks", "Hosts", "Services", "Permissions". as i said "Services" is
empty, is that a problem?

what's wrong?

thx
matthias


Does your cluster have both virt+gluster enabled, or only virt? If only
virt, you will not be able to see them.

If the cluster has both virt+gluster service enabled, or only the gluster
service enabled, you should be able to see them.


my storage cluster has only gluster service enabled

matthias



I think you have selected the cluster and you are referring to the sub
tabs for that cluster. There should be a main tab called 'Volumes' which
is present. Are you not seeing that? I have attached a screenshot of the
same.



thanks for the screenshot, now i know how it should look. i'm 
attaching my screenshot. i'm missing a couple of elements, especially 
"Cluster Node Type" (i don't have that in my VM cluster either). is 
there an obvious explanation? the next step would be to recreate the gluster 
cluster with "clean" oVirt Nodes. maybe my storage hosts are botched, i 
had glusterfs 3.10 packages installed on one of them previously


thanks a lot so far
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] configuring gluster volumes/bricks from ovirt ??

2017-05-04 Thread Matthias Leopold



On 2017-05-04 at 12:21, knarra wrote:

On 05/04/2017 02:57 PM, Matthias Leopold wrote:



On 2017-05-04 at 10:40, knarra wrote:

On 05/04/2017 01:55 PM, Matthias Leopold wrote:



On 2017-05-04 at 10:21, knarra wrote:

On 05/04/2017 01:16 PM, Matthias Leopold wrote:



On 2017-05-04 at 09:00, knarra wrote:

On 05/04/2017 12:28 PM, knarra wrote:

On 05/03/2017 07:44 PM, Matthias Leopold wrote:

hi,


i'm trying to get into this gluster thing with oVirt and added a 2
node gluster cluster to my oVirt 4.1 data center (just for
testing, i
know it can't have HA with 2 nodes). provisioning of the storage
hosts did apparently work and my storage cluster seems to be
operational.

i have very little understanding of glusterfs right now, but
what i
think i am missing in the interface is a way to configure
volumes/bricks on my gluster cluster/hosts so i can use them for
storage domains (i want to use a "managed gluster volume"), the
drop
down "Gluster" in "New domain" is empty. all i could find for
storage
specific UI was the "Services" tab for the storage cluster
which is
empty.

once gluster hosts are added into the UI, the user will be able to see
volumes created on those hosts and to use them as storage
domains. For this you will need to create a new storage domain with the
mount path as the gluster volume path.


i'm not using a hyperconverged/self-hosted setup, my engine is
located on a dedicated server and i used iSCSI storage for the data
master domain. my hosts (for hypervisors and gluster storage)
were installed on top of centos7, not using oVirt Node.

does my setup make sense (it's only for testing)?
do i have to configure gluster hosts manually?

yes, you will have to do this manually.

Installing gluster packages has to be done manually. Once gluster
packages are installed you can create a gluster cluster from the ovirt
UI, add gluster hosts and create volumes on them using the volumes tab.


i'm sorry, but i'm missing all these "Gluster Volumes" UI components
that are mentioned in
http://www.ovirt.org/develop/release-management/features/gluster/gluster-support/.


the tabs i see for my storage cluster are "General", "Logical
Networks", "Hosts", "Services", "Permissions". as i said
"Services" is
empty, is that a problem?

what's wrong?

thx
matthias


Does your cluster have both virt+gluster enabled, or only virt? If only
virt, you will not be able to see them.

If the cluster has both virt+gluster service enabled, or only the gluster
service enabled, you should be able to see them.


my storage cluster has only gluster service enabled

matthias



I think you have selected the cluster and you are referring to the sub
tabs for that cluster. There should be a main tab called 'Volumes' which
is present. Are you not seeing that? I have attached a screenshot of the
same.



thanks for the screenshot, now i know how it should look. i'm
attaching my screenshot. i'm missing a couple of elements, especially
"Cluster Node Type" (i don't have that in my VM cluster either). is
there an obvious explanation? the next step would be to recreate the
gluster cluster with "clean" oVirt Nodes. maybe my storage hosts are
botched, i had glusterfs 3.10 packages installed on one of them
previously

thanks a lot so far
matthias


During engine-setup, when the application mode was asked, I hope you
set "Both".



no, i didn't... (didn't know what i was doing then)
i'm learning it the hard way... going to start again from scratch...
still i think oVirt is a great product, thanks for the software and support ;-)

matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk image upload via CLI?

2017-09-14 Thread Matthias Leopold

Hi Daniel and other friendly contributors,

finally i sorted out how to set provisioned_size/initial_size correctly 
in upload_disk.py and my error is gone. It wasn't so easy, but maybe i 
took an awkward route when starting with a preallocated qcow2 image. In 
this special case you have to set provisioned_size to st_size, whereas 
with sparse images provisioned_size is the "virtual size" from "qemu-img 
info". This may seem obvious to others, i took the hard route.


My approach stems from my desire to repeat the exact example in 
upload_disk.py (which uses a qcow image) and my actual use case, which 
is uploading a rather large image converted from vmdk (i have only tested 
this with the raw format so far), so i wanted to have some "real large" 
data to upload.
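
for reference, a small sketch of where the two values come from (the file
name is a placeholder):

# apparent size (st_size) -- what worked here as provisioned_size for a
# preallocated qcow2 image
stat -c %s disk.qcow2
# virtual size -- the value to use as provisioned_size for sparse images
qemu-img info disk.qcow2 | grep 'virtual size'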


@nsoffer:
I'll open a bug for better ovirt-imageio-daemon as soon as i can.

thanks a lot for help
matthias

On 2017-09-13 at 16:49, Daniel Erez wrote:

Hi Matthias,

The 403 response from the daemon means the ticket can't be authenticated
(for some reason). I assume that the issue here is the initial size of 
the disk.

When uploading/downloading a qcow image, you should specify the apparent
size of the file (see 'st_size' in [1]). You can get it simply by 'ls
-l' [2] (which is a different value from 'disk size' of qemu-img info [3]).
btw, why are you creating a preallocated qcow disk? For what use-case?

[1] https://linux.die.net/man/2/stat

[2] $ ls -l test.qcow2
-rw-r--r--. 1 user user 1074135040 Sep 13 16:50 test.qcow2

[3]
$ qemu-img create -f qcow2 -o preallocation=full test.qcow2 1g
$ qemu-img info test.qcow2
image: test.qcow2
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 1.0G
cluster_size: 65536
Format specific information:
 compat: 1.1
 lazy refcounts: false
 refcount bits: 16
 corrupt: false



On Wed, Sep 13, 2017 at 5:03 PM Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


i tried it again twice:

when using upload_disk.py from the ovirt engine host itself the disk
upload succeeds (despite a "503 Service Unavailable Completed 100%" in
the script output at the end)

another try was from an ovirt-sdk installation on my ubuntu desktop
itself (yesterday i tried it from a centos VM on my desktop machine).
this failed again, this time with "socket.error: [Errno 32] Broken pipe"
after reaching "200 OK Completed 100%". in the imageio-proxy log i have
again the 403 error at this moment

what's the difference between accessing the API from the engine host and
from "outside" in this case?

thx
matthias

On 2017-09-12 at 16:42, Matthias Leopold wrote:
 > Thanks, i tried this script and it _almost_ worked ;-)
 >
 > i uploaded two images i created with
 > qemu-img create -f qcow2 -o preallocation=full
 > and
 > qemu-img create -f qcow2 -o preallocation=falloc
 >
 > for initial_size and provisioned_size i took the value reported by
 > "qemu-img info" in "virtual size" (same as "disk size" in this case)
 >
 > the upload goes to 100% and then fails with
 >
 > 200 OK Completed 100%
 > Traceback (most recent call last):
 >File "./upload_disk.py", line 157, in 
 >  headers=upload_headers,
 >File "/usr/lib64/python2.7/httplib.py", line 1017, in request
 >  self._send_request(method, url, body, headers)
 >File "/usr/lib64/python2.7/httplib.py", line 1051, in
_send_request
 >  self.endheaders(body)
 >File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
 >  self._send_output(message_body)
 >File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
 >  self.send(msg)
 >File "/usr/lib64/python2.7/httplib.py", line 840, in send
 >  self.sock.sendall(data)
 >File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
 >  v = self.send(data[count:])
 >File "/usr/lib64/python2.7/ssl.py", line 712, in send
 >  v = self._sslobj.write(data)
 > socket.error: [Errno 104] Connection reset by peer
 >
 > in web GUI the disk stays in Status: "Transferring via API"
 > it can only be removed when manually unlocking it (unlock_entity.sh)
 >
 > engine.log tells nothing interesting
 >
 > i attached the last lines of ovirt-imageio-proxy/image-proxy.log and
 > ovirt-imageio-daemon/daemon.log (from the executing node)
 >
 > the HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look too
 > nice to me
 >
 > can you explain what happens?

Re: [ovirt-users] oVirt Node update question

2017-09-22 Thread Matthias Leopold

Hi Yuval,

i updated my nodes from 4.1.3 to 4.1.6 today and noticed that the

> /etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
> /etc/yum.repos.d/ovirt-4.1-pre.repo

files i moved away previously reappeared after rebooting, so i'm getting 
updates to 4.1.7-0.1.rc1.20170919143904.git0c14f08 proposed again. 
obviously i haven't fully understood the "layer" concept of imgbased. 
the practical question for me is: how do i get _permanently_ rid of 
these files in "/etc/yum.repos.d/"?
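
one idea i'm considering (an assumption on my side, not something
confirmed in this thread): since every new image layer brings its own
/etc, disabling the repos might survive an upgrade better than deleting
the files, e.g.:

# repo ids below are my guess at the ids inside those .repo files
yum-config-manager --disable ovirt-4.1-pre ovirt-4.1-pre-dependencies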


thanks
matthias

On 2017-08-31 at 16:24, Yuval Turgeman wrote:

Yes that would do it, thanks for the update :)

On Thu, Aug 31, 2017 at 5:21 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


Hi,

all of the nodes that already made updates in the past have

/etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
/etc/yum.repos.d/ovirt-4.1-pre.repo

i went through the logs in /var/log/ovirt-engine/host-deploy/ and my
own notes and discovered/remembered that this being presented with
RC versions started on 20170707 when i updated my nodes from 4.1.2
to 4.1.3-0.3.rc3.20170622082156.git47b4302 (!). probably there was a
short timespan when you erroneously published an RC version in the
wrong repo, my nodes "caught" it and dragged this along until today
when i finally cared ;-) I moved the
/etc/yum.repos.d/ovirt-4.1-pre*.repo files away and now everything
seems fine

Regards
Matthias

On 2017-08-31 at 15:25, Yuval Turgeman wrote:

Hi,

Don't quite understand how you got to that 4.1.6 rc, it's only
available in the pre-release repo. can you paste the yum repos
that are enabled on your system?

Thanks,
Yuval.

    On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold
    <matthias.leop...@meduniwien.ac.at> wrote:

     Hi,

     thanks a lot.

     So i understand everything is fine with my nodes and i'll
wait until
     the update GUI shows the right version to update (4.1.5 at
the moment).

     Regards
     Matthias


      On 2017-08-31 at 14:56, Yuval Turgeman wrote:

         Hi,

          oVirt node ng is shipped with a placeholder rpm preinstalled.
          The image-update rpms obsolete the placeholder rpm, so once a
          new image-update rpm is published, yum update will pull those
          packages.  So you have 1 system that was a fresh install and the
          others were upgrades.
          Next, the post-install script for those image-update rpms will
          install --justdb the image-update rpms to the new image (so
          running yum update in the new image won't try to pull again the
          same version).

          Regarding the 4.1.6 it's very strange, we'll need to check the
          repos to see why it was published.

          As for nodectl, if there are no changes, it won't be updated and
          you'll see an "old" version or a version that doesn't seem to be
          matching the current image, but it is ok, we are thinking of
          changing its name to make it less confusing.

         Hope this helps,
             Yuval.


          On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold
          <matthias.leop...@meduniwien.ac.at> wrote:

              hi,

               i still don't completely understand the oVirt Node update
               process and the involved rpm packages.

               We have 4 nodes, all running oVirt Node 4.1.3. Three of
               them show as available updates
'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
               (i don't want to run release candidates), one of them shows
               'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
               like). The node that doesn't want to upgrade to

[ovirt-users] multipath configuration for local disks?

2017-10-10 Thread Matthias Leopold

hi,

i'm using three different generations of hardware for my oVirt 
hypervisor hosts. they are not exactly the "same", but very "similar".


- all were installed with oVirt Node 4.1.x installers
- all have (2-4) local SSD disks for oVirt Node OS
- all were configured with manual partitioning and SW RAID 1 with two 
SSD disks for oVirt Node OS


after initializing the third generation of hosts (with 4 SSD disks) i 
noticed the difference in multipath configuration for the local disks: 
suddenly the two OS disks (for RAID 1) are not configured as multipath 
disks anymore, while the two unused disks are multipath disks. in the 
first two host generations the (only) two OS disks (for RAID 1) are 
configured as multipath disks.


after looking at this for the first time and briefly looking into 
multipath documentation i think that local disks shouldn't be included 
in multipath configuration at all. i tried to remove the local disks 
from multipathing (delete from wwids file, explicit blacklisting), to no 
avail.
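
for the record, a sketch of the blacklisting i tried (WWIDs taken from the
listings below; if i read the vdsm docs right, /etc/multipath.conf needs a
"# VDSM PRIVATE" marker in its second line, otherwise vdsm overwrites it):

# /etc/multipath.conf
# VDSM PRIVATE
blacklist {
    wwid SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102214
    wwid SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102216
}

# then flush unused multipath maps and reload the daemon
multipath -F
systemctl reload multipathd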


i'm pasting two listings from the 4 SSD disk situation:

# lsscsi --scsi_id -g
[0:0:0:0]  disk  ATA  SAMSUNG MZ7LM480  204Q  /dev/sda  -                                      /dev/sg0
[0:0:1:0]  disk  ATA  SAMSUNG MZ7LM480  204Q  /dev/sdb  -                                      /dev/sg1
[0:0:2:0]  disk  ATA  SAMSUNG MZ7LM480  204Q  /dev/sdc  SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102216  /dev/sg2
[0:0:3:0]  disk  ATA  SAMSUNG MZ7LM480  204Q  /dev/sdd  SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102214  /dev/sg3


# ls -l /dev/disk/by-id/*SAMSUNG*
lrwxrwxrwx. 1 root root  9  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102212 -> ../../sdb
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102212-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102212-part2 -> ../../sdb2
lrwxrwxrwx. 1 root root  9  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102214 -> ../../sdd
lrwxrwxrwx. 1 root root  9  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102216 -> ../../sdc
lrwxrwxrwx. 1 root root  9  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J10 -> ../../sda
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J10-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/ata-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J10-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/dm-name-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102214 -> ../../dm-6
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/dm-name-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102216 -> ../../dm-5
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/dm-uuid-mpath-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102214 -> ../../dm-6
lrwxrwxrwx. 1 root root 10  9. Okt 12:07 /dev/disk/by-id/dm-uuid-mpath-SAMSUNG_MZ7LM480HMHQ-5_S2UJNX0J102216 -> ../../dm-5


can someone explain this behaviour? what is the _intended_ configuration?

thx
matthias



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] multipath configuration for local disks?

2017-10-10 Thread Matthias Leopold



Am 2017-10-10 um 12:53 schrieb Yaniv Kaul:



On Oct 10, 2017 12:23 PM, "Matthias Leopold" 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


hi,

i'm using three different generations of hardware for my oVirt
hypervisor hosts. they are not exactly the "same", but very "similar".

- all were installed with oVirt Node 4.1.x installers
- all have (2-4) local SSD disks for oVirt Node OS
- all were configured with manual partitioning and SW RAID 1 with
two SSD disks for oVirt Node OS

after initializing the third generation of hosts (with 4 SSD disks)
i noticed the difference in multipath configuration for the local
disks. 



I may be wrong but it rarely matters on local disks. Unless they have 
some kind of active-active (multi-channel?) controllers.

Y.


thanks, you're probably right. i was asking
(a) for the sake of consistency
(b) out of curiosity
(c) because the multipathing adds an additional level of abstraction in 
the whole setup (which includes SW RAID)


strictly speaking none of these are show-stoppers, i'll manage...

matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Matthias Leopold

Hi,

is there a way to upload disk images (not OVF files, not ISO files) to 
oVirt storage domains via CLI? I need to upload an 800GB file and this is 
not really comfortable via browser. I looked at ovirt-shell and 
https://www.ovirt.org/develop/release-management/features/storage/image-upload/, 
but i didn't find an option in either of them.


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk image upload via CLI?

2017-09-13 Thread Matthias Leopold

i tried it again twice:

when using upload_disk.py from the ovirt engine host itself the disk 
upload succeeds (despite a "503 Service Unavailable" after "Completed 
100%" at the end of the script output)


another try was from an ovirt-sdk installation directly on my ubuntu 
desktop (yesterday i tried it from a centos VM on my desktop machine). 
this failed again, this time with "socket.error: [Errno 32] Broken pipe" 
after reaching "200 OK Completed 100%". at that moment the imageio-proxy 
log again shows the 403 error


what's the difference between accessing the API from the engine host and 
from "outside" in this case?


thx
matthias

Am 2017-09-12 um 16:42 schrieb Matthias Leopold:

Thanks, i tried this script and it _almost_ worked ;-)

i uploaded two images i created with
qemu-img create -f qcow2 -o preallocation=full
and
qemu-img create -f qcow2 -o preallocation=falloc

for initial_size and provisioned_size i took the value reported by 
"qemu-img info" in "virtual size" (same as "disk size" in this case)


the upload goes to 100% and then fails with

200 OK Completed 100%
Traceback (most recent call last):
   File "./upload_disk.py", line 157, in 
 headers=upload_headers,
   File "/usr/lib64/python2.7/httplib.py", line 1017, in request
 self._send_request(method, url, body, headers)
   File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
 self.endheaders(body)
   File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
 self._send_output(message_body)
   File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
 self.send(msg)
   File "/usr/lib64/python2.7/httplib.py", line 840, in send
 self.sock.sendall(data)
   File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
 v = self.send(data[count:])
   File "/usr/lib64/python2.7/ssl.py", line 712, in send
 v = self._sslobj.write(data)
socket.error: [Errno 104] Connection reset by peer

in web GUI the disk stays in Status: "Transferring via API"
it can only be removed when manually unlocking it (unlock_entity.sh)

engine.log tells nothing interesting

i attached the last lines of ovirt-imageio-proxy/image-proxy.log and 
ovirt-imageio-daemon/daemon.log (from the executing node)


the HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look too 
nice to me


can you explain what happens?

ovirt engine is 4.1.5
ovirt node is 4.1.3 (is that a problem?)

thx
matthias



Am 2017-09-12 um 13:15 schrieb Fred Rolland:

Hi,

You can check this example:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py 



Regards,
Fred

On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

is there a way to upload disk images (not OVF files, not ISO files)
to oVirt storage domains via CLI? I need to upload an 800GB file and
this is not really comfortable via browser. I looked at ovirt-shell
and

https://www.ovirt.org/develop/release-management/features/storage/image-upload/ 


<https://www.ovirt.org/develop/release-management/features/storage/image-upload/>, 


but i didn't find an option in either of them.

thx
matthias

___
    Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>






--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Disk image upload via CLI?

2017-09-12 Thread Matthias Leopold

Thanks, i tried this script and it _almost_ worked ;-)

i uploaded two images i created with
qemu-img create -f qcow2 -o preallocation=full
and
qemu-img create -f qcow2 -o preallocation=falloc

for initial_size and provisioned_size i took the value reported by 
"qemu-img info" in "virtual size" (same as "disk size" in this case)


the upload goes to 100% and then fails with

200 OK Completed 100%
Traceback (most recent call last):
  File "./upload_disk.py", line 157, in 
headers=upload_headers,
  File "/usr/lib64/python2.7/httplib.py", line 1017, in request
self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 840, in send
self.sock.sendall(data)
  File "/usr/lib64/python2.7/ssl.py", line 746, in sendall
v = self.send(data[count:])
  File "/usr/lib64/python2.7/ssl.py", line 712, in send
v = self._sslobj.write(data)
socket.error: [Errno 104] Connection reset by peer

in web GUI the disk stays in Status: "Transferring via API"
it can only be removed when manually unlocking it (unlock_entity.sh)

engine.log tells nothing interesting

i attached the last lines of ovirt-imageio-proxy/image-proxy.log and 
ovirt-imageio-daemon/daemon.log (from the executing node)


the HTTP status 403 in ovirt-imageio-daemon/daemon.log doesn't look too 
nice to me


can you explain what happens?

ovirt engine is 4.1.5
ovirt node is 4.1.3 (is that a problem?)

thx
matthias



Am 2017-09-12 um 13:15 schrieb Fred Rolland:

Hi,

You can check this example:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

Regards,
Fred

On Tue, Sep 12, 2017 at 11:49 AM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

is there a way to upload disk images (not OVF files, not ISO files)
to oVirt storage domains via CLI? I need to upload an 800GB file and
this is not really comfortable via browser. I looked at ovirt-shell
and

https://www.ovirt.org/develop/release-management/features/storage/image-upload/

<https://www.ovirt.org/develop/release-management/features/storage/image-upload/>,
but i didn't find an option in either of them.

thx
matthias

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
2017-09-12 16:07:10,046 INFO(Thread-632) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.28s)
2017-09-12 16:07:10,171 INFO(Thread-633) [images] Writing 8388608 bytes at offset 5301600256 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:10,439 INFO(Thread-633) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.27s)
2017-09-12 16:07:10,556 INFO(Thread-634) [images] Writing 8388608 bytes at offset 5309988864 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:10,819 INFO(Thread-634) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.26s)
2017-09-12 16:07:10,924 INFO(Thread-635) [images] Writing 8388608 bytes at offset 5318377472 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:11,219 INFO(Thread-635) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.30s)
2017-09-12 16:07:11,336 INFO(Thread-636) [images] Writing 8388608 bytes at offset 5326766080 to /rhev/data-center/0001-0001-0001-0001-0311/ebb620c9-6dfe-43a8-9867-20b9a93c76b5/images/54b6da51-1c67-42e9-b128-0a218fa1e8b7/e0c1ab33-a817-4207-b1f9-32f1aa4e46be for ticket 1e12aa19-f122-4f6c-bfad-ce84abe2684e
2017-09-12 16:07:11,595 INFO(Thread-636) [web] xxx.yyy.215.2 - PUT /1e12aa19-f122-4f6c-bfad-ce84abe2684e 200 0 (0.26s)
2017-09-12 16:07:11,711 

[ovirt-users] USER_CREATE_SNAPSHOT_FINISHED_FAILURE with Cinder storage stuck

2017-08-21 Thread Matthias Leopold

Hi,

we're experimenting with Cinder/Ceph Storage on oVirt 4.1.3. When we 
tried to snapshot a VM (2 disks on Cinder storage domain) the task never 
finished and now seems to be in an uninterruptible loop. We tried to 
stop it in various (brute force) ways, but the below messages (one of 
the disks as an example) are cluttering engine.log every 10 seconds. We 
tried the following:


- deleting the VM
- restarting ovirt-engine service
- vdsClient -s 0 getAllTasksStatuses on SPM host (no result)
- restarting vdsmd service on SPM host
- /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u engine -d 
engine -c c841c979-70ea-4e06-b9c4-9c5ce014d76d


None of this helped. How do we get rid of this failed transaction?

thx
matthias

2017-08-21 16:40:44,798+02 INFO 
[org.ovirt.engine.core.utils.transaction.TransactionSupport] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] 
transaction rolled back
2017-08-21 16:40:44,799+02 ERROR 
[org.ovirt.engine.core.bll.job.ExecutionHandler] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] 
Exception: org.springframework.dao.DataIntegrityViolationException: 
CallableStatementCallback; SQL [{call insertstep(?, ?, ?, ?, ?, ?, ?, ?, 
?, ?, ?, ?, ?, ?)}]; ERROR: insert or update on table "step" violates 
foreign key constraint "fk_step_job"
2017-08-21 16:40:44,805+02 ERROR 
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Ending 
command 
'org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand' 
with failure.
2017-08-21 16:40:44,807+02 WARN 
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] No 
snapshot was created for VM 'c0235316-81c4-48be-9521-b86b338c7d20' which 
is in LOCKED status
2017-08-21 16:40:44,810+02 INFO 
[org.ovirt.engine.core.utils.transaction.TransactionSupport] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] 
transaction rolled back
2017-08-21 16:40:44,810+02 WARN 
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Trying 
to release exclusive lock which does not exist, lock key: 
'c0235316-81c4-48be-9521-b86b338c7d20VM'
2017-08-21 16:40:44,810+02 INFO 
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Lock 
freed to object 
'EngineLock:{exclusiveLocks='[c0235316-81c4-48be-9521-b86b338c7d20=VM]', 
sharedLocks=''}'
2017-08-21 16:40:44,829+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] 
EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID: 
080af640-bac3-4990-8bf4-6829551b538d, Job ID: 
a3be8af1-8d33-4d35-9672-215ac7c9959f, Call Stack: null, Custom Event ID: 
-1, Message: Failed to complete snapshot 'test' creation for VM ''.
2017-08-21 16:40:44,829+02 ERROR 
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller] 
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Failed 
invoking callback end method 'onFailed' for command 
'c841c979-70ea-4e06-b9c4-9c5ce014d76d' with exception 'null', the 
callback is marked for end method retries








___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] USER_CREATE_SNAPSHOT_FINISHED_FAILURE with Cinder storage stuck

2017-08-22 Thread Matthias Leopold



Am 2017-08-22 um 09:33 schrieb Maor Lipchuk:

On Mon, Aug 21, 2017 at 6:12 PM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at> wrote:

Hi,

we're experimenting with Cinder/Ceph Storage on oVirt 4.1.3. When we tried
to snapshot a VM (2 disks on Cinder storage domain) the task never finished
and now seems to be in an uninterruptible loop. We tried to stop it in
various (brute force) ways, but the below messages (one of the disks as an
example) are cluttering engine.log every 10 seconds. We tried the following:

- deleting the VM
- restarting ovirt-engine service
- vdsClient -s 0 getAllTasksStatuses on SPM host (no result)
- restarting vdsmd service on SPM host
- /usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -u engine -d engine
-c c841c979-70ea-4e06-b9c4-9c5ce014d76d

None of this helped. How do we get rid of this failed transaction?

thx
matthias

2017-08-21 16:40:44,798+02 INFO
[org.ovirt.engine.core.utils.transaction.TransactionSupport]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] transaction
rolled back
2017-08-21 16:40:44,799+02 ERROR
[org.ovirt.engine.core.bll.job.ExecutionHandler] (DefaultQuartzScheduler7)
[080af640-bac3-4990-8bf4-6829551b538d] Exception:
org.springframework.dao.DataIntegrityViolationException:
CallableStatementCallback; SQL [{call insertstep(?, ?, ?, ?, ?, ?, ?, ?, ?,
?, ?, ?, ?, ?)}]; ERROR: insert or update on table "step" violates foreign
key constraint "fk_step_job"
2017-08-21 16:40:44,805+02 ERROR
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Ending
command
'org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand' with
failure.
2017-08-21 16:40:44,807+02 WARN
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] No snapshot
was created for VM 'c0235316-81c4-48be-9521-b86b338c7d20' which is in LOCKED
status
2017-08-21 16:40:44,810+02 INFO
[org.ovirt.engine.core.utils.transaction.TransactionSupport]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] transaction
rolled back
2017-08-21 16:40:44,810+02 WARN
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Trying to
release exclusive lock which does not exist, lock key:
'c0235316-81c4-48be-9521-b86b338c7d20VM'
2017-08-21 16:40:44,810+02 INFO
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Lock freed
to object
'EngineLock:{exclusiveLocks='[c0235316-81c4-48be-9521-b86b338c7d20=VM]',
sharedLocks=''}'
2017-08-21 16:40:44,829+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] EVENT_ID:
USER_CREATE_SNAPSHOT_FINISHED_FAILURE(69), Correlation ID:
080af640-bac3-4990-8bf4-6829551b538d, Job ID:
a3be8af1-8d33-4d35-9672-215ac7c9959f, Call Stack: null, Custom Event ID: -1,
Message: Failed to complete snapshot 'test' creation for VM ''.
2017-08-21 16:40:44,829+02 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler7) [080af640-bac3-4990-8bf4-6829551b538d] Failed
invoking callback end method 'onFailed' for command
'c841c979-70ea-4e06-b9c4-9c5ce014d76d' with exception 'null', the callback
is marked for end method retries







Hi Matthias,

Can you please attach the full engine log contains the first error
occurred so we can trace its origin and fix it?
Does it reproduced constantly?

The engine does not use VDSM tasks to manage Cinder, the engine use
Cinder as an external provider using the COCO infrastructure for async
tasks.
The COCO tasks are managed in the database using the command_entities
table, basically if you will remove all references of the command id
from the command_entities and restart engine you should not see it any
more.
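
A sketch of such a cleanup, using the command id from the log above 
(adjust to your ids, and restart ovirt-engine afterwards):

engine=# delete from command_entities
engine-#   where command_id = 'c841c979-70ea-4e06-b9c4-9c5ce014d76d'
engine-#      or root_command_id = 'c841c979-70ea-4e06-b9c4-9c5ce014d76d';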

Regards,
Maor



Hi Maor,

thanks very much for replying. First i tried cleaning the 
command_entities table and restarting the engine as you suggested. This 
didn't work entirely; these two entries


engine=# select command_id, command_type, root_command_id, status from 
command_entities;
  command_id  | command_type | 
root_command_id| status

--+--+--+
 c841c979-70ea-4e06-b9c4-9c5ce014d76d |  206 | 
c841c979-70ea-4e06-b9c4-9c5ce014d76d | FAILED
 65fa094e-1609-47ea-bf0d-611e3d5b9358 |  206 | 
65fa094e-1609-47ea-bf0d-611e3d5b9358 | FAILED



keep appearing and still cause messages in engine.log like

 2017-08-22 11:54:57,109+02 WARN 
[org.ovirt.engine.core.bll.snapshots.CreateAllSnapshotsFromVmCommand] 
(DefaultQuartzScheduler8) [080af640-bac3-4990-8bf4-6829551b538d] No 
snapshot was created for VM 'c0235316-81c4-48be-9521-b86b338c7d20' which

[ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

hi,

i still don't completely understand the oVirt Node update process and 
the involved rpm packages.


We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as 
available updates 
'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos' 
(i don't want to run release candidates), one of them shows 
'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i like). 
The node that doesn't want to upgrade to '4.1.6-0.1.rc1' lacks the rpm 
package 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch' and only has 
'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'. Also 
the version of ovirt-node-ng-nodectl is '4.1.3-0.20170709.0.el7' instead 
of '4.1.3-0.20170705.0.el7'. This node was the last one i installed and 
it has never had a version update before.


I only began using oVirt starting with 4.1, but already completed minor 
version upgrades of oVirt nodes. IIRC this 'mysterious' 
ovirt-node-ng-image-update package comes into play when updating a node 
for the first time after initial installation. Usually i wouldn't care 
about all of this, but now i have this RC update situation that i don't 
want. How is this supposed to work? How can i resolve it?


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

Hi,

thanks a lot.

So i understand everything is fine with my nodes and i'll wait until the 
update GUI shows the right version to update (4.1.5 at the moment).


Regards
Matthias


Am 2017-08-31 um 14:56 schrieb Yuval Turgeman:

Hi,

oVirt node ng is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a new 
image-update rpm is published, yum update will pull those packages. So 
you have one system that was a fresh install and the others were upgrades.
Next, the post-install script for those image-update rpms installs the 
image-update rpms --justdb into the new image (so running yum update 
in the new image won't try to pull the same version again).
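
A quick way to see which variant a node carries (illustrative commands):

rpm -q ovirt-node-ng-image-update-placeholder ovirt-node-ng-image-update
nodectl info

A fresh install only has the placeholder package; an upgraded node has 
the image-update package instead.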


Regarding the 4.1.6 it's very strange, we'll need to check the repos to 
see why it was published.


As for nodectl, if there are no changes, it won't be updated and you'll 
see an "old" version or a version that doesn't seem to match the 
current image, but it is ok; we are thinking of changing its name to 
make it less confusing.


Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


hi,

i still don't completely understand the oVirt Node update process
and the involved rpm packages.

We have 4 nodes, all running oVirt Node 4.1.3. Three of them show as
available updates

'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'
(i don't want to run release candidates), one of them shows
'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
lacks the rpm package
'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch', only has
'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.
Also the version of ovirt-node-ng-nodectl is
'4.1.3-0.20170709.0.el7' instead of '4.1.3-0.20170705.0.el7'. This
node was the last one i installed and never made a version update
before.

I only began using oVirt starting with 4.1, but already completed
minor version upgrades of oVirt nodes. IIRC this 'mysterious'
ovirt-node-ng-image-update package comes into play when updating a
node for the first time after initial installation. Usually i
wouldn't care about all of this, but now i have this RC update
situation that i don't want. How is this supposed to work? How can i
resolve it?

thx
matthias

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node update question

2017-08-31 Thread Matthias Leopold

Hi,

all of the nodes that have already been updated in the past have

/etc/yum.repos.d/ovirt-4.1-pre-dependencies.repo
/etc/yum.repos.d/ovirt-4.1-pre.repo

i went through the logs in /var/log/ovirt-engine/host-deploy/ and my own 
notes and discovered/remembered that being presented with RC versions 
started on 20170707, when i updated my nodes from 4.1.2 to 
4.1.3-0.3.rc3.20170622082156.git47b4302 (!). probably there was a short 
timespan when an RC version was erroneously published in the wrong repo; 
my nodes "caught" it and dragged it along until today, when i finally 
cared ;-) I moved the /etc/yum.repos.d/ovirt-4.1-pre*.repo files away 
and now everything seems fine
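
concretely (roughly what i did):

mv /etc/yum.repos.d/ovirt-4.1-pre*.repo /root/
yum clean all
yum check-update 'ovirt-node-ng-image-update*'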


Regards
Matthias

Am 2017-08-31 um 15:25 schrieb Yuval Turgeman:

Hi,

Don't quite understand how you got to that 4.1.6 rc, it's only available 
in the pre release repo, can you paste the yum repos that are enabled on 
your system ?


Thanks,
Yuval.

On Thu, Aug 31, 2017 at 4:19 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

thanks a lot.

So i understand everything is fine with my nodes and i'll wait until
the update GUI shows the right version to update (4.1.5 at the moment).

Regards
Matthias


Am 2017-08-31 um 14:56 schrieb Yuval Turgeman:

Hi,

oVirt node ng is shipped with a placeholder rpm preinstalled.
The image-update rpms obsolete the placeholder rpm, so once a
new image-update rpm is published, yum update will pull those
packages.  So you have 1 system that was a fresh install and the
others were upgrades.
Next, the post install script for those image-update rpms will
install --justdb the image-update rpms to the new image (so
running yum update in the new image won't try to pull again the
same version).

Regarding the 4.1.6 it's very strange, we'll need to check the
repos to see why it was published.

As for nodectl, if there are no changes, it won't be updated and
you'll see an "old" version or a version that doesn't seem to be
matching the current image, but it is ok, we are thinking of
changing its name to make it less confusing.

Hope this helps,
Yuval.


On Thu, Aug 31, 2017 at 11:17 AM, Matthias Leopold
<matthias.leop...@meduniwien.ac.at
<mailto:matthias.leop...@meduniwien.ac.at>
<mailto:matthias.leop...@meduniwien.ac.at
<mailto:matthias.leop...@meduniwien.ac.at>>> wrote:

 hi,

 i still don't completely understand the oVirt Node update
process
 and the involved rpm packages.

 We have 4 nodes, all running oVirt Node 4.1.3. Three of
them show as
 available updates

'ovirt-node-ng-image-update-4.1.6-0.1.rc1.20170823083853.gitd646d2f.el7.centos'

 (i don't want to run release candidates), one of them shows
 'ovirt-node-ng-image-update-4.1.5-1.el7.centos' (this is what i
 like). The node that doesn't want to upgrade to '4.1.6-0.1.rc1'
 lacks the rpm package
 'ovirt-node-ng-image-update-4.1.3-1.el7.centos.noarch',
only has

'ovirt-node-ng-image-update-placeholder-4.1.3-1.el7.centos.noarch'.

 Also the version of ovirt-node-ng-nodectl is
 '4.1.3-0.20170709.0.el7' instead of
'4.1.3-0.20170705.0.el7'. This
 node was the last one i installed and never made a version
update
 before.

 I only began using oVirt starting with 4.1, but already
completed
 minor version upgrades of oVirt nodes. IIRC this 'mysterious'
 ovirt-node-ng-image-update package comes into play when
updating a
 node for the first time after initial installation. Usually i
 wouldn't care about all of this, but now i have this RC update
 situation that i don't want. How is this supposed to work?
How can i
 resolve it?

 thx
 matthias

 ___
 Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org> <mailto:Users@ovirt.org
<mailto:Users@ovirt.org>>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>
 <http://lists.ovirt.org/mailman/listinfo/users
    <http://lists.ovirt.org/mailman/listinfo/users>>



-- 
Matthias Leopold

IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241 <tel:%2B43%201%2040160-21241>
Fax: +43 1 40160-921200 <tel:%2B43%201%2040160

Re: [ovirt-users] failed upgrade oVirt node 4.1.3 -> 4.1.5

2017-09-04 Thread Matthias Leopold

thanks, so i'll wait for 4.1.6 before upgrading my other nodes

Regards
matthias

Am 2017-09-03 um 15:57 schrieb Yuval Turgeman:

Hi,

Seems to be a bug that was resolved here https://gerrit.ovirt.org/c/80716/

Thanks,
Yuval.


On Fri, Sep 1, 2017 at 3:55 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


hi,

i'm sorry to write to this list again, but i failed to upgrade a
freshly installed oVirt Node from version 4.1.3 to 4.1.5. it seems
to be a SELinux related problem. i'm attaching imgbased.log +
relevant lines from engine.log.

is the skipped version (4.1.4) the problem?
can i force upgrade to version 4.1.4?

thx
matthias


___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
<http://lists.ovirt.org/mailman/listinfo/users>




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] using oVirt with newer librbd1

2017-10-23 Thread Matthias Leopold

Hi,

we want to use a Ceph cluster as the main storage for our oVirt 4.1.x 
datacenter. We successfully tested using the librbd1-12.2.1-0.el7 package 
from the Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS 
7 on an oVirt virtualization node. Are there any caveats when doing so? 
Will this work in oVirt 4.2?
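
For reference, we simply pointed the hosts at the upstream Ceph repo, 
roughly like this (a sketch; repo path assumed from download.ceph.com's 
layout):

# /etc/yum.repos.d/ceph.repo
[ceph-luminous]
name=Ceph Luminous
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

yum update librbd1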


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Matthias Leopold



Am 2017-10-24 um 14:09 schrieb Konstantin Shalygin:

we want to use a Ceph cluster as the main storage for our oVirt 4.1.x
datacenter. We successfully tested using librbd1-12.2.1-0.el7 package
from Ceph repos instead of the standard librbd1-0.94.5-2.el7 from CentOS
7 in an oVirt virtualization node. Are there any caveats when doing so?
Will this work in oVirt 4.2?


Hello Matthias. Can I ask a separate question?
At this time we are at oVirt 4.1.3.5 and our Ceph cluster is at 11.2.0 
(Kraken). In a few weeks I plan to expand the cluster and I would like to 
upgrade to Ceph 12 (Luminous), for bluestore support.

So my question is: have you tested oVirt with Ceph 12?


Thanks.

--
Best regards,
Konstantin Shalygin



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200

Hi Konstantin,

yes, we have a Ceph 12 cluster and are using librbd1-12.2.1 on our oVirt 
hypervisor hosts, which we installed with CentOS 7 and the Ceph upstream 
repos, not oVirt Node (for this exact purpose). Since 
/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so is 
using /lib64/librbd.so.1, our VMs with disks from the Cinder storage 
domain are using Ceph 12 all the way.
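
A quick way to verify which librbd the rbd storage backend actually uses:

ldd /usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so | grep librbd
rpm -qf /lib64/librbd.so.1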


Are you also using a newer librbd1?

Regards
Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] "Enable Discard" for Ceph/Cinder Disks?

2017-11-27 Thread Matthias Leopold

Hi,

according to http://docs.ceph.com/docs/luminous/rbd/qemu-rbd/ the use of 
Discard/TRIM for Ceph RBD disks is possible. Openstack seems to have 
implemented it 
(https://www.sebastien-han.fr/blog/2015/02/02/openstack-and-ceph-rbd-discard/). 
In oVirt there is no "Enable Discard" option for Cinder disks (whether 
choosing the IDE or VirtIO-SCSI driver), even when i set 
"report_discard_supported = true" in Cinder. Are there plans to 
support this in the future? Can i use it right now with custom 
properties (i've never tried those before)?
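
For completeness, this is the setting i mean, in the backend section of 
cinder.conf (backend name here is just an example):

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
report_discard_supported = true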


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] can't migrate Windows 2012R2 VM

2017-11-07 Thread Matthias Leopold

Hi,

i'm experiencing reproducible problems migrating Windows 2012R2 VMs in 
oVirt 4.1 environments. I can't migrate two different Windows 2012R2 VMs 
in two different oVirt environments. engine.log in this situation is the 
same in both cases, i'm attaching it.


lines like
"Failed to destroy VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0' because VM 
does not exist, ignoring"
"VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0'(ovirt-test03_win2012) was 
unexpectedly detected as 'Down' on VDS 'xxx'"

don't sound too confidence-inspiring.

Windows 2008 and Windows 2016 VMs can be migrated with no problem.

- oVirt version is 4.1.6
- VirtIO drivers and oVirt guest agent in windows guest were installed 
from oVirt-toolsSetup-4.1-5.fc24.iso (one VM is using VirtIO devices, 
other VM isn't)
- oVirt cluster migration settings are default in a 4.1 install 
(migration policy: legacy, i also tried: minimal downtime, post copy 
migration, no avail...)


thx for any advice
matthias


2017-11-07 11:25:09,706+01 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-7) 
[68971ee3-707e-4bf7-91ce-9826fc254d64] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[d4ddd4d1-2de4-4ace-900d-76552cadefb0=VM]', 
sharedLocks=''}'
2017-11-07 11:25:09,878+01 INFO  
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
Running command: MigrateVmToServerCommand internal: false. Entities affected :  
ID: d4ddd4d1-2de4-4ace-900d-76552cadefb0 Type: VMAction group MIGRATE_VM with 
role type USER
2017-11-07 11:25:10,225+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', 
hostId='f162d746-3541-4b30-8e00-dfc91799e054', 
vmId='d4ddd4d1-2de4-4ace-900d-76552cadefb0', 
srcHost='ov-test-04-02.ovn.some.domain', 
dstVdsId='d8794e95-3f89-4b1a-9bec-12ccf6db0cb1', 
dstHost='ov-test-04-01.ovn.some.domain:54321', migrationMethod='ONLINE', 
tunnelMigration='false', migrationDowntime='0', autoConverge='false', 
migrateCompressed='false', consoleAddress='null', maxBandwidth='null', 
enableGuestEvents='false', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='null'}), log id: 71a17aae
2017-11-07 11:25:10,225+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
START, MigrateBrokerVDSCommand(HostName = ov-test-04-02, 
MigrateVDSCommandParameters:{runAsync='true', 
hostId='f162d746-3541-4b30-8e00-dfc91799e054', 
vmId='d4ddd4d1-2de4-4ace-900d-76552cadefb0', 
srcHost='ov-test-04-02.ovn.some.domain', 
dstVdsId='d8794e95-3f89-4b1a-9bec-12ccf6db0cb1', 
dstHost='ov-test-04-01.ovn.some.domain:54321', migrationMethod='ONLINE', 
tunnelMigration='false', migrationDowntime='0', autoConverge='false', 
migrateCompressed='false', consoleAddress='null', maxBandwidth='null', 
enableGuestEvents='false', maxIncomingMigrations='2', 
maxOutgoingMigrations='2', convergenceSchedule='null'}), log id: 13e47e7b
2017-11-07 11:25:11,409+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
FINISH, MigrateBrokerVDSCommand, log id: 13e47e7b
2017-11-07 11:25:11,423+01 INFO  
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 71a17aae
2017-11-07 11:25:11,441+01 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-6-thread-6) [68971ee3-707e-4bf7-91ce-9826fc254d64] 
EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 
68971ee3-707e-4bf7-91ce-9826fc254d64, Job ID: 
c586546c-6bf0-418e-ba75-3ebe37eea0dd, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Migration started (VM: ovirt-test03_win2012, Source: 
ov-test-04-02, Destination: ov-test-04-01, User: admin@internal-authz). 
2017-11-07 11:25:11,494+01 INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(ForkJoinPool-1-worker-2) [] VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0' was 
reported as Down on VDS 'd8794e95-3f89-4b1a-9bec-12ccf6db0cb1'(ov-test-04-01)
2017-11-07 11:25:11,495+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(ForkJoinPool-1-worker-2) [] START, DestroyVDSCommand(HostName = ov-test-04-01, 
DestroyVmVDSCommandParameters:{runAsync='true', 
hostId='d8794e95-3f89-4b1a-9bec-12ccf6db0cb1', 
vmId='d4ddd4d1-2de4-4ace-900d-76552cadefb0', force='false', secondsToWait='0', 
gracefully='false', reason='', ignoreNoVm='true'}), log id: 7f5e7600
2017-11-07 11:25:12,521+01 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
(ForkJoinPool-1-worker-2) [] Failed to destroy VM 
'd4ddd4d1-2de4-4ace-900d-76552cadefb0' because VM does not 

Re: [ovirt-users] can't migrate Windows 2012R2 VM

2017-11-09 Thread Matthias Leopold

Hi Arsene,

i have to apologize, your hint about the QXL driver was right.
after searching in the wrong places most of the time i finally 
looked into the logs of the hypervisor host and found


libvirtError: internal error: unable to execute QEMU command 'migrate': 
qxl: guest bug: command not in ram bar


which led me to https://bugzilla.redhat.com/show_bug.cgi?id=1446147

you can't use 
https://www.spice-space.org/download/windows/qxl-wddm-dod/qxl-wddm-dod-0.18/ 
on Windows 2012R2, but after completely uninstalling the Red Hat QXL 
driver the VM is finally able to migrate
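
For anyone hitting this, the error is easy to spot on the source 
hypervisor, e.g.:

grep 'command not in ram bar' /var/log/libvirt/qemu/*.log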


thanks
matthias

Am 2017-11-08 um 10:38 schrieb Matthias Leopold:

Hi Arsene,

thanks for your feedback. I doubt that a display device driver in the 
guest can be responsible for the mentioned migration problems in the 
virtualization environment. Nevertheless i installed 
https://www.spice-space.org/download/windows/spice-guest-tools/spice-guest-tools-latest.exe 
in my Windows 2012R2 VM (the driver in your link didn't install). As i 
expected this didn't fix the migration problem.


matthias

Am 2017-11-07 um 14:49 schrieb Arsène Gschwind:

Hi Mathias,

I had a similar problem with Windows Server 2016 and could resolve it 
by installing the latest driver for QXL found at 
https://www.spice-space.org/download/windows/qxl-wddm-dod/qxl-wddm-dod-0.18/. 
This should also work for Windows 2012R2.


Arsène


On 11/07/2017 12:47 PM, Matthias Leopold wrote:

Hi,

i'm experiencing reproducible problems migrating Windows 2012R2 VMs 
in oVirt 4.1 environments. I can't migrate two different Windows 
2012R2 VMs in two different oVirt environments. engine.log in this 
situation is the same in both cases, i'm attaching it.


lines like
"Failed to destroy VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0' because 
VM does not exist, ignoring"
"VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0'(ovirt-test03_win2012) was 
unexpectedly detected as 'Down' on VDS 'xxx'"

don't sound too confidence-inspiring.

Windows 2008 and Windows 2016 VMs can be migrated with no problem.

- oVirt version is 4.1.6
- VirtIO drivers and oVirt guest agent in windows guest were 
installed from oVirt-toolsSetup-4.1-5.fc24.iso (one VM is using 
VirtIO devices, other VM isn't)
- oVirt cluster migration settings are default in a 4.1 install 
(migration policy: legacy, i also tried: minimal downtime, post copy 
migration, no avail...)


thx for any advice
matthias




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  | http://its.unibas.ch <http://its.unibas.ch/>
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] can't migrate Windows 2012R2 VM

2017-11-08 Thread Matthias Leopold

Hi Arsene,

thanks for your feedback. I doubt that a display device driver in the 
guest can be responsible for the mentioned migration problems in the 
virtualization environment. Nevertheless i installed 
https://www.spice-space.org/download/windows/spice-guest-tools/spice-guest-tools-latest.exe 
in my Windows 2012R2 VM (the driver in your link didn't install). As i 
expected this didn't fix the migration problem.


matthias

Am 2017-11-07 um 14:49 schrieb Arsène Gschwind:

Hi Mathias,

I had a similar problem with Windows Server 2016 and could resolve it by 
installing the latest driver for QXL found at 
https://www.spice-space.org/download/windows/qxl-wddm-dod/qxl-wddm-dod-0.18/. 
This should also work for Windows 2012R2.


Arsène


On 11/07/2017 12:47 PM, Matthias Leopold wrote:

Hi,

i'm experiencing reproducible problems migrating Windows 2012R2 VMs in 
oVirt 4.1 environments. I can't migrate two different Windows 2012R2 
VMs in two different oVirt environments. engine.log in this situation 
is the same in both cases, i'm attaching it.


lines like
"Failed to destroy VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0' because 
VM does not exist, ignoring"
"VM 'd4ddd4d1-2de4-4ace-900d-76552cadefb0'(ovirt-test03_win2012) was 
unexpectedly detected as 'Down' on VDS 'xxx'"

don't sound too confidence-inspiring.

Windows 2008 and Windows 2016 VMs can be migrated with no problem.

- oVirt version is 4.1.6
- VirtIO drivers and oVirt guest agent in windows guest were installed 
from oVirt-toolsSetup-4.1-5.fc24.iso (one VM is using VirtIO devices, 
other VM isn't)
- oVirt cluster migration settings are default in a 4.1 install 
(migration policy: legacy, i also tried: minimal downtime, post copy 
migration, no avail...)


thx for any advice
matthias




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--

*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  | http://its.unibas.ch <http://its.unibas.ch/>
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-24 Thread Matthias Leopold



Am 2017-10-24 um 15:11 schrieb Konstantin Shalygin:

On 10/24/2017 07:26 PM, Matthias Leopold wrote:

yes, we have a Ceph 12 Cluster and are using librbd1-12.2.1 on oVirt 
Hypervisor Hosts, which we installed with CentOS 7 and Ceph 
upstream repos, not oVirt Node (for this exact purpose).

On oVirt Hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64
Since 
/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so is 
using /lib64/librbd.so.1 our VMs with disks from Cinder storage domain 
are using Ceph 12 all the way. 
Our OpenStack cinder is openstack-cinder-10.0.0-1.el7.noarch with 
librbd1-10.2.3-0.el7.x86_64
What version of Cinder should I have to work with Ceph 12? Or should I 
just upgrade python-rbd/librados/librbd1/etc.?


I'll talk to my colleague, who is the Ceph expert, about this tomorrow.

Regards
Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] using oVirt with newer librbd1

2017-10-25 Thread Matthias Leopold



Am 2017-10-24 um 15:11 schrieb Konstantin Shalygin:

On 10/24/2017 07:26 PM, Matthias Leopold wrote:

yes, we have a Ceph 12 Cluster and are using librbd1-12.2.1 on oVirt 
Hypervisor Hosts, which we installed with CentOS 7 and Ceph 
upstream repos, not oVirt Node (for this exact purpose).

On oVirt Hypervisor hosts we use librbd1-0.94.5-1.el7.x86_64
Since 
/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so is 
using /lib64/librbd.so.1 our VMs with disks from Cinder storage domain 
are using Ceph 12 all the way. 
Our OpenStack cinder is openstack-cinder-10.0.0-1.el7.noarch with 
librbd1-10.2.3-0.el7.x86_64


we're also using cinder from the openstack ocata release.

the point is
a) we didn't upgrade, but started from scratch with ceph 12
b) we didn't test all of the new features in ceph 12 (eg. EC pools for 
RBD devices) in connection with cinder yet


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] managing local users in 4.2 ?

2018-05-04 Thread Matthias Leopold

Hi,

i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool 
user add" (like i did in oVirt 4.1.9). the command worked ok, but the 
created user wasn't visible in the web gui. i then used the "add" button 
in admin portal to add the already existing user and after that the user 
was visible. i didn't have to do that in 4.1.9; the "add" button was 
already there then, but i didn't know what to do with it. how did 
managing local users change in 4.2?


thx
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] managing local users in 4.2 ?

2018-05-04 Thread Matthias Leopold

Am 2018-05-04 um 12:36 schrieb Matthias Leopold:

Hi,

i tried to create a local user in oVirt 4.2 with "ovirt-aaa-jdbc-tool 
user add" (like i did in oVirt 4.1.9). the command worked ok, but the 
created user wasn't visible in the web gui. i then used the "add" button 
in admin portal to add the already existing user and after that the user 
was visible. i didn't have to do that in 4.1.9; the "add" button was 
already there then, but i didn't know what to do with it. how did 
managing local users change in 4.2?




ok, i got it: only after setting actual permissions for a user does 
he/she automatically appear in Admin Portal - Administration - Users. 
this was different in 4.1.9 IIRC
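
For the archives, the full sequence that works for me in 4.2 (username 
and date illustrative):

ovirt-aaa-jdbc-tool user add jdoe --attribute=firstName=John
ovirt-aaa-jdbc-tool user password-reset jdoe --password-valid-to="2025-01-01 00:00:00Z"
# then in admin portal: "add" the user from the internal profile and
# assign a permission/role - only after that the user shows up under
# Administration - Users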


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.2.4: Enable only strong ciphers/Disable TLS versions < 1.2

2018-06-26 Thread Matthias Leopold

Hi,

i decided to update my test environment (4.2.2) today and noticed oVirt 
4.2.4 is out ;-)


i have some dumb questions concerning
- BZ 1582527 Enable only strong ciphers from engine to VDSM 
communication for hosts in cluster level >= 4.2

- BZ 1577593 Disable TLS versions < 1.2 for hosts with cluster level >= 4.1

Is simply updating a host from 4.2.2 to 4.2.4 enough to apply the 
changes mentioned above?
Or do i have to reinstall hosts in addition to upgrading? Before or 
after the upgrade?


My cluster was on cluster level 4.2 when i started.
My hosts are type: Enterprise Linux (CentOS)
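
For reference, how i plan to verify it afterwards on a host (assuming 
vdsm's usual port 54321; "myhost" illustrative):

openssl s_client -connect myhost:54321 -tls1 < /dev/null     # should fail now
openssl s_client -connect myhost:54321 -tls1_2 < /dev/null   # should still work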

thx
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/REL7JFGVC3D263USMF73HK2GIFNFND5I/


[ovirt-users] Snapshot error with Cinder/Ceph disk

2018-06-27 Thread Matthias Leopold

Hi,

i'm having problems with snapshotting Cinder/Ceph disks since upgrading 
to 4.2. Observed behavior has changed between 4.2.2 and 4.2.4.


With oVirt 4.2.2 and Cinder 11.1.0
- oVirt snapshot fails (according to oVirt), but is listed in GUI
- disk snapshots are visible in the oVirt storage domain tab and Cinder CLI
- first try to remove the oVirt snapshot fails (according to oVirt), but 
disk snapshots are removed from oVirt storage domain tab and Cinder CLI

- second try to remove oVirt snapshot succeeds

With oVirt 4.2.4 and Cinder 11.1.1
- oVirt snapshot fails "completely"
- in Cinder logs i can see that disk snapshots are created and 
immediately deleted
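
The Cinder side can be cross-checked with the CLI, e.g. (volume id 
illustrative, taken from the log below):

cinder snapshot-list --volume-id e97009e5-c712-4199-9664-572eaba268dc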


oVirt error log message is the same in both cases: "Failed in 
'SnapshotVDS' method"


I'm attaching logs from oVirt engine from the latter case.

thx for any advice
matthias


2018-06-27 16:19:18,550+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] (default 
task-3) [73adb039-22cc-47b5-9d0f-3620a12df43f] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]', 
sharedLocks=''}'
2018-06-27 16:19:19,186+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotForVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateSnapshotForVmCommand internal: false. Entities affected :  ID: 
4a8c9902-f9ab-490f-b1dd-82d9aee63b5f Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2018-06-27 16:19:19,208+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] EVENT_ID: FREEZE_VM_INITIATED(10,766), 
Freeze of guest filesystems on VM ovirt-test01.srv was initiated.
2018-06-27 16:19:19,209+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] START, FreezeVDSCommand(HostName = 
ov-test-04-01, 
VdsAndVmIDVDSParametersBase:{hostId='d8794e95-3f89-4b1a-9bec-12ccf6db0cb1', 
vmId='4a8c9902-f9ab-490f-b1dd-82d9aee63b5f'}), log id: 2d55f627
2018-06-27 16:19:19,259+02 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.FreezeVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] FINISH, FreezeVDSCommand, log id: 
2d55f627
2018-06-27 16:19:19,262+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] EVENT_ID: FREEZE_VM_SUCCESS(10,767), 
Guest filesystems on VM ovirt-test01.srv have been frozen successfully.
2018-06-27 16:19:19,292+02 INFO  
[org.ovirt.engine.core.bll.snapshots.CreateSnapshotDiskCommand] 
(EE-ManagedThreadFactory-engine-Thread-22973) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateSnapshotDiskCommand internal: true. Entities affected :  ID: 
4a8c9902-f9ab-490f-b1dd-82d9aee63b5f Type: VMAction group 
MANIPULATE_VM_SNAPSHOTS with role type USER
2018-06-27 16:19:19,359+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommand] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-10) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Running command: 
CreateCinderSnapshotCommand internal: true. Entities affected :  ID: 
e97009e5-c712-4199-9664-572eaba268dc Type: StorageAction group 
CONFIGURE_VM_STORAGE with role type USER
2018-06-27 16:19:20,228+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-22973) [] EVENT_ID: 
USER_CREATE_SNAPSHOT(45), Snapshot 'disk1_snap' creation for VM 
'ovirt-test01.srv' was initiated by admin@internal-authz.
2018-06-27 16:19:22,322+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateCinderSnapshot' id: 
'e4561612-d000-47f3-980e-1c05ed813f88' child commands '[]' executions were 
completed, status 'SUCCEEDED'
2018-06-27 16:19:22,322+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.cinder.CreateCinderSnapshotCommandCallback]
 (EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateCinderSnapshot' id: 
'e4561612-d000-47f3-980e-1c05ed813f88' Updating status to 'SUCCEEDED', The 
command end method logic will be executed by one of its parent commands.
2018-06-27 16:19:22,332+02 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-96) 
[73adb039-22cc-47b5-9d0f-3620a12df43f] Command 'CreateSnapshotDisk' id: 
'378e2ffc-352b-4318-b34b-6c46a7fc15d8' child commands 
'[e4561612-d000-47f3-980e-1c05ed813f88]' executions were completed, status 
'SUCCEEDED'
2018-06-27 16:19:22,332+02 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 

[ovirt-users] restricting release version when upgrading/installing Linux Enterprise hosts?

2018-04-29 Thread Matthias Leopold

Hi,

is it possible to restrict the oVirt version when installing or 
upgrading CentOS hypervisor hosts? let's say: 4.2.3 is already released, 
but i want to update/install certain hosts only to version 4.2.2 to have 
the same version in the whole cluster/data center. i know i could do 
this manually with yum by specifying rpm package versions, but is there 
an "oVirt way"? or is this not necessary at all, because newer hosts will 
always work smoothly with an older engine and other hosts? still i think 
this is a topic that must be of interest to others.
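
The manual way i mean would be something like pinning packages with the 
versionlock plugin (a sketch):

yum install yum-plugin-versionlock
yum versionlock add 'vdsm*'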


besides that: please excuse my impatience sometimes, my upgrade of our 
oVirt installation (4.1.9 to 4.2.2) worked perfectly, even with a very 
customized cinder/ceph setup. thanks a lot for this great software!


matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] restricting release version when upgrading/installing Linux Enterprise hosts?

2018-04-30 Thread Matthias Leopold

Thanks!

Maybe the cinder/ceph setup is not so "very" customized, but we are 
using a current librbd1 (12.2.x) instead of the version in CentOS 7. For 
this we had to switch from upstream Ceph RPMs (depending on 
lttng-ust/userspace-rcu from epel repo) to self-compiled Ceph RPMs 
(depending on lttng-ust/userspace-rcu from centos-ceph-luminous repo) on 
virtualization hosts when upgrading to oVirt 4.2. We have been using a 
Ceph 12.2.x cluster as storage with oVirt 4.1 (now 4.2) for quite some 
time with good results. Of course we would be happy to see further 
Cinder integration in oVirt ("Future Work?" on 
https://ovirt.org/develop/release-management/features/storage/cinder-integration/), 
but we understand that oVirt is focused on Gluster.


matthias

Am 2018-04-30 um 11:12 schrieb Fred Rolland:

Hi,
Newer Vdsm will work with older engine, so in your case, I don't see any 
reason not to update the Vdsm.


BTW, can you describe what do you mean by "a very customized cinder/ceph 
setup" ?


Thanks,
Fred

On Sun, Apr 29, 2018 at 3:55 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

is it possible to restrict the oVirt version when installing or
upgrading CentOS hypervisor hosts? let's say: 4.2.3 is already
released, but i want to update/install certain hosts only to version
4.2.2 to have the same version in the whole cluster/data center. i
know i could to this manually with yum and specifying rpm package
versions but is there an "oVirt way"? or is this not necessary at
all and newer hosts will always work smoothly with older engine and
other hosts? still i think this is a topic that must be of interest
to others.

besides that: please excuse my impatience sometimes, my upgrade of
our oVirt installation (4.1.9 to 4.2.2) worked perfectly, even with
a very customized cinder/ceph setup. thanks a lot for this great
software!

matthias

___
Users mailing list
Users@ovirt.org <mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
    <http://lists.ovirt.org/mailman/listinfo/users>




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] disk cache mode?

2017-10-20 Thread Matthias Leopold

Hi,

is there an option to set the disk cache mode for disks in oVirt 4.1?

thx
matthias


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] disk cache mode?

2017-10-20 Thread Matthias Leopold



Am 2017-10-20 um 13:29 schrieb Matthias Leopold:

Hi,

is there an option to set the disk cache mode for disks in oVirt 4.1?

i'm talking about KVM disk definitions of course

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Caching

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.2 CEPH support

2018-01-09 Thread Matthias Leopold



Am 2018-01-08 um 23:32 schrieb ~Stack~:

On 01/08/2018 07:15 AM, Gianluca Cecchi wrote:

Probably he refers to this blog:
https://rhelblog.redhat.com/2018/01/04/red-hat-virtualization-4-2-beta-is-live/

with:
"
*Support for Ceph via iSCSI* – The Ceph iSCSI target has been tested and
certified as a storage domain for virtual machines. This provides more
infrastructure and deployment choices for engineers and architects.
"

It seems to be a documented feature that didn't get any mention in the 
oVirt 4.2 release notes:
https://ovirt.org/release/4.2.0/

But I think in general, given a version, it is not guaranteed that 
what's in RHEV maps to what's in oVirt and vice versa.
I don't know if this one about Ceph via iSCSI is one of them.


ErrrWHAA???

If Ceph support is in oVirt, I am about to be extremely excited. I just 
racked the hardware for a new oVirt install today and the Ceph gear 
is showing up in a few weeks. I was planning on setting up a dedicated 
NFS server for VMs, essentially having two storage domains, but if I can 
just have Ceph... I would be a very happy sysadmin!

~Stack~



just for the records:
we are running a productive oVirt 4.1 cluster with storage on a Ceph 
12.2 cluster connected as an external provider, type: Openstack Volume 
(= Cinder)


Matthias











___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VirtIO-SCSI and viodiskcache custom property

2018-01-19 Thread Matthias Leopold

Hi,

is there a reason why the viodiskcache custom property isn't honored 
when using VirtIO-SCSI?


On a Cinder (Ceph) disk "viodiskcache=writeback" is ignored with 
VirtIO-SCSI and honored when using VirtIO.


On an iSCSI disk "viodiskcache=writeback" is ignored with VirtIO-SCSI 
and the VM can't be started when using VirtIO with "unsupported 
configuration: native I/O needs either no disk cache or directsync cache 
mode, QEMU will fallback to aio=threads"


We actually want to use "viodiskcache=writeback" with Cinder (Ceph) disks.
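
(I checked what libvirt actually applies by dumping the domain on the 
host; a sketch, the VM name is an example:

  virsh -r dumpxml testvm | grep '<driver '
  # honored:  <driver name='qemu' type='raw' cache='writeback' io='threads'/>
  # ignored:  <driver name='qemu' type='raw' cache='none' io='native'/>
)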

oVirt version: 4.1.8

Thanks
Matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VirtIO-SCSI and viodiskcache custom property

2018-01-22 Thread Matthias Leopold



Am 2018-01-20 um 19:54 schrieb Yaniv Kaul:



On Jan 19, 2018 3:29 PM, "Matthias Leopold" 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

is there a reason why the viodiskcache custom property isn't honored
when using VirtIO-SCSI?

On a Cinder (Ceph) disk "viodiskcache=writeback" is ignored with
VirtIO-SCSI and honored when using VirtIO.

On an iSCSI disk "viodiskcache=writeback" is ignored with
VirtIO-SCSI and the VM can't be started when using VirtIO with
"unsupported configuration: native I/O needs either no disk cache or
directsync cache mode, QEMU will fallback to aio=threads"

We actually want to use "viodiskcache=writeback" with Cinder (Ceph)
disks.


That's because on block storage we use native io and not threads. I 
assume the hook needs to change to use native io in this case.

Y.


Thank you, but i still don't quite get what's missing.

We want to use the combination of Cinder (Ceph) + cache=writeback + 
VirtIO-SCSI. Would this be possible if aio=native was used (which is not 
configurable)? With iSCSI storage you still get cache=none when using 
VirtIO-SCSI and "viodiskcache=writeback" (and aio=native). Should i file 
a bug report for the Cinder disk situation?


Thanks again
Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] effectiveness of "discard=unmap"

2018-02-12 Thread Matthias Leopold



Am 2018-02-12 um 13:40 schrieb Idan Shaby:
On Mon, Feb 12, 2018 at 1:55 PM, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi Idan,

thanks for your answer. But i'm still confused, because i thought
that the content of /sys/block/dm-X/queue/discard* in the VM OS
should depend on the setting of the "discard=(unmap|ignore)" setting
in the qemu-kvm command. Unexpectedly it's the same in both cases
(it's >0, saying discard is 'on'). I was then trying to inquire
about the TRIM/UNMAP capability of block devices in the VM with
"sdparm -p lbp /dev/sdx", but i always get "Logical block
provisioning (SBC) mode subpage failed".

The file /sys/block/dm-X/queue/discard_max_bytes in sysfs tells you 
whether your underlying storage supports discard.
The flag discard=unmap of the VM in qemu means that qemu will not throw 
away the UNMAP commands coming from the guest OS (by default it does 
throw them away).

From what I know, the file in sysfs and the VM flag are not related.


Thank you, i will now finally accept it ;-)

Just for the records: This is where i got my info from: 
https://chrisirwin.ca/posts/discard-with-kvm/
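
And one more note for the records: what finally makes the difference 
observable is the allocation on the storage side; a sketch for a 
Ceph-backed disk (pool/volume names are examples):

  # in the guest
  fstrim -v /
  # on the Ceph side, before and after - "USED" should shrink when
  # discard=unmap is in effect
  rbd du ovirt/volume-xxxx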


Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] effectiveness of "discard=unmap"

2018-02-08 Thread Matthias Leopold

Hi,

i'm sorry to bother you again with my ignorance of the DISCARD feature 
for block devices in general.


after finding several ways to enable "discard=unmap" for oVirt disks 
(via standard GUI option for iSCSI disks or via "diskunmap" custom 
property for Cinder disks) i wanted to check in the guest for the 
effectiveness of this feature. to my surprise i couldn't find a 
difference between Linux guests with and without "discard=unmap" enabled 
in the VM. "lsblk -D" reports the same in both cases and also 
fstrim/blkdiscard commands appear to work with no difference. Why is 
this? Do i have to look at the underlying storage to find out what 
really happens? Shouldn't this be visible in the guest OS?


thx
matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install Windows VM issues

2018-02-19 Thread Matthias Leopold

Hi Markus,

you don't need a second CD Rom or floppy drive. Choose "Run once" - 
"Boot Options" - "Attach CD" to attach the Windows ISO. When the install 
process gets to detecting storage devices you have to choose "Change 
CD", where you insert the VirtIO ISO. When the hard disk is detected you 
revert back to the Windows Installer ISO for the rest of the install 
process.


Good luck
Matthias

Am 2018-02-19 um 08:58 schrieb markus.schauf...@ooe.gv.at:

Hi!

I’m new here – hope you can forgive my „newbie questions“.

I want to install a Server 2016 – so I uploaded both the Windows ISO and 
the virtio drivers iso to the ISO Domain location. In the VM Options I 
can choose both ISO files.


But as referred to in a howto, I need to use a floppy device with a flv 
file. I found the FLV drivers file, but I cannot find any floppy device 
– there’s no option to choose.

So I tried to add a second CD-ROM, because in Proxmox that worked. But 
I cannot find any option to add a second CD-ROM either.


Any idea how I can provide the drivers for the windows installation?

Thanks for any help!

Markus



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install Windows VM issues

2018-02-19 Thread Matthias Leopold



Am 2018-02-19 um 09:51 schrieb markus.schauf...@ooe.gv.at:

Hi Matthias,

thanks for your quick response.
I had already tried to use "Change CD" - but there's no effect at all. I also 
couldn't find any error messages in the log files.

Also the "run once" option using the floppy does not work as there's simply no 
floppy in the setup guide visible (or any other device with the drivers on it).


There's no need for a floppy device.

Steps to take (with oVirt 4.1.9 and Virtio-SCSI disk in VM):
1. "Run once" - "Boot Options" - "Attach CD" to attach the Windows 
2016R2 ISO

2. Windows setup starts in console window
3. Choose "Install Windows only"
4. Windows setup presents "Where do you want to install Windows?"
5. oVirt "Change CD" - choose oVirt-toolsSetup-4.15.fc24.iso (has to be 
in ISO domain)
6. Windows setup "Load driver" - "No signed drivers found" - OK - 
"Browse" - navigate to CD ROM drive "vioscsi - win2016r2 - amd64" folder 
- driver is highlighted - Next
7. Windows setup presents "Drive 0 Unallocated Space" (with warning: 
"Windows can't be installed on this device")

8. oVirt "Change CD" - choose Windows 2016R2 ISO
9. Windows setup "Refresh"
10. Warning disappears - Press "Next"

Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Uploading big raw disks into storage

2018-01-22 Thread Matthias Leopold



Am 2018-01-22 um 11:36 schrieb Gabriel Stein:

Hello!

I'm still migrating the VMs from Proxmox to oVirt, migrating all disks, 
one by one.


Google Chrome no longer accepts the certificates from ovirt-engine; 
Firefox is OK.


Error (I get this right at the start of the upload in Google Chrome; 
in Firefox it works):

Unable to upload image to disk a-b-c due to a network error. Make sure 
ovirt-imageio-proxy service is installed and configured, and 
ovirt-engine's certificate is registered as a valid CA in the browser. 
The certificate can be fetched from 
https:///ovirt-engine/services/pki-resource?resource=ca-certificate=X509-PEM-CA


BUT! When I'm uploading big disks (using Firefox), e.g. > 100 GB, 
ovirt-engine interrupts the process, giving the same error as above. I 
think it is a kind of timeout and the error doesn't show the real problem.


On engine.log is the same error. on image-proxy.log:

(Thread-5834) WARNING 2018-01-22 10:46:07,708 web:89:web:(log_response) 
1.1.1.1 - PUT /d69d1a59-ed5c-4097-a9af-5e2d36299181 401 360 (0.00s)
(Thread-5835) INFO 2018-01-22 10:46:10,716 web:89:web:(log_response) 
1.1.1.1 - OPTIONS /d69d1a59-ed5c-4097-a9af-5e2d36299181 204 0 (0.00s)
(Thread-5836) ERROR 2018-01-22 10:46:10,722 
session:314:root:(_decode_ovirt_ticket) Failed to verify proxy ticket: 
Ticket life time expired
(Thread-5836) ERROR 2018-01-22 10:46:10,722 
session:137:root:(start_session) Error starting session: Unable to 
verify proxy ticket

Traceback (most recent call last):
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
135, in start_session

     session_id = _create_update_session(ticket, session_id)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
194, in _create_update_session

     ticket_vars = _decode_proxy_ticket(authorization)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
237, in _decode_proxy_ticket

     payload = _decode_ovirt_ticket(ticket)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
315, in _decode_ovirt_ticket

     raise ValueError("Unable to verify proxy ticket")
ValueError: Unable to verify proxy ticket
(Thread-5836) WARNING 2018-01-22 10:46:10,722 web:89:web:(log_response) 
1.1.1.1 - PUT /d69d1a59-ed5c-4097-a9af-5e2d36299181 401 360 (0.00s)
(Thread-5837) INFO 2018-01-22 10:46:13,731 web:89:web:(log_response) 
1.1.1.1 - OPTIONS /d69d1a59-ed5c-4097-a9af-5e2d36299181 204 0 (0.00s)
(Thread-5838) ERROR 2018-01-22 10:46:13,737 
session:314:root:(_decode_ovirt_ticket) Failed to verify proxy ticket: 
Ticket life time expired
(Thread-5838) ERROR 2018-01-22 10:46:13,737 
session:137:root:(start_session) Error starting session: Unable to 
verify proxy ticket

Traceback (most recent call last):
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
135, in start_session

     session_id = _create_update_session(ticket, session_id)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
194, in _create_update_session

     ticket_vars = _decode_proxy_ticket(authorization)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
237, in _decode_proxy_ticket

     payload = _decode_ovirt_ticket(ticket)
   File 
"/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/session.py", line 
315, in _decode_ovirt_ticket

     raise ValueError("Unable to verify proxy ticket")
ValueError: Unable to verify proxy ticket
(Thread-5838) WARNING 2018-01-22 10:46:13,738 web:89:web:(log_response) 
1.1.1.1 - PUT /d69d1a59-ed5c-4097-a9af-5e2d36299181 401 360 (0.00s)
(Thread-5839) INFO 2018-01-22 10:46:16,745 web:89:web:(log_response) 
1.1.1.1 - OPTIONS /d69d1a59-ed5c-4097-a9af-5e2d36299181 204 0 (0.00s)
(Thread-5840) ERROR 2018-01-22 10:46:16,750 
session:314:root:(_decode_ovirt_ticket) Failed to verify proxy ticket: 
Ticket life time expired
(Thread-5840) ERROR 2018-01-22 10:46:16,750 
session:137:root:(start_session) Error starting session: Unable to 
verify proxy ticket


There is a bugzilla bug, but it's confusing: I'm already using 
version 4.2 and the error persists.

Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1514887 (no, I'm not 
restarting the daemon, but it seems something similar)


A thread on version 4.0.5:

http://lists.ovirt.org/pipermail/users/2016-November/077706.html

I need some help...


Hi Gabriel,

i posted to this list with a similar issue on 2017-09-12. you can upload 
disks via CLI, an example script "upload_disk.py" is part of the package 
python-ovirt-engine-sdk4. i had some trouble using the script, but it 
was mainly my fault. as far as i can remember it worked in the end...
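
a sketch of getting started with it (the path may differ per SDK 
version; engine URL, credentials, CA file and image path have to be 
filled in as documented inside the script):

  yum install python-ovirt-engine-sdk4
  rpm -ql python-ovirt-engine-sdk4 | grep upload_disk.py
  # copy the example somewhere, edit the connection/image settings
  # documented in the script, then run it
  python upload_disk.py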


good luck
Matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] icons for custom defined OS

2018-09-11 Thread Matthias Leopold

Hi,

I defined additional operating systems in my oVirt 4.2.6 installation 
according to 
https://www.ovirt.org/develop/release-management/features/virt/os-info/. 
Now i want to have custom default icons for use in VM portal for these 
new OS definitions. How can i do this? I only found a way to change the 
icon as individual VM portal user, this is nice, but not exactly what i 
want.
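
For context, such a definition is just a properties file; a minimal 
sketch (file name, key and values are examples):

  # /etc/ovirt-engine/osinfo.conf.d/20-myos.properties
  os.myos.id.value = 1001
  os.myos.name.value = My Custom OS
  os.myos.derivedFrom.value = other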


thx
matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XTKHNOR4YMD6EB3RWRLYCR7DEATATT3C/


[ovirt-users] missing features in VM Portal (oVirt 4.2.2)

2018-04-25 Thread Matthias Leopold

Hi,

i'm considering to upgrade from 4.1.9 to 4.2.2.

When looking at the new "VM Portal" i'm missing two things:
- VM power off
- Snapshots

Both issues have been mentioned here. Are these being worked on? Is 
there a timeframe when these will be available?


Nevertheless 4.2 looks good to me, thanks for great work!

Matthias

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RPM conflicts when upgrading from 4.1.9 to 4.2.2

2018-04-23 Thread Matthias Leopold



Am 2018-04-23 um 11:44 schrieb Matthias Leopold:

Hi,

i tried to upgrade my oVirt 4.1.9 test environment to version 4.2.2.
this failed on the engine host with messages like

2018-04-23 11:10:57,716+0200 ERROR 
otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum 
Test-Transaktionsfehler:   file /usr/share/ansible/roles/ovirt-manageiq 
from install of ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts 
with file from package ovirt-ansible-manageiq-1.1.6-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-image-template from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-image-template-1.1.5-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-infra from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-infra-1.1.4-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-vm-infra from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-vm-infra-1.1.5-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-cluster-upgrade from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-cluster-upgrade-1.1.6-1.el7.centos.noarch


OK, now i found https://bugzilla.redhat.com/show_bug.cgi?id=1519301 
which describes the problem, but doesn't give a solution...


it sounds like removing the "optional in 4.1.9" package 
ovirt-ansible-roles before the upgrade would prevent the error, but i 
would like to finish the upgrade in my test environment where i didn't 
do this. how can i do that?


thx
matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] RPM conflicts when upgrading from 4.1.9 to 4.2.2

2018-04-23 Thread Matthias Leopold



Am 2018-04-23 um 12:08 schrieb Matthias Leopold:



Am 2018-04-23 um 11:44 schrieb Matthias Leopold:

Hi,

i tried to upgrade my oVirt 4.1.9 test environment to version 4.2.2.
this failed on the engine host with messages like

2018-04-23 11:10:57,716+0200 ERROR 
otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum 
Test-Transaktionsfehler:   file 
/usr/share/ansible/roles/ovirt-manageiq from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-manageiq-1.1.6-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-image-template from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-image-template-1.1.5-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-infra from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-infra-1.1.4-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-vm-infra from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-vm-infra-1.1.5-1.el7.centos.noarch
   file /usr/share/ansible/roles/ovirt-cluster-upgrade from install of 
ovirt-ansible-roles-1.0.4-1.el7.centos.noarch conflicts with file from 
package ovirt-ansible-cluster-upgrade-1.1.6-1.el7.centos.noarch


OK, now i found https://bugzilla.redhat.com/show_bug.cgi?id=1519301 
which describes the problem, but doesn't give a solution...


it sounds like removing the "optional in 4.1.9" package 
ovirt-ansible-roles before the upgrade would prevent the error, but i 
would like to finish the upgrade in my test environment where i didn't 
do this. how can i do that?




i finally resolved the situation by force removing (rpm -e --nodeps) the 
ovirt-ansible-* packages, after that engine-setup finishes ok, in the 
end i execute "yum install ovirt-ansible-roles". this _seems_ to work...
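
in commands, roughly (the exact package set may differ per installation):

  rpm -qa 'ovirt-ansible*'
  rpm -e --nodeps $(rpm -qa 'ovirt-ansible*')
  engine-setup
  yum install ovirt-ansible-roles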


thx matthias





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Re: Snapshot error with Cinder/Ceph disk

2018-06-29 Thread Matthias Leopold



Am 2018-06-27 um 19:39 schrieb Nir Soffer:
On Wed, Jun 27, 2018 at 6:47 PM Matthias Leopold 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:


Hi,

i'm having problems with snapshotting Cinder/Ceph disks since upgrading
to 4.2. Observed behavior has changed between 4.2.2 and 4.2.4.

With oVirt 4.2.2 and Cinder 11.1.0
- oVirt snapshot fails (according to oVirt), but is listed in GUI
- disk snapshot are visible in oVirt storage domain tab and Cinder CLI
- first try to remove the oVirt snapshot fails (according to oVirt),
but
disk snapshots are removed from oVirt storage domain tab and Cinder CLI
- second try to remove oVirt snapshot succeeds

With oVirt 4.2.4 and Cinder 11.1.1
- oVirt snapshot fails "completely"
- in Cinder logs i can see that disk snapshots are created and
immediately deleted

oVirt error log message is the same in both cases: "Failed in
'SnapshotVDS' method"

I'm attaching logs from oVirt engine from the latter case.

thx for any advice
matthias


Can you file a ovirt-engine bug?

Nir


thanks, i opened https://bugzilla.redhat.com/show_bug.cgi?id=1596619

matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ERDT6LLSSIW2S6PXDNNFTVQ3JJL7TPYZ/


[ovirt-users] Re: OpenStack Block Storage Provider without authentication doesn't work

2018-10-23 Thread Matthias Leopold

i finally managed it: https://bugzilla.redhat.com/show_bug.cgi?id=1642074

matthias

Am 07.10.18 um 08:08 schrieb Idan Shaby:

Hi Matthias,

Thanks for the detailed information!
It looks like you've found a bug. A NullPointerException should never occur.
Can you please file a BZ and attach all the relevant logs so we can 
understand the root cause for it?


Thanks,
Idan


On Wed, Oct 3, 2018 at 3:01 PM Matthias Leopold 
<mailto:matthias.leop...@meduniwien.ac.at>> wrote:




Am 2018-10-03 um 12:14 schrieb Matthias Leopold:
 > Hi,
 >
 > we're successfully using Cinder as a Block Storage Provider for
oVirt
 > with customized Cinder installation in a CentOS host according to
 > OpenStack docs. Now i wanted to try out running Cinder in Docker and
 > followed the instructions from
 > https://thenewstack.io/deploying-cinder-stand-alone-storage-service/
 > (customized for use with Ceph RBD).
 >
 > This works to the point where i can setup an external provider and
 > consequently a storage domain in oVirt. I set up the provider
without
 > authentication and testing this (by pressing "Test" button)
works. When
 > i want to create disks i realize that oVirt doesn't recognize the
 > "Volume Type" definitions. In engine.log i see messages like
 >
 > 2018-10-03 12:04:46,990+02 ERROR
 >

[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery]

 > (default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Query
 > 'GetCinderVolumeTypesByStorageDomainIdQuery' failed: null
 > 2018-10-03 12:04:46,990+02 ERROR
 >

[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery]

 > (default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Exception:
 > java.lang.NullPointerException
 >
 > I very much suspect that not using authentication (because of
 > "auth_strategy = noauth" in cinder.conf) is the culprit. Listing
types
 > with cinder CLI from the engine host works (when using the
appropriate
 > environment). Can some of the RH devs confirm this behaviour? Is
this an
 > engine bug?
 >
 > thanks
 > Matthias
 >

I switched one of the existing, working Cinder hosts (no Docker
involved) to "auth_strategy = noauth" and the problem (and error
message
in engine.log) with querying volume types is the same. So - regardless
if i wanted to actually use it that way - I consider this a bug in
Cinder integration in oVirt.

Matthias
___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6OGQVDIYS7PSDIBXANP2IEMQMSEGQQ3/



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7LWVY7ALR5V6EKNPINWS5B7PKHT5CFLK/


[ovirt-users] Re: [ANN] oVirt 4.2.7 is now generally available

2018-11-05 Thread Matthias Leopold
Is it really intended that ovirt-imageio-common suddenly pulls in 
qemu-img-ev? Up until 4.2.6 you didn't need the virt stack on the engine 
host.


thanks
matthias

Am 02.11.18 um 13:20 schrieb Sandro Bonazzola:
The oVirt Project is pleased to announce the general availability of 
oVirt 4.2.7, as of November 2nd, 2018.
This update is the seventh in a series of stabilization updates to the 
4.2 series.

This release is available now for:
* Red Hat Enterprise Linux 7.5 or later (7.6 recommended)
* CentOS Linux (or similar) 7.5 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.5 or later (7.6 recommended)
* CentOS Linux (or similar) 7.5 or later
* oVirt Node 4.2
See the release notes [1] for installation / upgrade instructions and a 
list of new features and bugs fixed.


Notes:
- oVirt Appliance is available
- oVirt Node update is available, ISO will be available soon [2]
- oVirt Tools Setup ISO is available[2]

oVirt Node has been updated including:
- oVirt 4.2.7: http://www.ovirt.org/release/4.2.7/
- Ansible 2.7.0: 
https://github.com/ansible/ansible/blob/stable-2.7/changelogs/CHANGELOG-v2.7.rst#v2-7-0
- GlusterFS 3.12.15: 
https://docs.gluster.org/en/latest/release-notes/3.12.15/


CentOS Errata included:
- CEBA-2018:3013 CentOS 7 tzdata BugFix Update 
<https://lists.centos.org/pipermail/centos-announce/2018-October/023074.html>
- centos-release-7-5.1804.5.el7.centos update 
<https://lists.centos.org/pipermail/centos-announce/2018-October/023050.html>


Updated packages:
+ansible-2.7.0-1.el7.noarch
+centos-release-7-5.1804.5.el7.centos.x86_64
+cockpit-ovirt-dashboard-0.11.37-1.el7.noarch
+glusterfs-3.12.15-1.el7.x86_64
+glusterfs-api-3.12.15-1.el7.x86_64
+glusterfs-cli-3.12.15-1.el7.x86_64
+glusterfs-client-xlators-3.12.15-1.el7.x86_64
+glusterfs-events-3.12.15-1.el7.x86_64
+glusterfs-fuse-3.12.15-1.el7.x86_64
+glusterfs-geo-replication-3.12.15-1.el7.x86_64
+glusterfs-gnfs-3.12.15-1.el7.x86_64
+glusterfs-libs-3.12.15-1.el7.x86_64
+glusterfs-rdma-3.12.15-1.el7.x86_64
+glusterfs-server-3.12.15-1.el7.x86_64
+imgbased-1.0.29-1.el7.noarch
+ovirt-hosted-engine-ha-2.2.18-1.el7.noarch
+ovirt-hosted-engine-setup-2.2.30-1.el7.noarch
+ovirt-imageio-common-1.4.5-0.el7.x86_64
+ovirt-imageio-daemon-1.4.5-0.el7.noarch
+ovirt-node-ng-image-update-placeholder-4.2.7-1.el7.noarch
+ovirt-provider-ovn-driver-1.2.16-1.el7.noarch
+ovirt-release-host-node-4.2.7-1.el7.noarch
+ovirt-release42-4.2.7-1.el7.noarch
+pulp-rpm-handlers-2.13.4.9-1.el7.noarch
+python-imgbased-1.0.29-1.el7.noarch
+python-ovirt-engine-sdk4-4.2.9-2.el7.x86_64
+python-pulp-agent-lib-2.13.4.14-1.el7.noarch
+python-pulp-common-2.13.4.14-1.el7.noarch
+python-pulp-rpm-common-2.13.4.9-1.el7.noarch
+python2-gluster-3.12.15-1.el7.x86_64
+python2-pyOpenSSL-17.3.0-3.el7.noarch
+qemu-img-ev-2.10.0-21.el7_5.7.1.x86_64
+qemu-kvm-common-ev-2.10.0-21.el7_5.7.1.x86_64
+qemu-kvm-ev-2.10.0-21.el7_5.7.1.x86_64
+tzdata-2018f-2.el7.noarch
+vdsm-4.20.43-1.el7.x86_64
+vdsm-api-4.20.43-1.el7.noarch
+vdsm-client-4.20.43-1.el7.noarch
+vdsm-common-4.20.43-1.el7.noarch
+vdsm-gluster-4.20.43-1.el7.x86_64
+vdsm-hook-ethtool-options-4.20.43-1.el7.noarch
+vdsm-hook-fcoe-4.20.43-1.el7.noarch
+vdsm-hook-openstacknet-4.20.43-1.el7.noarch
+vdsm-hook-vhostmd-4.20.43-1.el7.noarch
+vdsm-hook-vmfex-dev-4.20.43-1.el7.noarch
+vdsm-http-4.20.43-1.el7.noarch
+vdsm-jsonrpc-4.20.43-1.el7.noarch
+vdsm-network-4.20.43-1.el7.x86_64
+vdsm-python-4.20.43-1.el7.noarch
+vdsm-yajsonrpc-4.20.43-1.el7.noarch


Additional Resources:
* Read more about the oVirt 4.2.7 release 
highlights:http://www.ovirt.org/release/4.2.7/

* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt 
blog:http://www.ovirt.org/blog/

[1] http://www.ovirt.org/release/4.2.7/
[2] http://resources.ovirt.org/pub/ovirt-4.2/iso/


--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://red.ht/sig>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7RHL4HWQFLE2HDXCOZAGIYBELRSBEIAH/



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/arc

[ovirt-users] updating Spacewalk registered Enterprise hosts to 4.2.7

2018-11-05 Thread Matthias Leopold

Hi,

just for the records:
I couldn't update my Spacewalk (2.6) registered Enterprise (CentOS) 
hosts from oVirt 4.2.6 to 4.2.7. Error message is


2018-11-05 15:03:51,059 p=2329 u=ovirt |  Using 
/usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2018-11-05 15:03:52,152 passlib.registry registered 'md5_crypt' handler: 

2018-11-05 15:03:52,165 p=2329 u=ovirt |  PLAY [all] 
*
2018-11-05 15:03:52,184 p=2329 u=ovirt |  TASK [ovirt-host-upgrade : 
Install ovirt-host package if it isn't installed] ***
2018-11-05 15:04:00,078 p=2329 u=ovirt |  fatal: 
[foo.bar.meduniwien.ac.at]: FAILED! => {

"ansible_facts": {
"pkg_mgr": "yum"
},
"changed": false
}

MSG:

Error from repoquery: ['/usr/bin/repoquery', '--show-duplicates', 
'--plugins', '--quiet', '-c', None, '--disablerepo', '', '--enablerepo', 
'', '--qf', '%{epoch}:%{name}-%{version}-%{release}.%{arch}', 
'ovirt-host']: Error accessing file for config file:///root/--disablerepo

Error accessing file for config file:///root/--disablerepo

Since this is only a test environment and i could easily switch the 
hosts to upstream repo mirrors I didn't do any further debugging. Maybe 
the Spacewalk server is too old...


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OK4G6TCSXK65ZCREUM6X2666IHBSBPXC/


[ovirt-users] Re: updating Spacewalk registered Enterprise hosts to 4.2.7

2018-11-08 Thread Matthias Leopold

i think this is related to https://github.com/ansible/ansible/issues/46603.
my colleague had similar problems with ansible 2.7.0 and yum; he told me 
his issue went away with ansible 2.7.1.

would be nice if oVirt ansible could be updated to 2.7.1
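
i.e., a sketch for checking affected hosts/engines:

  rpm -q ansible      # 2.7.0 is hit by ansible issue #46603
  yum update ansible  # 2.7.1 should carry the fix for the yum module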

thx
matthias

Am 05.11.18 um 16:35 schrieb Matthias Leopold:

Hi,

just for the records:
I couldn't update my Spacewalk (2.6) registered Enterprise (CentOS) 
hosts from oVirt 4.2.6 to 4.2.7. Error message is


2018-11-05 15:03:51,059 p=2329 u=ovirt |  Using 
/usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2018-11-05 15:03:52,152 passlib.registry registered 'md5_crypt' handler: 

2018-11-05 15:03:52,165 p=2329 u=ovirt |  PLAY [all] 
*
2018-11-05 15:03:52,184 p=2329 u=ovirt |  TASK [ovirt-host-upgrade : 
Install ovirt-host package if it isn't installed] ***
2018-11-05 15:04:00,078 p=2329 u=ovirt |  fatal: 
[foo.bar.meduniwien.ac.at]: FAILED! => {

     "ansible_facts": {
     "pkg_mgr": "yum"
     },
     "changed": false
}

MSG:

Error from repoquery: ['/usr/bin/repoquery', '--show-duplicates', 
'--plugins', '--quiet', '-c', None, '--disablerepo', '', '--enablerepo', 
'', '--qf', '%{epoch}:%{name}-%{version}-%{release}.%{arch}', 
'ovirt-host']: Error accessing file for config file:///root/--disablerepo

Error accessing file for config file:///root/--disablerepo

Since this is only a test environment and i could easily switch the 
hosts to upstream repo mirrors I didn't do any further debugging. Maybe 
the Spacewalk server is too old...


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OK4G6TCSXK65ZCREUM6X2666IHBSBPXC/ 



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/S7CI6PKCISXFHO7IE3JCSGOSLNTKEFET/


[ovirt-users] Re: Best Openstack version to integrate with oVirt 4.2.7

2018-11-15 Thread Matthias Leopold

Hi,

we are extensively using Ceph storage through the present OpenStack/Cinder 
integration in oVirt 4.2, which works for us. The OpenStack version in 
use is Pike.


I already heard about the plan to move to cinderlib which sounds 
promising. I very much hope there will be a migration scenario for users 
of "full" Openstack/Cinder installations when upgrading to oVirt 4.3.


thanks
Matthias

Am 11.11.18 um 15:49 schrieb Nir Soffer:
On Sat, Nov 10, 2018 at 6:52 PM Gianluca Cecchi 
mailto:gianluca.cec...@gmail.com>> wrote:


Hello,
do you think it is ok to use Rocky version of Openstack to integrate
its services with oVirt 4.2.7 on CentOS 7?
I see on https://repos.fedorapeople.org/repos/openstack/ that, if
Rocky is too new, between the older releases available there are,
from newer to older:
Queens
Pike
Ocata
Newton


Nobody working on oVirt is testing any release of Openstack in the 
recent years.


The Cinder/Ceph support was released as tech preview in 3.6, and no work was
done since then, and I think this will be deprecated soon.

For 4.3 we are working on a different direction, using Cinderlib
https://github.com/Akrog/cinderlib

This is a way to use Cinder drivers without Openstack installation.
The same library is used to provide Cinder based storage in Kubernetes.
https://github.com/Akrog/ember-csi

You can find an early draft here for this feature. Note that it is 
expected to be updated in the next weeks, but it can give you some idea 
on what we are working on.
https://github.com/oVirt/ovirt-site/blob/f88f38ebb9afff656ab68a2d60c2b3ae88c21860/source/develop/release-management/features/storage/cinderlib-integration.html.md

This will be tested with some version of Cinder drivers. I guess we will 
have more info about it during 4.3 development.

At the moment I have two separate lab environments:
oVirt with 4.2.7
Openstack with Rocky (single host with packstack allinone)

just trying first integration steps with these versions, it seems
I'm not able to communicate with glance, because I get in engine.log
2018-11-10 17:32:58,386+01 ERROR

[org.ovirt.engine.core.bll.provider.storage.AbstractOpenStackStorageProviderProxy]
(default task-51) [e2fccee7-1bb2-400f-b8d3-b87b679117d1] Not Found
(OpenStack response error code: 404)


I think Glance support should work. Elad, which version of Glance was
tested for 4.2?

Regarding which Openstack version can work best with oVirt, maybe
Openstack guys I added can give a better answer.

Nir

Nothing in glance logs on openstack, apparently.
In my test I'm using
http://xxx.xxx.xxx.xxx:9292 as provider url
checked the authentication check box and
glance user with its password
35357 as the port and services as the tenant

a telnet on port 9292 of openstack server from engine to openstack is ok

similar with cinder I get:
2018-11-10 17:45:42,226+01 ERROR

[org.ovirt.engine.core.bll.provider.storage.AbstractOpenStackStorageProviderProxy]
(default task-50) [32a31aa7-fe3f-460c-a8b9-cc9b277deab7] Not Found
(OpenStack response error code: 404)

So before digging more I would lile to be certain which one is
currently the best combination, possibly keeping as fixed the oVirt
version to 4.2.7.

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-le...@ovirt.org
<mailto:users-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:

https://lists.ovirt.org/archives/list/users@ovirt.org/message/C46XG5YF3JTAT7BF72RXND4EHD4ZB5GC/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZPHSATIWZXFFPDHDCJXKPSYLWYU5VQ4E/



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NHFWIMZAP222MCHUDOJZVERM2JGJIVRK/


[ovirt-users] Re: OpenStack Block Storage Provider without authentication doesn't work

2018-10-03 Thread Matthias Leopold



Am 2018-10-03 um 12:14 schrieb Matthias Leopold:

Hi,

we're successfully using Cinder as a Block Storage Provider for oVirt 
with customized Cinder installation in a CentOS host according to 
OpenStack docs. Now i wanted to try out running Cinder in Docker and 
followed the instructions from 
https://thenewstack.io/deploying-cinder-stand-alone-storage-service/ 
(customized for use with Ceph RBD).


This works to the point where i can setup an external provider and 
consequently a storage domain in oVirt. I set up the provider without 
authentication and testing this (by pressing "Test" button) works. When 
i want to create disks i realize that oVirt doesn't recognize the 
"Volume Type" definitions. In engine.log i see messages like


2018-10-03 12:04:46,990+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery] 
(default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Query 
'GetCinderVolumeTypesByStorageDomainIdQuery' failed: null
2018-10-03 12:04:46,990+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery] 
(default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Exception: 
java.lang.NullPointerException


I very much suspect that not using authentication (because of 
"auth_strategy = noauth" in cinder.conf) is the culprit. Listing types 
with cinder CLI from the engine host works (when using the appropriate 
environment). Can some of the RH devs confirm this behaviour? Is this an 
engine bug?


thanks
Matthias



I switched one of the existing, working Cinder hosts (no Docker 
involved) to "auth_strategy = noauth" and the problem (and error message 
in engine.log) with querying volume types is the same. So - regardless 
if i wanted to actually use it that way - I consider this a bug in 
Cinder integration in oVirt.


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K6OGQVDIYS7PSDIBXANP2IEMQMSEGQQ3/


[ovirt-users] OpenStack Block Storage Provider without authentication doesn't work

2018-10-03 Thread Matthias Leopold

Hi,

we're successfully using Cinder as a Block Storage Provider for oVirt 
with customized Cinder installation in a CentOS host according to 
OpenStack docs. Now i wanted to try out running Cinder in Docker and 
followed the instructions from 
https://thenewstack.io/deploying-cinder-stand-alone-storage-service/ 
(customized for use with Ceph RBD).


This works to the point where i can setup an external provider and 
consequently a storage domain in oVirt. I set up the provider without 
authentication and testing this (by pressing "Test" button) works. When 
i want to create disks i realize that oVirt doesn't recognize the 
"Volume Type" definitions. In engine.log i see messages like


2018-10-03 12:04:46,990+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery] 
(default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Query 
'GetCinderVolumeTypesByStorageDomainIdQuery' failed: null
2018-10-03 12:04:46,990+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.cinder.GetCinderVolumeTypesByStorageDomainIdQuery] 
(default task-72) [06d4fbf7-0b3c-46b3-8166-148ee7f67a4c] Exception: 
java.lang.NullPointerException


I very much suspect that not using authentication (because of 
"auth_strategy = noauth" in cinder.conf) is the culprit. Listing types 
with cinder CLI from the engine host works (when using the appropriate 
environment). Can some of the RH devs confirm this behaviour? Is this an 
engine bug?
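
For reference, the relevant part of the configuration (a sketch; the 
backend section and pool name are examples):

  # cinder.conf
  [DEFAULT]
  auth_strategy = noauth
  enabled_backends = ceph

  [ceph]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = ovirt
  rbd_user = cinder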


thanks
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/B2JG7HGUOPBTGWC2VICJZRVWPDCKB2BT/


[ovirt-users] Re: OpenStack Block Storage Provider without authentication doesn't work

2018-10-03 Thread Matthias Leopold

Am 2018-10-03 um 12:14 schrieb Matthias Leopold:

Query 'GetCinderVolumeTypesByStorageDomainIdQuery' failed: null


Forgot to mention the versions used:

oVirt 4.2.6
Cinder 11.1.2

thx
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z4W2F3SZZMGVVL7O6C3AO3KHJVYMC7RI/


[ovirt-users] Re: [ANN] oVirt 4.3.0 First Alpha Release is now available for testing

2018-11-29 Thread Matthias Leopold

What is the status of cinderlib integration in oVirt 4.3?

thank you
Matthias

Am 26.11.18 um 16:01 schrieb Sandro Bonazzola:
The oVirt Project is pleased to announce the availability of the First 
Alpha Release of oVirt 4.3.0, as of November 26th, 2018



This is pre-release software. This pre-release should not to be used in 
production.



Please take a look at our community page[1] to learn how to ask 
questions and interact with developers and users.


All issues or bugs should be reported via oVirt Bugzilla[2].


This update is the first alpha release of the 4.3.0 version.

This release brings more than 80 enhancements and more than 280 bug 
fixes on top of oVirt 4.2 series.



What's new in oVirt 4.3.0?

* Q35: Support booting virtual machines via UEFI

* Skylake-server and AMD EPYC support

* New smbus driver in windows guest tools

* Improved support for v2v

* Tech preview for oVirt on Fedora 28

* Hundreds of bug fixes on top of oVirt 4.2 series

* New VM portal (see a preview here: https://imgur.com/a/ExINpci)

* New Cluster upgrade UI



This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 7.6 or later

* CentOS Linux (or similar) 7.5 or later (7.6 recommended, enable CR 
repo for getting it since it's not yet officially released)




This release supports Hypervisor Hosts on x86_64 and ppc64le 
architectures for:


* Red Hat Enterprise Linux 7.6 or later

* CentOS Linux (or similar) 7.5 or later (7.6 recommended, enable CR 
repo for getting it since it's not yet officially released)


* oVirt Node 4.3 (available for x86_64 only)


Experimental tech preview for x86_64 and s390x architectures for Fedora 
28 is also included.



See the release notes draft [3] for installation / upgrade instructions 
and a list of new features and bugs fixed.



Notes:

- oVirt Appliance is already available for both CentOS 7 and Fedora 28 
(tech preview).


- oVirt Node NG  is already available for both CentOS 7 and Fedora 28 
(tech preview).



Additional Resources:

* Read more about the oVirt 4.3.0 release highlights: 
http://www.ovirt.org/release/4.3.0/


* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog: 
http://www.ovirt.org/blog/




[1] https://www.ovirt.org/community/

[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt

[3] http://www.ovirt.org/release/4.3.0/

[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/



--

SANDRO BONAZZOLA

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://red.ht/sig>


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X7OE6TXNYH5JWCYEECZCO4JBRMXIF34L/



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YVI44PFUGYB5DHSGBKL43AYT7C6D6ONW/


[ovirt-users] Re: [Cannot edit VM. Maximum number of sockets exceeded.]

2019-01-09 Thread Matthias Leopold



Am 09.01.19 um 12:51 schrieb Michal Skrivanek:




On 9 Jan 2019, at 11:12, Lucie Leistnerova  wrote:

Hi Matthias,

On 1/9/19 10:13 AM, Matthias Leopold wrote:

Hi,

when a user is managing a "higher number" (couldn't find the exact number yet, roughly 
>10) of VMs in VM Portal and wants to edit a VM he gets a "[Cannot edit VM. Maximum number of 
sockets exceeded.]" error message in the browser, which I also see in engine.log. I couldn't find 
the reason for this. I'm using squid as a SPICE Proxy at cluster level. oVirt version is 4.2.7, can 
anybody help me?


What exactly are you editing on the VM? The error sounds like the CPU count is 
higher than the engine configuration value allows:

# engine-config -g MaxNumOfVmSockets
MaxNumOfVmSockets: 16 version: 3.6
MaxNumOfVmSockets: 16 version: 4.0
MaxNumOfVmSockets: 16 version: 4.1
MaxNumOfVmSockets: 16 version: 4.2


I don’t think we’re changing topology in VM Portal, so you’re stuck with the 
restriction in the original template or whatever was set in webadmin. I guess 
we’re changing just the sockets? But then, e.g. if the VM has 4 cores/socket you 
could only set 4, 8, 16, up to 64, and there’s no indication nor validation in 
the UI, is there?

Thanks,
michal


thanks for answers everybody

this phenomenon indeed seems to be related to VMs with a "higher" count 
of vCPUs (the topology chosen by oVirt in the admin portal when creating 
the VM); now I saw it with VMs with >40 vCPUs. i always thought "sockets" 
referred to a communication protocol in the webUI or engine or something...


but the thing is: the user isn't trying to change vCPUs, he only wants 
to insert a CD, and when trying to update the VM for that he gets this 
error. that is annoying.
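
in case somebody hits the same wall before this is fixed: raising the 
engine-side limit at least unblocks such updates (a sketch, the value 
is an example):

  engine-config -s MaxNumOfVmSockets=64 --cver=4.2
  systemctl restart ovirt-engine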


thx
matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QP75MDC7O7KH3QYX5337N6QGYMYPVCR2/


[ovirt-users] Re: [Cannot edit VM. Maximum number of sockets exceeded.]

2019-01-09 Thread Matthias Leopold

may be of interest:
users are assigned "UserVmRunTimeManager" role

matthias

Am 09.01.19 um 10:13 schrieb Matthias Leopold:

Hi,

when a user is managing a "higher number" (couldn't find the exact 
number yet, roughly >10) of VMs in VM Portal and wants to edit a VM he 
gets a "[Cannot edit VM. Maximum number of sockets exceeded.]" error 
message in the browser, which I also see in engine.log. I couldn't find 
the reason for this. I'm using squid as a SPICE Proxy at cluster level. 
oVirt version is 4.2.7, can anybody help me?


thx
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HFLWFO53KAZHHDXK6QYROHKM5TZV3T3T/ 



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CKBKF7DWIYSUZWSDRCQO3SLKGARMB6YR/


[ovirt-users] [Cannot edit VM. Maximum number of sockets exceeded.]

2019-01-09 Thread Matthias Leopold

Hi,

when a user is managing a "higher number" (couldn't find the exact 
number yet, roughly >10) of VMs in VM Portal and wants to edit a VM he 
gets a "[Cannot edit VM. Maximum number of sockets exceeded.]" error 
message in the browser, which I also see in engine.log. I couldn't find 
the reason for this. I'm using squid as a SPICE Proxy at cluster level. 
oVirt version is 4.2.7, can anybody help me?


thx
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HFLWFO53KAZHHDXK6QYROHKM5TZV3T3T/


[ovirt-users] Re: icons for custom defined OS

2018-09-12 Thread Matthias Leopold



Am 2018-09-11 um 16:54 schrieb Matthias Leopold:

Hi,

I defined additional operating systems in my oVirt 4.2.6 installation 
according to 
https://www.ovirt.org/develop/release-management/features/virt/os-info/. 
Now i want to have custom default icons for use in VM portal for these 
new OS definitions. How can i do this? I only found a way to change the 
icon as individual VM portal user, this is nice, but not exactly what i 
want.





I found it myself, creating appropriate icons in 
/usr/share/ovirt-engine/icons was sufficient, before that i thought I 
would have to fiddle with the vm_icon* tables
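
roughly what that looked like (a sketch; the exact file names follow 
the icons already shipped in that directory):

  ls /usr/share/ovirt-engine/icons/
  # we added large and small PNGs for each custom OS entry, named
  # after the pattern of the shipped icons, then restarted the engine
  systemctl restart ovirt-engine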


matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y3UKRQGPR2N7QNVUYUWAEOCZNGEM22GP/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-02 Thread Matthias Leopold

No, I didn't...
I wasn't used to using both "rbd_user" and "rbd_keyring_conf" (I don't 
use "rbd_keyring_conf" in standalone Cinder), nevermind


After fixing that and dealing with the rbd feature issues I could 
proudly start my first VM with a cinderlib provisioned disk :-)
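
For the records, roughly the combination that worked here (all values 
are examples):

  # driver options of the Managed Block Storage domain
  volume_driver=cinder.volume.drivers.rbd.RBDDriver
  rbd_pool=ovirt-test
  rbd_user=ovirt
  rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring
  rbd_ceph_conf=/etc/ceph/ceph.conf

  # the "rbd feature issues": the kernel rbd client can only map
  # images with a reduced feature set, e.g. (sketch)
  rbd feature disable ovirt-test/volume-xxxx deep-flatten fast-diff object-map exclusive-lock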


Thanks for help!
I'll keep posting my experiences concerning cinderlib to this list.

Matthias

Am 01.04.19 um 16:24 schrieb Benny Zlotnik:

Did you pass the rbd_user when creating the storage domain?

On Mon, Apr 1, 2019 at 5:08 PM Matthias Leopold
 wrote:



Am 01.04.19 um 13:17 schrieb Benny Zlotnik:

OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
connecting to ceph cluster.
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 337, in _do_conn
   client.connect()
 File "rados.pyx", line 885, in rados.Rados.connect
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
run command 'storage_stats': Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.

I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike"
(openstack-cinder-11.2.0-1.el7.noarch)

Not sure if the version is related (I know it's been tested with
pike), but you can try and install the latest rocky (that's what I use
for development)


I upgraded cinder on engine and hypervisors to rocky and installed
missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf"
and "rbd_ceph_conf" as indicated and got as far as adding a "Managed
Block Storage" domain and creating a disk (which is also visible through
"rbd ls"). I used a keyring that is only authorized for the pool I
specified with "rbd_pool". When I try to start the VM it fails and I see
the following in supervdsm.log on hypervisor:

ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error
executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\',
\'attach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\',
\\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\',
\\\'--privsep_sock_path\\\',
\\\'/tmp/tmp5S8zZV/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
starting\\noslo.privsep.daemon: privsep process running with uid/gid:
0/0\\noslo.privsep.daemon: privsep process running with capabilities
(eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
privsep daemon running as pid 15944\\nTraceback (most recent call
last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
\\nsys.exit(main(sys.argv[1:]))\\n  File
"/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper",
line 137, in attach\\nattachment =
conn.connect_volume(conn_info[\\\'data\\\'])\\n  File
"/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96,
in connect_volume\\nrun_as_root=True)\\n  File
"/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
_execute\\nresult = self.__execute(*args, **kwargs)\\n  File
"/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
"/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line
207, in _wrap\\nreturn self.channel.remote_call(name, args,
kwargs)\\n  File
"/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in
remote_call\\nraise
exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
Unexpected error while running command.\\nCommand: rbd map
volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool ovirt-test --conf
/tmp/brickrbd_RmBvxA --id None --mon_host xxx.xxx.216.45:6789 --mon_host
xxx.xxx.216.54:6789 --mon_host xxx.xxx.216.55:6789\\nExit code:
22\\nStdout: u\\\'In some cases useful info is found in syslog - try
"dmesg | tail".n\\\'\\nStderr: u"2019-04-01 15:27:30.743196
7fe0b4632d40 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,:
(2) No such file or directorynrbd: sysfs write failedn2019-04-01
15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a keyring on
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/ke

[ovirt-users] share ISO storage domain between 4.2 and 4.3 ??

2019-03-25 Thread Matthias Leopold

Hi,

My test and production oVirt environments share the ISO domain. When I 
upgrade the test environment to 4.3 the ISO domain will be used by oVirt 
4.2 and 4.3 at the same time. Is that a problem?


thx
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QXMAUEVC6CS6RAYF2DWFQ5PL5ZW4IFVY/


[ovirt-users] trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold

Hi,

I upgraded my test environment to 4.3.2 and now I'm trying to set up a 
"Managed Block Storage" domain with our Ceph 12.2 cluster. I think I got 
all prerequisites, but when saving the configuration for the domain with 
volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of 
other options) I get "VolumeBackendAPIException: Bad or unexpected 
response from the storage volume backend API: Error connecting to ceph 
cluster" in engine log (full error below). Unfortunately this is a 
rather generic error message and I don't really know where to look next. 
Accessing the rbd pool from the engine host with rbd CLI and the 
configured "rbd_user" works flawlessly...
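
For example, a plain listing of the pool with the same credentials 
succeeds (pool and client names are the ones from my test setup, the 
keyring path is wherever you keep the file):

rbd --id ovirt-test_user_rbd --keyring <path to keyring> ls ovirt-test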


Although I don't think this is directly connected there is one other 
question that comes up for me: how are libvirt "Authentication Keys" 
handled with Ceph "Managed Block Storage" domains? With "standalone 
Cinder" setups like we are using now you have to configure a "provider" 
of type "OpenStack Block Storage" where you can configure these keys 
that are referenced in cinder.conf as "rbd_secret_uuid". How is this 
supposed to work now?
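
For context, in our standalone Cinder setup this corresponds to a 
cinder.conf stanza roughly like the following (backend name, pool and 
user are placeholders):

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = <pool>
rbd_user = <user>
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = <uuid of the libvirt secret configured via the provider>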


Thanks for any advice, we are using oVirt with Ceph heavily and are very 
interested in a tight integration of oVirt and Ceph.


Matthias


2019-04-01 11:14:55,128+02 ERROR 
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib 
execution failed: Traceback (most recent call last):

  File "./cinderlib-client.py", line 187, in main
    args.command(args)
  File "./cinderlib-client.py", line 275, in storage_stats
    backend = load_backend(args)
  File "./cinderlib-client.py", line 217, in load_backend
    return cl.Backend(**json.loads(args.driver))
  File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line 87, in __init__
    self.driver.check_for_setup_error()
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 288, in check_for_setup_error
    with RADOSClient(self):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 170, in __init__
    self.cluster, self.ioctx = driver._connect_to_rados(pool)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 346, in _connect_to_rados
    return _do_conn(pool, remote, timeout)
  File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in _wrapper
    return r.call(f, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
    raise attempt.get()
  File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 344, in _do_conn
    raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage 
volume backend API: Error connecting to ceph cluster.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2V53GZEMALXSOUHRJ7PRPZSOSOMRURK/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold



On 01.04.19 at 12:07, Benny Zlotnik wrote:

Hi,

Thanks for trying this out!
We added a separate log file for cinderlib in 4.3.2, it should be 
available under /var/log/ovirt-engine/cinderlib/cinderlib.log
They are not perfect yet, and more improvements are coming, but it might 
provide some insight about the issue



OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error 
connecting to ceph cluster.

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", 
line 337, in _do_conn

client.connect()
  File "rados.pyx", line 885, in rados.Rados.connect 
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)

OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to 
run command 'storage_stats': Bad or unexpected response from the storage 
volume backend API: Error connecting to ceph cluster.


I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike" 
(openstack-cinder-11.2.0-1.el7.noarch)




 >Although I don't think this is directly connected there is one other
 >question that comes up for me: how are libvirt "Authentication Keys"
 >handled with Ceph "Managed Block Storage" domains? With "standalone
 >Cinder" setups like we are using now you have to configure a "provider"
 >of type "OpenStack Block Storage" where you can configure these keys
 >that are referenced in cinder.conf as "rbd_secret_uuid". How is this
 >supposed to work now?

Now you are supposed to pass the secret in the driver options, something 
like this (using REST):


<property>
  <name>rbd_ceph_conf</name>
  <value>/etc/ceph/ceph.conf</value>
</property>
<property>
  <name>rbd_keyring_conf</name>
  <value>/etc/ceph/ceph.client.admin.keyring</value>
</property>




Shall I pass "rbd_secret_uuid" in the driver options? But where is this 
UUID created? Where is the ceph secret key stored in oVirt?


thanks
Matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MSGWGFANCKLT2UK3KJVZW5R6IBNRJEJS/


[ovirt-users] Re: trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys

2019-04-01 Thread Matthias Leopold


On 01.04.19 at 13:17, Benny Zlotnik wrote:

OK, /var/log/ovirt-engine/cinderlib/cinderlib.log says:

2019-04-01 11:14:54,925 - cinder.volume.drivers.rbd - ERROR - Error
connecting to ceph cluster.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 337, in _do_conn
  client.connect()
File "rados.pyx", line 885, in rados.Rados.connect
(/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.11/rpm/el7/BUILD/ceph-12.2.11/build/src/pybind/rados/pyrex/rados.c:9785)
OSError: [errno 95] error connecting to the cluster
2019-04-01 11:14:54,930 - root - ERROR - Failure occurred when trying to
run command 'storage_stats': Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.

I don't really know what to do with that either.
BTW, the cinder version on engine host is "pike"
(openstack-cinder-11.2.0-1.el7.noarch)

Not sure if the version is related (I know it's been tested with
pike), but you can try and install the latest rocky (that's what I use
for development)


I upgraded cinder on engine and hypervisors to rocky and installed 
missing "ceph-common" packages on hypervisors. I set "rbd_keyring_conf" 
and "rbd_ceph_conf" as indicated and got as far as adding a "Managed 
Block Storage" domain and creating a disk (which is also visible through 
"rbd ls"). I used a keyring that is only authorized for the pool I 
specified with "rbd_pool". When I try to start the VM it fails and I see 
the following in supervdsm.log on hypervisor:


ManagedVolumeHelperFailed: Managed Volume Helper failed.: Error 
executing helper: Command ['/usr/libexec/vdsm/managedvolume-helper', 
'attach'] failed with rc=1 out='' err=

oslo.privsep.daemon: Running privsep helper: ['sudo', 'privsep-helper', 
'--privsep_context', 'os_brick.privileged.default', 
'--privsep_sock_path', '/tmp/tmp5S8zZV/privsep.sock']
oslo.privsep.daemon: Spawned new privsep daemon via rootwrap
oslo.privsep.daemon: privsep daemon starting
oslo.privsep.daemon: privsep process running with uid/gid: 0/0
oslo.privsep.daemon: privsep process running with capabilities 
(eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
oslo.privsep.daemon: privsep daemon running as pid 15944
Traceback (most recent call last):
  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in <module>
    sys.exit(main(sys.argv[1:]))
  File "/usr/libexec/vdsm/managedvolume-helper", line 77, in main
    args.command(args)
  File "/usr/libexec/vdsm/managedvolume-helper", line 137, in attach
    attachment = conn.connect_volume(conn_info['data'])
  File "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 96, in connect_volume
    run_as_root=True)
  File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in _execute
    result = self.__execute(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, in execute
    return execute_root(*cmd, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, in _wrap
    return self.channel.remote_call(name, args, kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in remote_call
    raise exc_type(*result[2])
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error 
while running command.
Command: rbd map volume-36f5eb75-329e-4bd2-88d0-6f0bfe5d1040 --pool 
ovirt-test --conf /tmp/brickrbd_RmBvxA --id None --mon_host 
xxx.xxx.216.45:6789 --mon_host xxx.xxx.216.54:6789 --mon_host 
xxx.xxx.216.55:6789
Exit code: 22
Stdout: u'In some cases useful info is found in syslog - try "dmesg | tail".\n'
Stderr: u"2019-04-01 15:27:30.743196 7fe0b4632d40 -1 auth: unable to 
find a keyring on 
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: 
(2) No such file or directory
rbd: sysfs write failed
2019-04-01 15:27:30.746987 7fe0b4632d40 -1 auth: unable to find a 
keyring on 
/etc/ceph/ceph.client.None.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: 
(2) No such file or directory
2019-04-01 15:27:30.747896 7fe0b4632d40 -1 monclient: authenticate 
NOTE: no keyring found; disabled cephx authentication
2019-04-01 15:27:30.747903 7fe0b4632d40  0 librados: client.None 
authentication error (95) Operation not supported
rbd: couldn't connect to the cluster!
rbd: map failed: (22) Invalid argument"


I tried to provide an /etc/ceph directory with ceph.conf and client 
keyring on the hypervisors (as configured in the driver options). This 
didn't solve it, and it doesn't seem to be the right way anyway, since 
the mentioned /tmp/brickrbd_RmBvxA already contains the needed keyring 
data. What strikes me is that "rbd map" is called with "--id None", as 
if the rbd user were not passed to the connector at all. Please give me 
some advice what's wrong.



[ovirt-users] cinderlib: VM migration fails

2019-04-08 Thread Matthias Leopold

Hi,

after I successfully started my first VM with a cinderlib attached disk 
in oVirt 4.3.2 I now want to test basic operations. I immediately 
learned that migrating this VM (OS disk: iSCSI, 2nd disk: Managed Block) 
fails with a java.lang.NullPointerException (see below) in engine.log. 
This even happens when the cinderlib disk is deactivated.
Shall I report things like this here, shall I open a bug report or shall 
I just wait because the feature is under development?


thx
Matthias


2019-04-08 12:57:40,250+02 INFO 
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(default task-66) [4ef05101] cinderlib output: {"driver_volume_type": 
"rbd", "data": {"secret_type": "ceph", "name": 
"ovirt-test/volume-2f053070-f5b7-4f04-856c-87a56d70cd75", 
"auth_enabled": true, "keyring": "[client.ovirt-test_user_rbd]\n\tkey = 
xxx\n", "cluster_name": "ceph", "secret_uuid": null, "hosts": 
["xxx.xxx.216.45", "xxx.xxx.216.54", "xxx.xxx.216.55"], "volume_id": 
"2f053070-f5b7-4f04-856c-87a56d70cd75", "discard": true, 
"auth_username": "ovirt-test_user_rbd", "ports": ["6789", "6789", "6789"]}}
2019-04-08 12:57:40,256+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] 
(default task-66) [4ef05101] START, 
AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01, 
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456', 
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'}), log 
id: 67d3a79e
2019-04-08 12:57:40,262+02 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] 
(default task-66) [4ef05101] Failed in 
'AttachManagedBlockStorageVolumeVDS' method, for vds: 'ov-test-04-01'; 
host: 'ov-test-04-01.foo.bar': null
2019-04-08 12:57:40,262+02 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] 
(default task-66) [4ef05101] Command 
'AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01, 
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456', 
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'})' 
execution failed: null
2019-04-08 12:57:40,262+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] 
(default task-66) [4ef05101] FINISH, 
AttachManagedBlockStorageVolumeVDSCommand, return: , log id: 67d3a79e
2019-04-08 12:57:40,310+02 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-66) [4ef05101] EVENT_ID: VM_MIGRATION_FAILED(65), 
Migration failed  (VM: ovirt-test01.srv, Source: ov-test-04-03).
2019-04-08 12:57:40,314+02 INFO 
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66) 
[4ef05101] Lock freed to object 
'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]', 
sharedLocks=''}'
2019-04-08 12:57:40,314+02 ERROR 
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66) 
[4ef05101] Command 'org.ovirt.engine.core.bll.MigrateVmCommand' failed: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
java.lang.NullPointerException (Failed with error ENGINE and code 5001)
2019-04-08 12:57:40,314+02 ERROR 
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66) 
[4ef05101] Exception: javax.ejb.EJBException: 
org.ovirt.engine.core.common.errors.EngineException: EngineException: 
java.lang.NullPointerException (Failed with error ENGINE and code 5001)

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HSU2OYTNMR3KU4MN2NV5IAP72G37TYH3/


[ovirt-users] Re: cinderlib: VM migration fails

2019-04-08 Thread Matthias Leopold

https://bugzilla.redhat.com/show_bug.cgi?id=1697496

Am 08.04.19 um 13:15 schrieb Benny Zlotnik:

Please open a bug for this, with vdsm and supervdsm logs

On Mon, Apr 8, 2019 at 2:13 PM Matthias Leopold
 wrote:


Hi,

after I successfully started my first VM with a cinderlib attached disk
in oVirt 4.3.2 I now want to test basic operations. I immediately
learned that migrating this VM (OS disk: iSCSI, 2nd disk: Managed Block)
fails with a java.lang.NullPointerException (see below) in engine.log.
This even happens when the cinderlib disk is deactivated.
Shall I report things like this here, shall I open a bug report or shall
I just wait because the feature is under development?

thx
Matthias


2019-04-08 12:57:40,250+02 INFO
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
(default task-66) [4ef05101] cinderlib output: {"driver_volume_type":
"rbd", "data": {"secret_type": "ceph", "name":
"ovirt-test/volume-2f053070-f5b7-4f04-856c-87a56d70cd75",
"auth_enabled": true, "keyring": "[client.ovirt-test_user_rbd]\n\tkey =
xxx\n", "cluster_name": "ceph", "secret_uuid": null, "hosts":
["xxx.xxx.216.45", "xxx.xxx.216.54", "xxx.xxx.216.55"], "volume_id":
"2f053070-f5b7-4f04-856c-87a56d70cd75", "discard": true,
"auth_username": "ovirt-test_user_rbd", "ports": ["6789", "6789", "6789"]}}
2019-04-08 12:57:40,256+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] START,
AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'}), log
id: 67d3a79e
2019-04-08 12:57:40,262+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] Failed in
'AttachManagedBlockStorageVolumeVDS' method, for vds: 'ov-test-04-01';
host: 'ov-test-04-01.foo.bar': null
2019-04-08 12:57:40,262+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] Command
'AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'})'
execution failed: null
2019-04-08 12:57:40,262+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] FINISH,
AttachManagedBlockStorageVolumeVDSCommand, return: , log id: 67d3a79e
2019-04-08 12:57:40,310+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-66) [4ef05101] EVENT_ID: VM_MIGRATION_FAILED(65),
Migration failed  (VM: ovirt-test01.srv, Source: ov-test-04-03).
2019-04-08 12:57:40,314+02 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Lock freed to object
'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]',
sharedLocks=''}'
2019-04-08 12:57:40,314+02 ERROR
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Command 'org.ovirt.engine.core.bll.MigrateVmCommand' failed:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
java.lang.NullPointerException (Failed with error ENGINE and code 5001)
2019-04-08 12:57:40,314+02 ERROR
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Exception: javax.ejb.EJBException:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
java.lang.NullPointerException (Failed with error ENGINE and code 5001)
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HSU2OYTNMR3KU4MN2NV5IAP72G37TYH3/


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6ZXJDMYLTOZRAO2ZGPAXNIYG35A7HCTD/


[ovirt-users] oVirt and Ceph iSCSI: separating discovery auth / target auth ?

2019-04-15 Thread Matthias Leopold

Hi,

I'm trying to use the Ceph iSCSI gateway with oVirt.

According to my tests with oVirt 4.3.2
* you cannot separate iSCSI discovery auth and target auth
* you cannot use an iSCSI gateway that has no discovery auth, but uses 
CHAP for targets
This means I'm forced to use the same credentials for discovery auth and 
target auth.


In the Ceph iSCSI gateway I can have multiple targets which use 
different credentials, but I can define discovery auth only once for 
the whole gateway (or have no discovery auth) - see the sketch below.
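
For reference, in ceph-iscsi's gwcli this is a single setting at the 
iscsi-targets level; from memory it looks roughly like this (treat the 
exact syntax as approximate):

# gwcli
/> cd /iscsi-targets
/iscsi-targets> discovery_auth username=myiscsiusername password=myiscsipassword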


If all of this is correct and I want to use the Ceph iSCSI gateway with 
oVirt

* I have to use discovery auth
* the credentials for discovery auth will give every other Ceph iSCSI 
gateway user access to the oVirt target


This is not a desirable situation.
Have I misunderstood something? Are there other ways to solve this?

thx
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/IDXLXLROUG4JJ2CAKLHL6AQLYEJBYCCM/


[ovirt-users] Re: High Performance VM: trouble using vNUMA and hugepages

2019-06-14 Thread Matthias Leopold

https://bugzilla.redhat.com/show_bug.cgi?id=1720558

On 13.06.19 at 15:42, Andrej Krejcir wrote:

Hi,

this is probably a bug. Can you open a new ticket in Bugzilla?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

As a workaround, if you are sure that the VM's NUMA configuration is 
compatible with the host's NUMA configuration, you could create a custom 
cluster scheduling policy and disable the "NUMA" filter. In 
Administration -> Configure -> Scheduling Policies.



Regards,
Andrej


On Thu, 13 Jun 2019 at 12:49, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:

 > Hi,
 >
 > I'm having trouble using vNUMA and hugepages at the same time:
 >
 > - hypervisor host hast 2 CPU and 768G RAM
 > - hypervisor host is configured to allocate 512 1G hugepages
 > - VM configuration
 > * 2 virtual sockets, vCPUs are evenly pinned to 2 physical CPUs
 > * 512G RAM
 > * 2 vNUMA nodes that are pinned to the 2 host NUMA nodes
 > * custom property "hugepages=1048576"
 > - VM is the only VM on hypervisor host
 >
 > when I want to start the VM I'm getting the error message
 > "The host foo did not satisfy internal filter NUMA because cannot
 > accommodate memory of VM's pinned virtual NUMA nodes within host's
 > physical NUMA nodes"
 > VM start only works when VM memory is shrunk so that it fits in (host
 > memory - allocated huge pages)
 >
 > I don't understand why this happens. Can someone explain to me how this
 > is supposed to work?
 >
 > oVirt engine is 4.3.3
 > oVirt host is 4.3.4
 >
 > thanks
 > matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DTTUJE37ZYYP3JUWKDQARHXAXKOMS2DF/


[ovirt-users] vNUMA question

2019-06-12 Thread Matthias Leopold

Hi,

when I want to use vNUMA for VMs is it necessary that the number of VM 
virtual sockets corresponds to the number of vNUMA nodes?


concrete example:
hypervisor host has 2 physical cpus
VM has 2 vNUMA nodes and uses CPU pinning which distributes the VCPUs 
equally over both physical CPUs
is it necessary for the VM to have 2 virtual sockets, or is any 
power-of-2 socket count (e.g. 16) OK?
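
In libvirt terms: do I have to use <topology sockets='2' .../> together 
with the 2 <numa> <cell> elements, or would a sketch like the following 
(values made up) also be a valid combination:

<cpu>
  <topology sockets='16' cores='1' threads='1'/>
  <numa>
    <cell id='0' cpus='0-7' memory='8388608' unit='KiB'/>
    <cell id='1' cpus='8-15' memory='8388608' unit='KiB'/>
  </numa>
</cpu>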


thanks
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/24UIDDFUVXF5FCLOLV442U5TJ2I7FL56/


[ovirt-users] High Performance VM: trouble using vNUMA and hugepages

2019-06-13 Thread Matthias Leopold

Hi,

I'm having trouble using vNUMA and hugepages at the same time:

- hypervisor host has 2 CPUs and 768G RAM
- hypervisor host is configured to allocate 512 1G hugepages (see the 
note after this list)
- VM configuration
* 2 virtual sockets, vCPUs are evenly pinned to 2 physical CPUs
* 512G RAM
* 2 vNUMA nodes that are pinned to the 2 host NUMA nodes
* custom property "hugepages=1048576"
- VM is the only VM on hypervisor host
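
(Note on the hugepages above: the 1G pages are reserved at boot, e.g. 
via kernel command line parameters like "default_hugepagesz=1G 
hugepagesz=1G hugepages=512"; the exact mechanism shouldn't matter for 
the question.)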

when I want to start the VM I'm getting the error message
"The host foo did not satisfy internal filter NUMA because cannot 
accommodate memory of VM's pinned virtual NUMA nodes within host's 
physical NUMA nodes"
VM start only works when VM memory is shrunk so that it fits into (host 
memory - allocated hugepages), i.e. 768G - 512G = 256G in this case.


I don't understand why this happens. Can someone explain to me how this 
is supposed to work?


oVirt engine is 4.3.3
oVirt host is 4.3.4

thanks
matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5Z2LO2D3CI4LDJYVWT6Y6J53DX4JY3QX/


[ovirt-users] Cascade Lake CPU in oVirt

2019-05-01 Thread Matthias Leopold

Hi,

do I get it right that hypervisor hosts with Cascade Lake CPU could be 
used in oVirt, but would be recognized as "Skylake"? Cascade Lake 
specific features could only be used in VMs after CL is supported by 
libvirt (https://bugzilla.redhat.com/show_bug.cgi?id=1677209) and this 
libvirt is imported in oVirt? We're thinking about buying new hardware 
and I want to be sure it can be used...


thx
matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBY2VPBHVVYUU7JM66B5E7Q4GJZTICLG/


[ovirt-users] Re: oVirt and Ceph iSCSI: separating discovery auth / target auth ?

2019-05-01 Thread Matthias Leopold



On 18.04.19 at 17:16, Matthias Leopold wrote:



On 15.04.19 at 17:48, Matthias Leopold wrote:

Hi,

I'm trying to use the Ceph iSCSI gateway with oVirt.
According to my tests with oVirt 4.3.2


...

* you cannot use an iSCSI gateway that has no discovery auth, but uses 
CHAP for targets


This seems to be a problem of the Ceph iSCSI gateway only, I didn't see 
this with a FreeNAS iSCSI appliance. I'll turn to the Ceph folks.




if anybody is still interested please look at 
https://github.com/ceph/ceph-iscsi/issues/68


matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/65AJ5NTML7BKQTBZG5GEZ5EMMJMGMAE4/


[ovirt-users] Re: oVirt and Ceph iSCSI: separating discovery auth / target auth ?

2019-04-18 Thread Matthias Leopold



On 15.04.19 at 17:48, Matthias Leopold wrote:

Hi,

I'm trying to use the Ceph iSCSI gateway with oVirt.
According to my tests with oVirt 4.3.2


...

* you cannot use an iSCSI gateway that has no discovery auth, but uses 
CHAP for targets


This seems to be a problem of the Ceph iSCSI gateway only, I didn't see 
this with a FreeNAS iSCSI appliance. I'll turn to the Ceph folks.


matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GSPYJQRQ22XYZQUMATGDUDZPQRSJ3DZN/


[ovirt-users] Re: "Actual timezone in guest doesn't match configuration" for Windows VMs since guest agent 4.3

2019-07-12 Thread Matthias Leopold

On 09.07.19 at 19:27, Michal Skrivanek wrote:




On 8 Jul 2019, at 16:58, Matthias Leopold  
wrote:

Hi,

the oVirt guest agent seems to report DST configuration for the timezone since version 
4.3 (of the guest agent). this results in "Actual timezone in guest doesn't match 
configuration" messages in the UI for windows VMs because the timezone field can't 
be matched with oVirt configuration anymore (no DST flag). to me this looks like a bug. 
shall I report it?


Yes please. With more details if possible:) We started using qemu-ga for these 
and it’s possible that the report differs in this aspect. Then it needs to be 
fixed

Thanks,
michal



I know there was a similar thread at the beginning of May, but there was no 
solution mentioned.

Matthias


I live in Vienna.

When I configure the Windows VM Hardware Clock Time Offset as "Central 
European Standard Time" the problem goes away. I still don't know what 
this "W. Europe Standard Time (UTC+01:00)" timezone that Windows uses 
and reports, and that is selectable in oVirt, actually is; IMHO this is 
what causes the error (since "recently"...).


According to Wikipedia there's only "Western European Time" (UTC+00:00) 
and Vienna has "Central European Time" (UTC+01:00).


Timezones and DST are a PITA.

Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/T3CCNDHOUTYBQTF5K2KPICZ35DKKGHNM/


[ovirt-users] Re: "Actual timezone in guest doesn't match configuration" for Windows VMs since guest agent 4.3

2019-07-12 Thread Matthias Leopold



On 12.07.19 at 13:26, Matthias Leopold wrote:

On 09.07.19 at 19:27, Michal Skrivanek wrote:



On 8 Jul 2019, at 16:58, Matthias Leopold 
 wrote:


Hi,

the oVirt guest agent seems to report DST configuration for the 
timezone since version 4.3 (of the guest agent). this results in 
"Actual timezone in guest doesn't match configuration" messages in 
the UI for windows VMs because the timezone field can't be matched 
with oVirt configuration anymore (no DST flag). to me this looks like 
a bug. shall I report it?


Yes please. With more details if possible:) We started using qemu-ga 
for these and it’s possible that the report differs in this aspect. 
Then it needs to be fixed


Thanks,
michal



I know there was a similar thread at the beginning of May, but there 
was no solution mentioned.


Matthias


I live in Vienna.

When I configure the Windows VM Hardware Clock Time Offset as "Central 
European Standard Time" the problem goes away. I still don't know what 
this "W. Europe Standard Time (UTC+01:00)" timezone that Windows uses 
and reports, and that is selectable in oVirt, actually is; IMHO this is 
what causes the error (since "recently"...).


According to Wikipedia there's only "Western European Time" (UTC+00:00) 
and Vienna has "Central European Time" (UTC+01:00).


Timezones and DST are a PITA.




I'm sorry, I didn't want to be rude, but I'm a bit desperate with this...

Just when I thought I explained it I'm seeing a Windows 10 VM (Hardware 
Clock Time Offset: CET) that reports "W. Europe Daylight Time 
(UTC+02:00)" _again_ and the "doesn't match configuration" error is 
back. I thought I beat the "DST" reporting by setting VM "Hardware Clock 
Time Offset" correctly because I didn't see any effect of the Windows 
"Adjust for DST automatically" configuration on the reported timezone. 
This seemed to be true for Windows Server 2016 and Windows 7...


I'm giving up on this for now, I don't control all of the Windows VMs, 
maybe someone finds the missing part


All VMs are using oVirt Guest Tools 4.3-3.el7 (from oVirt 4.3.4) in 
oVirt engine 4.3.3


thanks
Matthias

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QUUGOM6G2VJGMAJHRIS7LLXVSGFJMZDF/


[ovirt-users] "Actual timezone in guest doesn't match configuration" for Windows VMs since guest agent 4.3

2019-07-08 Thread Matthias Leopold

Hi,

the oVirt guest agent seems to report DST configuration for the timezone 
since version 4.3 (of the guest agent). this results in "Actual timezone 
in guest doesn't match configuration" messages in the UI for windows VMs 
because the timezone field can't be matched with oVirt configuration 
anymore (no DST flag). to me this looks like a bug. shall I report it?


I know there was a similar thread at the beginning of May, but there was 
no solution mentioned.


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/F7QBHRUKETL77KM6LMDNHIT3UDFLG3ND/


[ovirt-users] Re: engine.log flooded with "Field 'foo' can not be updated when status is 'Up'"

2019-08-13 Thread Matthias Leopold
46c13-223f-4728-b988-5da34139aeb2] Field 'consoleEnabled' can not be 
updated when status is 'Up'
2019-08-13 11:44:39,494+01 WARN  
[org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) 
[0b246c13-223f-4728-b988-5da34139aeb2] Field 'virtioScsiEnabled' can not be 
updated when status is 'Up'
2019-08-13 11:44:39,495+01 WARN  
[org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) 
[0b246c13-223f-4728-b988-5da34139aeb2] Field 'graphicsDevices' can not be 
updated when status is 'Up'

____
From: Matthias Leopold 
Sent: 09 August 2019 19:11
To: users
Subject: [ovirt-users] engine.log flooded with "Field 'foo' can not be updated when 
status is 'Up'"

Hi,

I updated my production oVirt environment from 4.3.3 to 4.3.5 today.
Everything went fine so far, but there's one annoying phenomenon:

When I log into the "Administration Portal" and request the VM list
("/ovirt-engine/webadmin/?locale=en_US#vms") engine.log is flooded with
lines like

WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default
task-10618) [54d8c375-aa72-42f8-876e-8777d9d1a08a] Field
'balloonEnabled' can not be updated when status is 'Up'

"Field", task and UUID vary and the flood stops after a while. Also
listing or trying to edit other entities seems to trigger this "storm"
or loop over and over again to a point that log file size is becoming an
issue and interface is becoming sluggish. I can also see that CPU usage
of engine java process goes up. When I log out everything is quiet and
"VM Portal" is not affected at all.

I have seen lines like that before and know that they are usually OK
(when changing VM properties), but these logs used to be linked to
singular events. I suspect that the present behaviour might be linked to
VMs that have "Pending Virtual Machine changes", which are in most cases
"Custom Compatibility Version" changes that still stem from the upgrade
to Cluster Version 4.3. I can't be sure and I can't resolve all these
pending changes now, but these should not be causing such annoying
behaviour in the first place.

I resorted to setting engine log level to "ERROR" right now to at least
stop the log file from growing, but this is not a solution. I can still see
CPU load going up when using the interface. I very much hope that
someone can explain whats happening and tell me how to resolve this.

thanks a lot
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKFQRCKHRT6NJUHF7URJTQG753MND6PJ/
To view the terms under which this email is distributed, please go to:-
http://leedsbeckett.ac.uk/disclaimer/email/



--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XWJJL3ESCWCJR7XSTLOHD5NOPUD4NSSZ/


[ovirt-users] Re: engine.log flooded with "Field 'foo' can not be updated when status is 'Up'"

2019-08-17 Thread Matthias Leopold

https://bugzilla.redhat.com/show_bug.cgi?id=1742924

On 13.08.19 at 19:14, Sharon Gratch wrote:

Hi,
We checked that issue and found out that you are right: the extra 
logging lines are caused by the "next run configuration" improvements 
added to oVirt 4.3.5.


The current behaviour is that for each running VM that has a next run 
configuration, a warning line of "Field 'xxx' can not be updated when 
status is 'Up'" appears in the log, per VM device and whenever the VM 
list is refreshed.

This can definitely flood the engine log if there are a few such VMs.

Can you please file a bug on that?

Thanks,
Sharon



On Tue, Aug 13, 2019 at 5:01 PM Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:


Thanks for the feedback, I think my description was a bit clumsy,
but at
least someone confirms that he has seen this...
I still hope it's linked to unfinished VM "Custom Compatibility
Version"
updates, tomorrow I'll know, when I finally can do the last VM reboots.
My "Administration Portal" is still usable, but nevertheless I think
this is something the upstream developers should look into.

Regards
Matthias

On 13.08.19 at 12:48, Staniforth, Paul wrote:
 > Hello Mathias,
 >                          I also had this problem, the flood of
warning messages was most notably generated when showing all the 
running VMs from the admin portal dashboard; as we had 70 VMs running, 
this generated the following, but for all 70 VMs. I was able to restart 
most of the VMs, otherwise the admin portal became unusable.
 > I don't recall this being a problem upgrading from 4.1 to 4.2
 >
 > Regards
 >                  Paul S.
 >
 > 2019-08-13 11:44:36,771+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'customCompatibilityVersion' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,771+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'exportDate' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,771+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'managedDeviceMap' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,771+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'ovfVersion' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,772+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'balloonEnabled' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,772+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'watchdog' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,773+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'rngDevice' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,774+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'soundDeviceEnabled' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,774+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'consoleEnabled' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,775+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'virtioScsiEnabled' can not be updated when status is 'Up'
 > 2019-08-13 11:44:36,776+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [936d48c4-76c6-4363-85f0-7147923bbb2f] Field 'graphicsDevices' can not be updated when status is 'Up'
 > 2019-08-13 11:44:39,490+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [0b246c13-223f-4728-b988-5da34139aeb2] Field 'customCompatibilityVersion' can not be updated when status is 'Up'
 > 2019-08-13 11:44:39,490+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [0b246c13-223f-4728-b988-5da34139aeb2] Field 'exportDate' can not be updated when status is 'Up'
 > 2019-08-13 11:44:39,490+01 WARN [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default task-855) [0b246c13-223f-4728-b988-5da34139

[ovirt-users] Re: iSCSI Multipath/multiple gateways

2019-09-03 Thread Matthias Leopold


On 03.09.19 at 15:06, dan.poltaw...@tnp.net.uk wrote:

My iSCSI target (ceph-based) has multiple gateways. I’d like to ensure my 
hosted storage is aware of these, such that each gateway can be rebooted for 
maintenance without impacting service. What is the appropriate way to configure 
this so that each host knows of the multipath configuration? I do not have 
multiple network paths, just multiple gateways on the same network, so I think 
the multipath UI tool isn’t the way to do this.
  
Any help/docs pointers appreciated.


Regards,

  Dan


Hi,

I'm also planning to use a 2 node Ceph iSCSI gateway with oVirt. Right 
now I only connected it to my DEV oVirt environment, but everything 
works as expected (finally...), so I might be moving it to PROD soon.


My circumstances and findings:
- I'm only using it for customer VM storage, I don't have a hosted engine
- my 2 Ceph iSCSI gateway nodes are also on the same network (network 
connection redundancy is through LACP bonds for them)
- oVirt storage domain setup automatically finds both gateways when you 
"discover" one of them; you must log into both
- multipath configuration on hypervisor hosts is then automatically set 
up without further intervention. Be sure to add the multipath.conf 
snippet from 
https://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/ to your 
hypervisor hosts, so multipathd works correctly - I originally forgot 
to do this... (see the snippet quoted after this list)
- what is called "ISCSI Multipathing" in oVirt on DC level only creates 
iscsiadm "ifaces" (see man iscsiadm), which are not strictly necessary 
IMHO, but are of use in special networking situations. the logical 
networks you use there must not be "required" in Cluster "Logical 
networks" setup
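
For convenience, this is roughly what the snippet from the Ceph 
documentation looks like - quoting from memory, please check the linked 
page for the authoritative version:

devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}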


I hope this helps
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7I4W3RKKJAEDJ7GGVQTJZQDR65F6EUCW/


[ovirt-users] ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-07 Thread Matthias Leopold

Hi,

after upgrading to oVirt 4.3.5 yesterday (which also brought 
ovirt-web-ui-1.5.3) users are immediately logged out after login to the 
"VM Portal" with "You have been logged out due to inactivity" displayed 
in the browser. The "Administration Portal" works as expected.


This happens
- for existing UserRole users/new UserRole users/Administrator users
- with different browsers (Chrome/Firefox/IE)
- also when creating new browser profiles

Logs in engine.log are unsuspicious IMHO (see below for Administrator 
user login).

/var/log/ovirt-engine/ui.log is completely quiet.
Downgrade to 1.5.2 resolves the situation.
Reading about "Added check for inactivity during session and logout 
after expiration" in 1.5.3 changelog suggests that something might have 
gone wrong.

Has anybody seen this?
Shall I file a bug report?

thx
matthias

2019-08-07 12:51:45,556+02 INFO 
[org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-8) 
[] User admin@internal successfully logged in with scopes: 
ovirt-app-admin ovirt-app-api ovirt-app-portal 
ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all 
ovirt-ext=token-info:authz-search 
ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate 
ovirt-ext=token:password-access
2019-08-07 12:51:45,728+02 INFO 
[org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default 
task-8) [2cb53d8d] Running command: CreateUserSessionCommand internal: 
false.
2019-08-07 12:51:45,768+02 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-8) [2cb53d8d] EVENT_ID: USER_VDC_LOGIN(30), User 
admin@internal-authz connecting from 'xxx.yyy.zzz.63' using session 
'+DY5GdQK35zrApbt971Df0nACY2o5qpT0ebX7zFnYj/SNnJACyH7nKKd5iJSshJZZo0TgkJUoSixB7StGq10VA==' 
logged in.
2019-08-07 12:51:48,385+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] 
(default task-11) [92c5e178-324e-4407-a844-7d1cb67e71b0] START, 
GetFileStatsVDSCommand( 
GetFileStatsParameters:{storagePoolId='1285d24b-53d1-4b4d-bba4-4aa6264f0c4a', 
ignoreFailoverLimit='false'}), log id: cd6a0e4
2019-08-07 12:51:48,394+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetFileStatsVDSCommand] 
(default task-11) [92c5e178-324e-4407-a844-7d1cb67e71b0] FINISH, 
GetFileStatsVDSCommand, return: {grml64-full_2018.12.iso={status=0, 
ctime=1553615947.0, size=704905216}, 
CentOS-7-x86_64-Minimal-1810.iso={status=0, ctime=1555410499.0, 
size=962592768}}, log id: cd6a0e4
2019-08-07 12:51:48,419+02 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-11) [92c5e178-324e-4407-a844-7d1cb67e71b0] EVENT_ID: 
REFRESH_REPOSITORY_IMAGE_LIST_SUCCEEDED(998), Refresh image list 
succeeded for domain(s): ISOstar-DEV (All file type)
2019-08-07 12:51:49,478+02 INFO 
[org.ovirt.engine.core.bll.aaa.LogoutSessionCommand] (default task-8) 
[4f195583] Running command: LogoutSessionCommand internal: false.
2019-08-07 12:51:49,524+02 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-8) [4f195583] EVENT_ID: USER_VDC_LOGOUT(31), User 
admin@internal-authz connected from 'xxx.yyy.zzz.63' using session 
'+DY5GdQK35zrApbt971Df0nACY2o5qpT0ebX7zFnYj/SNnJACyH7nKKd5iJSshJZZo0TgkJUoSixB7StGq10VA==' 
logged out.
2019-08-07 12:51:49,581+02 INFO 
[org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default 
task-14) [] User admin@internal successfully logged out
2019-08-07 12:51:49,675+02 INFO 
[org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] 
(default task-11) [37daeccc] Running command: 
TerminateSessionsForTokenCommand internal: true.


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LARRDLNCXGLJPLTUB52JMMZMCJ3J6STA/


[ovirt-users] Re: ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-08 Thread Matthias Leopold



On 08.08.19 at 07:49, Scott Dickerson wrote:



On Wed, Aug 7, 2019 at 11:06 AM Sharon Gratch wrote:


Hi,
@Scott Dickerson, the session logout
issue for VM portal 1.5.3 was handled in the following PRs:
https://github.com/oVirt/ovirt-web-ui/pull/1014
https://github.com/oVirt/ovirt-web-ui/pull/1025

Any idea on what can be the problem?


That is very strange.  We saw a problem similar to that where, when 
web-ui is starting up, the time it took for the app to fetch the 
"UserSessionTimeOutInterval" config value was longer than the time it 
took to load the auto-logout component.  In that case the value was 
considered to be 0 and auto logged the user out right away.  That issue 
was dealt with in PR 1025 and the whole login data load process was 
synchronized properly in PR 1049.


I need some additonal info:
   - The browser console logs from when the page loads to when they're 
logged out

   - the "yum info ovirt-web-ui"

I'll be able to better triage the problem with that info.



Thanks to all for replies. I sent the requested info directly to Scott 
Dickerson.


Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLWLBT6KZUEEC76RCIP3QTMJOTDN4MUK/


[ovirt-users] Re: ovirt-web-ui-1.5.3: immediate logout in VM portal

2019-08-08 Thread Matthias Leopold

Thank you very much!

I indeed set this to "-1" in the past and forgot about it. Now 
everything works as expected.
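
For the record, changing it back boiled down to something like this 
(value in minutes, plus an engine restart; command quoted from memory):

# engine-config -s UserSessionTimeOutInterval=30
# systemctl restart ovirt-engine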


Matthias

On 08.08.19 at 16:13, Scott Dickerson wrote:

 From the browser console log:
"09:54:32.004  debug  http GET[7] -> url: 
"/ovirt-engine/api/options/UserSessionTimeOutInterval", headers: 
{"Accept":"application/json","Authorization":"*","Accept-Language":"en_US","Filter":true} 
transport.js:74:9"
"09:54:32.141  debug  Reducing action: 
{"type":"SET_USER_SESSION_TIMEOUT_INTERVAL","payload":{"userSessionTimeoutInterval":-1}} 
utils.js:48:13"


Your engine "UserSessionTimeOutInterval" is set to -1.  VM Portal is 
interpreting this as "auto-logout a second ago" instead of "do not 
auto-logout".


The simple fix is to set that value to something >0 in your engine configs.

I filed https://github.com/oVirt/ovirt-web-ui/issues/1085 to account for 
a -1 value properly in VM Portal.



On Thu, Aug 8, 2019 at 4:07 AM Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:




On 08.08.19 at 07:49, Scott Dickerson wrote:
 >
 >
 > On Wed, Aug 7, 2019 at 11:06 AM Sharon Gratch <sgra...@redhat.com> wrote:
 >
 >     Hi,
 >     @Scott Dickerson <sdick...@redhat.com>, the session logout
 >     issue for VM portal 1.5.3 was handled in the following PRs:
 > https://github.com/oVirt/ovirt-web-ui/pull/1014
 > https://github.com/oVirt/ovirt-web-ui/pull/1025
 >
 >     Any idea on what can be the problem?
 >
 >
 > That is very strange.  We saw a problem similar to that where, when
 > web-ui is starting up, the time it took for the app to fetch the
 > "UserSessionTimeOutInterval" config value was longer than the
time it
 > took to load the auto-logout component.  In that case the value was
 > considered to be 0 and auto logged the user out right away.  That
issue
 > was dealt with in PR 1025 and the whole login data load process was
 > synchronized properly in PR 1049.
 >
 > I need some additonal info:
 >    - The browser console logs from when the page loads to when
they're
 > logged out
 >    - the "yum info ovirt-web-ui"
 >
 > I'll be able to better triage the problem with that info.
 >

Thanks to all for replies. I sent the requested info directly to Scott
Dickerson.

Matthias



--
Scott Dickerson
Senior Software Engineer
RHV-M Engineering - UX Team
Red Hat, Inc


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UQMMGJMJ7AFJGJHSJ7HVOTUCLG6CV6QL/


[ovirt-users] engine.log flooded with "Field 'foo' can not be updated when status is 'Up'"

2019-08-09 Thread Matthias Leopold

Hi,

I updated my production oVirt environment from 4.3.3 to 4.3.5 today. 
Everything went fine so far, but there's one annoying phenomenon:


When I log into the "Administration Portal" and request the VM list 
("/ovirt-engine/webadmin/?locale=en_US#vms") engine.log is flooded with 
lines like


WARN  [org.ovirt.engine.core.utils.ObjectIdentityChecker] (default 
task-10618) [54d8c375-aa72-42f8-876e-8777d9d1a08a] Field 
'balloonEnabled' can not be updated when status is 'Up'


"Field", task and UUID vary and the flood stops after a while. Also 
listing or trying to edit other entities seems to trigger this "storm" 
or loop over and over again to a point that log file size is becoming an 
issue and interface is becoming sluggish. I can also see that CPU usage 
of engine java process goes up. When I log out everything is quiet and 
"VM Portal" is not affected at all.


I have seen lines like that before and know that they are usually OK 
(when changing VM properties), but these logs used to be linked to 
singular events. I suspect that the present behaviour might be linked to 
VMs that have "Pending Virtual Machine changes", which are in most cases 
"Custom Compatibility Version" changes that still stem from the upgrade 
to Cluster Version 4.3. I can't be sure and I can't resolve all these 
pending changes now, but these should not be causing such annoying 
behaviour in the first place.


I resorted to setting engine log level to "ERROR" right now to at least 
stop the log file from growing, but this is not a solution. I can still see 
CPU load going up when using the interface. I very much hope that 
someone can explain whats happening and tell me how to resolve this.


thanks a lot
Matthias
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKFQRCKHRT6NJUHF7URJTQG753MND6PJ/


[ovirt-users] Re: High Performance VM: trouble using vNUMA and hugepages

2019-06-14 Thread Matthias Leopold

Hi,

thanks, this sounds good to me (in the sense of: I didn't make an 
obvious mistake). I'll open a bug report ASAP, probably tomorrow.


Regards
Matthias

On 13.06.19 at 15:42, Andrej Krejcir wrote:

Hi,

this is probably a bug. Can you open a new ticket in Bugzilla?
https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine

As a workaround, if you are sure that the VM's NUMA configuration is 
compatible with the host's NUMA configuration, you could create a custom 
cluster scheduling policy and disable the "NUMA" filter. In 
Administration -> Configure -> Scheduling Policies.



Regards,
Andrej


On Thu, 13 Jun 2019 at 12:49, Matthias Leopold 
<matthias.leop...@meduniwien.ac.at> wrote:

 > Hi,
 >
 > I'm having trouble using vNUMA and hugepages at the same time:
 >
 > - hypervisor host hast 2 CPU and 768G RAM
 > - hypervisor host is configured to allocate 512 1G hugepages
 > - VM configuration
 > * 2 virtual sockets, vCPUs are evenly pinned to 2 physical CPUs
 > * 512G RAM
 > * 2 vNUMA nodes that are pinned to the 2 host NUMA nodes
 > * custom property "hugepages=1048576"
 > - VM is the only VM on hypervisor host
 >
 > when I want to start the VM I'm getting the error message
 > "The host foo did not satisfy internal filter NUMA because cannot
 > accommodate memory of VM's pinned virtual NUMA nodes within host's
 > physical NUMA nodes"
 > VM start only works when VM memory is shrunk so that it fits in (host
 > memory - allocated huge pages)
 >
 > I don't understand why this happens. Can someone explain to me how this
 > is supposed to work?
 >
 > oVirt engine is 4.3.3
 > oVirt host is 4.3.4
 >
 > thanks
 > matthias


--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 /Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FEHP3UCR7UYMLWHJQFLWOFOJTDK7B35X/

