[ovirt-users] libvirt crashes / update missing

2021-05-07 Thread Rik Theys
Hi,

On our hosts, we periodically see libvirt crashes as described in
https://access.redhat.com/solutions/5694061.

https://access.redhat.com/errata/RHSA-2021:1125 (issued April 7) is the
advisory that contains a fix for this issue.

This bug should be fixed in libvirt 6.6.0-13.2 in the "Advanced
Virtualization" repo. When I look at the
ovirt-4.4-advanced-virtualization repo on my hosts, it does not (yet)
contain this update.

Are updates from the Advanced Virtualization repo not automatically
rebuilt for CentOS 8.3?

On a CentOS Stream 8 machine, libvirt is version
libvirt-6.0.0-35.module_el8.5.0+746+bbd5d70c.src.rpm.

Installing centos-release-advanced-virtualization adds the Advanced
Virtualization repo file, which then also brings in the 6.6.0-13 version.

When I look at a CentOS 8 stream mirror, for example
http://mirror.kinamo.be/centos/8-stream/virt/x86_64/, I do see the virt
repos with a newer libvirt, but this does not seem to be the repo that the
centos-release-advanced-virtualization package is using?

What is the recommended setup to use to have this bug fix available?
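In the meantime, a quick way to see which configured repo (if any) already carries the fixed build — the repo ids below are assumptions taken from the names mentioned in this thread, so adjust them to your setup:

```shell
# Hedged sketch: inspect the virt-related repos on a host and the
# libvirt builds each one offers.
dnf repolist | grep -i virt

# All libvirt versions visible to dnf, with the repo they come from:
dnf --showduplicates list available libvirt

# Locate the exact build said to carry the fix from RHSA-2021:1125:
dnf repoquery --location 'libvirt-6.6.0-13.2*'
```

If the 6.6.0-13.2 build shows up only under a repo other than ovirt-4.4-advanced-virtualization, that would confirm the rebuild has not landed there yet.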

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3UVKERTV4QYUCOCLNPLUA3CA2Z65CT7G/


[ovirt-users] Re: 4.4.5 released? Fails to upgrade

2021-03-18 Thread Rik Theys
Hi Gianluca,

On 3/18/21 12:42 PM, Gianluca Cecchi wrote:
> On Thu, Mar 18, 2021 at 12:35 PM Rik Theys <rik.th...@esat.kuleuven.be> wrote:
>
> Hi,
>
> I'm confused: has 4.4.5 been released or did I pull in some
> intermediate
> version with known issues?
>
> Regards,
>
> Rik
>
>
> I see here the iso for the ng node in the standard location
> https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.5-2021031723/el8/
> <https://resources.ovirt.org/pub/ovirt-4.4/iso/ovirt-node-ng-installer/4.4.5-2021031723/el8/>
> and also the engine appliance rpm here
> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/x86_64/
> <https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/x86_64/>
> has name ovirt-engine-appliance-4.4-20210317223637.1.el8.x86_64.rpm
> so it seems somehow released but no announce yet.
>
The packages are the correct ones and 4.4.5 will be released today. It
seems I hit a bug on one of our engine machines.

https://bugzilla.redhat.com/show_bug.cgi?id=1940448

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OIX6FE7MOSLL65R4EQ6DCD5ORG32KIXT/


[ovirt-users] 4.4.5 released? Fails to upgrade

2021-03-18 Thread Rik Theys
Hi,

My systems pulled in 4.4.5 packages last night, so I assume oVirt 4.4.5
was released? The release notes page does not list the release and I
also did not see any announcement.

The packages are 4.4.5.10-1.el8.

I ran engine-setup and upgraded to this release, but the upgrade failed
due to a failure to update the database schema, and it seems the rollback
was not successful either, as my instance failed to start afterwards.

I've downgraded all packages and ran engine-setup --offline, which seems
to at least bring back my engine to a working state.

I'm confused: has 4.4.5 been released or did I pull in some intermediate
version with known issues?

Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VFNPB3GH4DK2TEVOSX3HR3H56FNLKBF6/


[ovirt-users] Re: storage use after storage live migration

2020-03-20 Thread Rik Theys
Hi,

On 3/19/20 5:29 PM, Strahil Nikolov wrote:
> On March 19, 2020 2:34:16 PM GMT+02:00, Rik Theys 
>  wrote:
>> Hi,
>>
>> We are in the process of migrating our VM's from one storage domain to
>> another. Both domains are FC storage.
>>
>> There are VM's with thin provisioned disks of 16G that currently only
>> occupy 3G according to the interface. When we live migrate the disks
>> (with the VM running), I can see that a snapshot is being taken and
>> removed afterwards.
>>
>> After the storage migration, the occupied disk space on the new storage
>> domain is 6G. Even for a VM that hardly has any writes. How can I
>> reclaim this space? I've powered down the VM and did a sparsify on the
>> disk but this doesn't seem to have any effect.
>>
>> When I do a storage migration of a VM with a thin provisioned disk that
>> is down during the migration, the used disk space does not increase.
>>
>> VM's with fully allocated disks also don't seem to exhibit this
>> behavior.
>>
>> My storage domain now also contains VM's with more occupied storage space
>> than the size of the disk?? There are no snapshots listed for those
>> disks. Is there a way to clean up this situation?
>>
>> Regards,
>>
>> Rik
> Are you sure that you zeroed out the empty space?
> I would enable the trim option for the VMs' disks, then run fstrim from the
> VM and finally try to sparsify?
I've enabled LVM discards and ran fstrim. It indicates blocks were
freed, but the disk size in oVirt has not been reduced.
> If it's Linux, you can do storage migration from the OS.

You mean add a disk from the new storage domain and use pvmove?

I'm not sure I've explained my issue well enough.

1. If I have a running VM with a 16G disk, of which oVirt tells me the
actual size is now 3G, and I live migrate it to my new storage domain,
the actual size grows to 7G.

2. If I do the same with a similar VM that is down, there is no increase
in size.

3. If I live migrate a VM with a preallocated disk, I notice that the
"actual size" increases during the migration (and the type switched to
thin provisioned), but the additional space is reduced again (and the
type switched back) after the migration finishes.

My question is how do I reduce the actual size of the disk again in
scenario 1? If the additional used space can be freed in scenario 3, why
not in scenario 1?
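For what it's worth, the allocated-vs-apparent distinction behind oVirt's "actual size" can be reproduced with a plain scratch file, outside oVirt entirely: freed blocks lower the allocated size only when the discard actually reaches the backing layer. A minimal sketch (assuming a filesystem with sparse-file and hole-punching support, e.g. ext4 or xfs):

```shell
# Write 8 MiB, then punch a 4 MiB hole: the allocated size (du) drops,
# while the apparent size (stat) does not. On block-backed thin LVs the
# allocated side only shrinks if every layer passes the discard down.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 conv=notrunc status=none
sync
du -k "$f"                       # allocated: roughly 8192 KiB
fallocate --punch-hole --offset 0 --length $((4 * 1024 * 1024)) "$f"
du -k "$f"                       # allocated: roughly 4096 KiB
stat -c '%s' "$f"                # apparent size: still 8388608 bytes
rm -f "$f"
```

So if fstrim in the guest reports freed blocks but the oVirt-reported size stays high, the discard is presumably being dropped somewhere between the guest block layer and the storage domain.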

Regards,

Rik



>
> Best Regards,
> Strahil Nikolov

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5EKKDPV4RQDJFU64DTHJAEVB6KUX6OHQ/


[ovirt-users] Import storage domain with different storage type?

2020-03-19 Thread Rik Theys
Hi,

We have an oVirt environment with a FC storage domain. Multiple LUNs on
a SAN are exported to the oVirt nodes and combined in a single FC
storage domain.

The SAN replicates the disks to another storage box that has iSCSI
connectivity.

Is it possible to - in case of disaster - import the existing,
replicated, storage domain as an iSCSI domain and import/run the VM's
from that domain? Or is import of a storage domain only possible if they
are the same type? Does it also work if multiple LUNs are needed to form
the storage domain?

Are there any special actions that should be performed beyond the
regular import action?

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YQZLKJYPMYUXO42FWEIBTRWHZQFRQEYZ/


[ovirt-users] storage use after storage live migration

2020-03-19 Thread Rik Theys
Hi,

We are in the process of migrating our VM's from one storage domain to
another. Both domains are FC storage.

There are VM's with thin provisioned disks of 16G that currently only
occupy 3G according to the interface. When we live migrate the disks
(with the VM running), I can see that a snapshot is being taken and
removed afterwards.

After the storage migration, the occupied disk space on the new storage
domain is 6G. Even for a VM that hardly has any writes. How can I
reclaim this space? I've powered down the VM and did a sparsify on the
disk but this doesn't seem to have any effect.

When I do a storage migration of a VM with a thin provisioned disk that
is down during the migration, the used disk space does not increase.

VM's with fully allocated disks also don't seem to exhibit this behavior.

My storage domain now also contains VM's with more occupied storage space
than the size of the disk?? There are no snapshots listed for those
disks. Is there a way to clean up this situation?

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWXGAJ7JMLM67NFZRXEQGNNT265YV26R/


[ovirt-users] Storage performance comparison (gluster vs FC)

2020-02-26 Thread Rik Theys
Hi,

We currently use oVirt on two hosts that connect to a shared storage
using SAS. In oVirt this is a "FC" storage domain. Since the warranty
on the storage box is ending, we are looking at alternatives.

One of the options would be to use gluster and use a "hyperconverged"
setup where compute and gluster are on the same hosts. We would probably
end up with 3 hosts and a "replica 3 arbiter 1" gluster volume. (Or is
another volume type more recommended for this type of setup?)

I was wondering what the expected performance would be of this type of
setup compared to a shared storage over FC. I expect the I/O latency of
gluster to be much higher than the latency of the SAS-connected storage
box? Has anybody compared these storage setups?
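One way to put numbers on that expectation is a small synchronous-write latency test run against each candidate backend (the path below is a placeholder for a test file on the storage under test; this is only a sketch of one workload, not a full benchmark):

```shell
# Single-threaded, queue-depth-1, fsync'd 4k random writes -- roughly
# the pattern where gluster's per-replica network round-trips hurt most.
# Compare the "clat" latency percentiles between the two backends.
fio --name=sync-write-lat \
    --filename=/path/on/candidate/storage/testfile --size=1G \
    --rw=randwrite --bs=4k --iodepth=1 --direct=1 --fsync=1 \
    --ioengine=libaio --runtime=60 --time_based
```

Running the same job from inside a test VM on each storage domain would capture the full virtualization stack rather than the raw backend.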

Regards,

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G6HQF3N2PJRQY6T27LEKR3ZFMO62MUFC/


[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-30 Thread Rik Theys

Hi,

On 9/29/19 12:54 AM, Nir Soffer wrote:
On Sat, Sep 28, 2019 at 11:04 PM Rik Theys <rik.th...@esat.kuleuven.be> wrote:


Hi Nir,

Thank you for your time.

On 9/27/19 4:27 PM, Nir Soffer wrote:



On Fri, Sep 27, 2019, 12:37 Rik Theys <rik.th...@esat.kuleuven.be> wrote:

Hi,

After upgrading to 4.3.6, my storage domain can no longer be
activated, rendering my data center useless.

My storage domain is local storage on a filesystem backed by
VDO/LVM. It seems 4.3.6 has added support for 4k storage.
My VDO does not have the 'emulate512' flag set.


This configuration is not supported before 4.3.6. Various
operations may fail when
reading or writing to storage.

I was not aware of this when I set it up as I did not expect this
to influence a setup where oVirt uses local storage (a file system
location).


4.3.6 detects the storage block size, creates compatible storage
domain metadata, and
considers the block size when accessing storage.

I've tried downgrading all packages on the host to the
previous versions (with ioprocess 1.2), but this does not
seem to make any difference.


Downgrading should solve your issue, but without any logs we only
guess.


I was able to work around my issue by downgrading to ioprocess 1.1
(and vdsm-4.30.24). Downgrading to only 1.2 did not solve my
issue. With ioprocess downgraded to 1.1, I did not have to
downgrade the engine (still on 4.3.6).

ioprocess 1.1 is not recommended, you really want to use 1.3.0.

I think I now have a better understanding what happened that
triggered this.

During a nightly yum-cron, the ioprocess and vdsm packages on the
host were upgraded to 1.3 and vdsm 4.30.33. At this point, the
engine log started to log:

2019-09-27 03:40:27,472+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc]
Executing with domain map: {6bdf1a0d-274b-4195-8f
f5-a5c002ea1a77=active}
2019-09-27 03:40:27,646+02 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc]
Unexpected return value: Status [code=348, message=Block size does
not match storage block size: 'block_size=512,
storage_block_size=4096']

This means that when activating the storage domain, vdsm detected that
the storage block size is 4k, but the domain metadata reports a block
size of 512.

This combination may partly work for a localfs domain since we don't use
sanlock with local storage, and vdsm does not use direct I/O when
writing to storage, and always uses 4k block size when reading metadata
from storage.

Note that with older ovirt-imageio < 1.5.2, image uploads and downloads
may fail when using 4k storage.

In recent ovirt-imageio we detect and use the correct block size.

2019-09-27 03:40:27,646+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH,
ConnectStoragePoolVDSCommand, return: , log id: 483c7a17

I did not notice at first that this was a storage related issue
and assumed it may get resolved by also upgrading the engine. So
in the morning I upgraded the engine to 4.3.6 but this did not
resolve my issue.

I then found the above error in the engine log. In the release
notes of 4.3.6 I read about the 4k support.

I then downgraded ioprocess (and vdsm) to ioprocess 1.2 but that
did also not solve my issue. This is when I contacted the list
with my question.

Afterwards I found in the ioprocess rpm changelog that (partial?)
4k support was also in 1.2. I kept on downgrading until I got
ioprocess 1.1 (without 4k support) and at this point I could
re-attach my storage domain.

You mention above that 4.3.6 will detect the block size and
configure the metadata on the storage domain? I've checked the
dom_md/metadata file and it shows:

ALIGNMENT=1048576
*BLOCK_SIZE=512*
CLASS=Data
DESCRIPTION=studvirt1-Local
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=studvirt1-Local
POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
REMOTE_PATH=/data/images
ROLE=Master
SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
TYPE=LOCALFS
VERSION=5
_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66

So you have a v5 localfs storage domain - because we don't use leases,
this domain should work with 4.3.6 if you modify this line in the domain
metadata:

BLOCK_SIZE=4096

To modify the line, you have to delete the che
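The message is cut off in the archive here, but if the advice is to drop the checksum line so it can be regenerated, the edit might look roughly like this. This is only a sketch: the mount path and the checksum handling are assumptions, the domain should be in maintenance, and the procedure should be verified against your vdsm version before touching anything:

```shell
# Hedged sketch, not a verified procedure. $md points at the dom_md
# metadata file of the local storage domain (path assumed from the
# REMOTE_PATH and SDUUID values shown above).
md=/rhev/data-center/mnt/_data_images/6bdf1a0d-274b-4195-8ff5-a5c002ea1a77/dom_md/metadata
cp "$md" "$md.bak"
# Change the block size and drop the checksum line in one pass:
sed -i -e 's/^BLOCK_SIZE=512$/BLOCK_SIZE=4096/' \
       -e '/^_SHA_CKSUM=/d' "$md"
```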

[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-28 Thread Rik Theys

Hi Nir,

Thank you for your time.

On 9/27/19 4:27 PM, Nir Soffer wrote:



On Fri, Sep 27, 2019, 12:37 Rik Theys <rik.th...@esat.kuleuven.be> wrote:


Hi,

After upgrading to 4.3.6, my storage domain can no longer be
activated, rendering my data center useless.

My storage domain is local storage on a filesystem backed by
VDO/LVM. It seems 4.3.6 has added support for 4k storage.
My VDO does not have the 'emulate512' flag set.


This configuration is not supported before 4.3.6. Various operations
may fail when reading or writing to storage.

I was not aware of this when I set it up as I did not expect this to
influence a setup where oVirt uses local storage (a file system location).

4.3.6 detects the storage block size, creates compatible storage domain
metadata, and considers the block size when accessing storage.

I've tried downgrading all packages on the host to the previous
versions (with ioprocess 1.2), but this does not seem to make any
difference.


Downgrading should solve your issue, but without any logs we only guess.


I was able to work around my issue by downgrading to ioprocess 1.1 (and 
vdsm-4.30.24). Downgrading to only 1.2 did not solve my issue. With 
ioprocess downgraded to 1.1, I did not have to downgrade the engine 
(still on 4.3.6).


I think I now have a better understanding what happened that triggered this.

During a nightly yum-cron, the ioprocess and vdsm packages on the host 
were upgraded to 1.3 and vdsm 4.30.33. At this point, the engine log 
started to log:


2019-09-27 03:40:27,472+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Executing with 
domain map: {6bdf1a0d-274b-4195-8f

f5-a5c002ea1a77=active}
2019-09-27 03:40:27,646+02 WARN 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] Unexpected 
return value: Status [code=348, message=Block size does not match 
storage block size: 'block_size=512, storage_block_size=4096']
2019-09-27 03:40:27,646+02 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-384418) [695f38cc] FINISH, 
ConnectStoragePoolVDSCommand, return: , log id: 483c7a17


I did not notice at first that this was a storage related issue and 
assumed it may get resolved by also upgrading the engine. So in the 
morning I upgraded the engine to 4.3.6 but this did not resolve my issue.


I then found the above error in the engine log. In the release notes of 
4.3.6 I read about the 4k support.


I then downgraded ioprocess (and vdsm) to ioprocess 1.2 but that did 
also not solve my issue. This is when I contacted the list with my question.


Afterwards I found in the ioprocess rpm changelog that (partial?) 4k 
support was also in 1.2. I kept on downgrading until I got ioprocess 1.1 
(without 4k support) and at this point I could re-attach my storage domain.


You mention above that 4.3.6 will detect the block size and configure 
the metadata on the storage domain? I've checked the dom_md/metadata 
file and it shows:


ALIGNMENT=1048576
*BLOCK_SIZE=512*
CLASS=Data
DESCRIPTION=studvirt1-Local
IOOPTIMEOUTSEC=10
LEASERETRIES=3
LEASETIMESEC=60
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=1
POOL_DESCRIPTION=studvirt1-Local
POOL_DOMAINS=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77:Active
POOL_SPM_ID=-1
POOL_SPM_LVER=-1
POOL_UUID=085f02e8-c3b4-4cef-a35c-e357a86eec0c
REMOTE_PATH=/data/images
ROLE=Master
SDUUID=6bdf1a0d-274b-4195-8ff5-a5c002ea1a77
TYPE=LOCALFS
VERSION=5
_SHA_CKSUM=9dde06bbc9f2316efc141565738ff32037b1ff66

I assume that at this point it works because ioprocess 1.1 does not 
report the block size to the engine (as it doesn't support this option?)?


Can I update the storage domain metadata manually to report 4096 instead?

I also noticed that the storage_domain_static table has the block_size 
stored. Should I update this field at the same time as I update the 
metadata file?
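Should updating that column turn out to be needed as well, the update itself would presumably look something like the sketch below — the table and column names come from the paragraph above, but the database name, access method and whether this is safe at all are guesses, so it would need confirmation (and the engine stopped plus a DB backup) first:

```shell
# Hedged sketch: assumes the default "engine" database and local
# postgres access; the UUID is the SDUUID from the metadata listing.
sudo -u postgres psql engine -c \
  "UPDATE storage_domain_static SET block_size = 4096
     WHERE id = '6bdf1a0d-274b-4195-8ff5-a5c002ea1a77';"
```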


If the engine log and database dump is still needed to better understand 
the issue, I will send it on Monday.


Regards,

Rik



Should I also downgrade the engine to 4.3.5 to get this to work
again. I expected the downgrade of the host to be sufficient.

As an alternative I guess I could enable the emulate512 flag on
VDO but I can not find how to do this on an existing VDO volume.
Is this possible?


Please share more data so we can understand the failure:

- complete vdsm log showing the failure to activate the domain
- with 4.3.6
- with 4.3.5 (after you downgraded)
- contents of /rhev/data-center/mnt/_/domain-uuid/dom_md/metadata
  (assuming your local domain mount is /domaindir)
- engine db dump

Nir


Regards,
Rik


On 9/26/19 4:58 PM, Sandro Bonazzola wrote:


The oVirt Project is pleased to announce the general availability
of oVirt 4.3.6 as of Septe

[ovirt-users] Re: [ANN] oVirt 4.3.6 is now generally available

2019-09-27 Thread Rik Theys
   QEMU KVM EV2.12.0-33.1 :
https://cbs.centos.org/koji/buildinfo?buildID=26484


Given the amount of security fixes provided by this release, upgrade 
is recommended as soon as practical.



Additional Resources:

* Read more about the oVirt 4.3.6 release highlights:
http://www.ovirt.org/release/4.3.6/


* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.3.6/

[2] http://resources.ovirt.org/pub/ovirt-4.3/iso/

--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com

*Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours. 
<https://mojo.redhat.com/docs/DOC-1199578>*


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AY66CEQHHYOVBWAQQYYSPEG5DXEIUAAT/



--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JPIYWV2OUNLNHUY6EU7YZI2RYFW2SW5L/


[ovirt-users] Re: oVirt and CentOS Stream

2019-09-27 Thread Rik Theys

Hi Sandro,

I could not find anything regarding security support for CentOS Stream. 
Will the updated packages in CentOS Stream receive the same security 
support as regular RHEL/CentOS?


Regards,
Rik

On 9/27/19 9:10 AM, Sandro Bonazzola wrote:



Il giorno gio 26 set 2019 alle ore 17:29 Strahil 
mailto:hunter86...@yahoo.com>> ha scritto:


Should I understand that the most tested platform will be CentOS
Stream 8 ?


We expect CentOS Stream 8 to become the platform used to develop oVirt,
so we expect it to be the most tested during development.


Will Fedora & CentOS 8 still be viable options?

Best Regards,
Strahil Nikolov



Since CentOS Stream will be upstream to CentOS Linux, CentOS Linux
should still be a viable option.
Please note that at oVirt GA time CentOS Linux may be missing some
packages or features that will only be included in the next CentOS
Linux, so staying on CentOS Linux may mean you'll need to wait to
upgrade to the latest oVirt until the next CentOS Linux goes GA.

Details about exact flow are still under review.

About Fedora, there's no plan to change our support policy for it, we 
are going to work with it as usual, trying to support it as best 
effort / tech preview.


Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com

*Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.*


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BMQ6HRH572IWFHFHZPGRZNC2ZPTJ3I55/



--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4XR3GEH4FHSE7ZZJTKYPNRTPYO7C6UIG/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi,

Things are going from bad to worse it seems.

I've created a new VM to be used as a template and installed it with
CentOS 7. I've created a template of this VM and created a new pool
based on this template.

When I try to boot one of the VM's from the pool, it fails and logs the
following error:

2019-05-17 14:48:01,709+0200 ERROR (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') The vm start process
failed (vm:937)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 866, in
_startUnderlyingVm
    self._run()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2861, in
_run
    dom = self._connection.defineXML(self._domain.xml)
  File
"/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line
94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3743, in
defineXML
    if ret is None:raise libvirtError('virDomainDefineXML() failed',
conn=self)
libvirtError: XML error: requested USB port 3 not present on USB bus 0
2019-05-17 14:48:01,709+0200 INFO  (vm/f7da02e4) [virt.vm]
(vmId='f7da02e4-725c-4c6c-bdd4-9f2cae8b10e4') Changed state to Down: XML
error: requested USB port 3 not present on USB bus 0 (code=1) (vm:1675)

Strange thing is that this error was not present when I created the
initial master VM.

I get similar errors when I select Q35 type VM's instead of the default.

Did your test pool have VM's with USB enabled?

Regards,

Rik

On 5/17/19 10:48 AM, Rik Theys wrote:
>
> Hi Lucie,
>
> On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>>
>> Hi Rik,
>>
>> On 5/14/19 2:21 PM, Rik Theys wrote:
>>>
>>> Hi,
>>>
>>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>>> anybody else also experiencing this issue?
>>>
>> I've tried to reproduce this issue. And I can use pool VMs as
>> expected, no problem. I've tested clean install and also upgrade from
>> 4.2.8.7.
>> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
>> ovirt-web-ui-1.5.2-1.el7ev.noarch 
> That is strange. I will try to create a new pool to verify if I also
> have the problem with the new pool. Currently we are having this issue
> with two different pools. Both pools were created in August or
> September of last year. I believe it was on 4.2 but could still have
> been 4.1.
>>>
>>> Only a single instance from a pool can be used. Afterwards the pool
>>> becomes unusable due to a lock not being released. Once ovirt-engine
>>> is restarted, another (single) VM from a pool can be used.
>>>
>> What users are running the VMs? What are the permissions?
>
> The users are taking VM's from the pools using the user portal. They
> are all members of a group that has the UserRole permission on the pools.
>
>> Each VM is running by other user? Were already some VMs running
>> before the upgrade?
>
> A user can take at most 1 VM from each pool. So it's possible a user
> has two VM's running (but not from the same pool). It doesn't matter
> which user is taking a VM from the pool. Once a user has taken a VM
> from the pool, no other user can take a VM. If the user that was able
> to take a VM powers it down and tries to run a new VM, it will also fail.
>
> During the upgrade of the host, no VM's were running.
>
>> Please provide exact steps. 
>
> 1. ovirt-engine is restarted.
>
> 2. User A takes a VM from the pool and can work.
>
> 3. User B can not take a VM from that pool.
>
> 4. User A powers off the VM it was using. Once the VM is down, (s)he
> tries to take a new VM, which also fails now.
>
> It seems the VM pool is locked when the first user takes a VM and the
> lock is never released.
>
> In our case, there are no prestarted VM's. I can try to see if that
> makes a difference.
>
>
> Are there any more steps I can take to debug this issue regarding the
> locks?
>
> Regards,
>
> Rik
>
>>> I've added my findings to bug 1462236, but I'm no longer sure the
>>> issue is the same as the one initially reported.
>>>
>>> When the first VM of a pool is started:
>>>
>>> 2019-05-14 13:26:46,058+02 INFO  
>>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>>> IsVmDuringInitiatingVDSCommand( 
>>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>>  log id: 2fb4f7f5
>>> 2019-05-14 

[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Gianluca,

We are not using gluster, but FC storage.

All VM's from the pool are created from a template.

Regards,

Rik

On 5/16/19 6:48 PM, Gianluca Cecchi wrote:
> On Thu, May 16, 2019 at 6:32 PM Lucie Leistnerova <lleis...@redhat.com> wrote:
>
> Hi Rik,
>
>     On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3.
>> Is anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade
> from 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch
>>
>> Only a single instance from a pool can be used. Afterwards the
>> pool becomes unusable due to a lock not being released. Once
>> ovirt-engine is restarted, another (single) VM from a pool can be
>> used.
>>
> What users are running the VMs? What are the permissions?
> Each VM is running by other user? Were already some VMs running
> before the upgrade?
> Please provide exact steps.
>>
>>
> Hi, just an idea... could it be related in any way to the "disks always
> created as preallocated" problems reported by users using gluster as
> backend storage?
> What kind of storage domains are you using Rik?
>
> Gianluca 

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YU4VIWOFBN4MB4FDOHQCUIUSEHNW2TV/


[ovirt-users] Re: VM pools broken in 4.3

2019-05-17 Thread Rik Theys
Hi Lucie,

On 5/16/19 6:27 PM, Lucie Leistnerova wrote:
>
> Hi Rik,
>
> On 5/14/19 2:21 PM, Rik Theys wrote:
>>
>> Hi,
>>
>> It seems VM pools are completely broken since our upgrade to 4.3. Is
>> anybody else also experiencing this issue?
>>
> I've tried to reproduce this issue. And I can use pool VMs as
> expected, no problem. I've tested clean install and also upgrade from
> 4.2.8.7.
> Version: ovirt-engine-4.3.3.7-0.1.el7.noarch with
> ovirt-web-ui-1.5.2-1.el7ev.noarch 
That is strange. I will try to create a new pool to verify if I also
have the problem with the new pool. Currently we are having this issue
with two different pools. Both pools were created in August or September
of last year. I believe it was on 4.2 but could still have been 4.1.
>>
>> Only a single instance from a pool can be used. Afterwards the pool
>> becomes unusable due to a lock not being released. Once ovirt-engine
>> is restarted, another (single) VM from a pool can be used.
>>
> What users are running the VMs? What are the permissions?

The users are taking VM's from the pools using the user portal. They are
all members of a group that has the UserRole permission on the pools.

> Each VM is running by other user? Were already some VMs running before
> the upgrade?

A user can take at most 1 VM from each pool. So it's possible a user has
two VM's running (but not from the same pool). It doesn't matter which
user is taking a VM from the pool. Once a user has taken a VM from the
pool, no other user can take a VM. If the user that was able to take a
VM powers it down and tries to run a new VM, it will also fail.

During the upgrade of the host, no VM's were running.

> Please provide exact steps. 

1. ovirt-engine is restarted.

2. User A takes a VM from the pool and can work.

3. User B can not take a VM from that pool.

4. User A powers off the VM it was using. Once the VM is down, (s)he
tries to take a new VM, which also fails now.

It seems the VM pool is locked when the first user takes a VM and the
lock is never released.
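To make the suspected failure mode concrete, here is a minimal sketch in plain Python. This is purely illustrative and not oVirt's actual engine code (the class and method names are made up): a per-pool lock that is taken on allocation but never freed behaves exactly as in steps 1-4 above, while a variant that always releases the lock does not.

```python
import threading

# Hypothetical sketch of the failure mode described above, NOT oVirt's
# actual engine code: a per-pool lock that is taken when a user claims a
# VM but never released, versus a variant that releases it correctly.
class VmPool:
    def __init__(self, size):
        self._lock = threading.Lock()
        self.free = size

    def take_vm_buggy(self):
        # Buggy variant: the lock is acquired and kept forever, so every
        # later allocation fails until the whole process ("engine") restarts.
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("Cannot allocate and run VM from VM-Pool")
        self.free -= 1
        return "vm-%d" % self.free

    def take_vm_fixed(self):
        # Fixed variant: the lock protects only the allocation itself and
        # is always released, so the next user can take the next VM.
        with self._lock:
            if self.free == 0:
                raise RuntimeError("Cannot allocate and run VM from VM-Pool")
            self.free -= 1
            return "vm-%d" % self.free
```

In the buggy variant only the very first allocation ever succeeds, which matches what we observe: one VM per pool until ovirt-engine is restarted.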

In our case, there are no prestarted VM's. I can try to see if that
makes a difference.


Are there any more steps I can take to debug this issue regarding the locks?

Regards,

Rik

>> I've added my findings to bug 1462236, but I'm no longer sure the
>> issue is the same as the one initially reported.
>>
>> When the first VM of a pool is started:
>>
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] START, 
>> IsVmDuringInitiatingVDSCommand( 
>> IsVmDuringInitiatingVDSCommandParameters:{vmId='d8a99676-d520-425e-9974-1b1efe6da8a5'}),
>>  log id: 2fb4f7f5
>> 2019-05-14 13:26:46,058+02 INFO  
>> [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] FINISH, 
>> IsVmDuringInitiatingVDSCommand, return: false, log id: 2fb4f7f5
>> 2019-05-14 13:26:46,208+02 INFO  [org.ovirt.engine.core.bll.VmPoolHandler] 
>> (default task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to 
>> object 
>> 'EngineLock:{exclusiveLocks='[d8a99676-d520-425e-9974-1b1efe6da8a5=VM]', 
>> sharedLocks=''}'
>>
>> -> it has acquired a lock (lock1)
>>
>> 2019-05-14 13:26:46,247+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Lock Acquired to object 
>> 'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
>>  sharedLocks=''}'
>>
>> -> it has acquired another lock (lock2)
>>
>> 2019-05-14 13:26:46,352+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: 
>> AttachUserToVmFromPoolAndRunCommand internal: false. Entities affected :  
>> ID: 4c622213-e5f4-4032-8639-643174b698cc Type: VmPoolAction group 
>> VM_POOL_BASIC_OPERATIONS with role type USER
>> 2019-05-14 13:26:46,393+02 INFO  
>> [org.ovirt.engine.core.bll.AddPermissionCommand] (default task-6) 
>> [e3c5745c-e593-4aed-ba67-b173808140e8] Running command: AddPermissionCommand 
>> internal: true. Entities affected :  ID: 
>> d8a99676-d520-425e-9974-1b1efe6da8a5 Type: VMAction group 
>> MANIPULATE_PERMISSIONS with role type USER
>> 2019-05-14 13:26:46,433+02 INFO  
>> [org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
>> task-6) [e3c5745c-e593-4aed-ba67-b173808140e8] Succeeded giving user 
>> 'a5bed59c-d2fe-4fe4-bff7-52efe089ebd6' permission to Vm 
>&

[ovirt-users] VM pools broken in 4.3

2019-05-14 Thread Rik Theys
4-bff7-52efe089ebd6=USER_VM_POOL]',
 sharedLocks=''}'
2019-05-14 13:49:32,700+02 WARN  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-11) [55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Validation of action 
'AttachUserToVmFromPoolAndRun' failed for user u0045...@esat.kuleuven.be-authz 
<mailto:u0045...@esat.kuleuven.be-authz>. Reasons: 
VAR__ACTION__ALLOCATE_AND_RUN,VAR__TYPE__VM_FROM_VM_POOL,ACTION_TYPE_FAILED_NO_AVAILABLE_POOL_VMS
2019-05-14 13:49:32,700+02 INFO  
[org.ovirt.engine.core.bll.AttachUserToVmFromPoolAndRunCommand] (default 
task-11) [55cc0796-4f53-49cd-8739-3b7e7dd2d95b] Lock freed to object 
'EngineLock:{exclusiveLocks='[a5bed59c-d2fe-4fe4-bff7-52efe089ebd6=USER_VM_POOL]',
 sharedLocks=''}'
2019-05-14 13:49:32,706+02 ERROR 
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default 
task-11) [] Operation Failed: [Cannot allocate and run VM from VM-Pool. There 
are no available VMs in the VM-Pool.]


Regards,
Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3IASEV7U7DIDVHAGAR2E2WQVFTCFH7QU/


[ovirt-users] USB3 redirection

2017-12-20 Thread Rik Theys
Hi,

I'm trying to assign a USB3 controller to a CentOS 7.4 VM in oVirt 4.1
with USB redirection enabled.

I've created the following file in /etc/ovirt-engine/osinfo.conf.d:

01-usb.properties with content

os.other.devices.usb.controller.value = nec-xhci

and have restarted ovirt-engine.

If I disable USB-support in the web interface for the VM, the xhci
controller is added to the VM (I can see it in the qemu-kvm
commandline), but usb redirection is not available.

If I enable USB-support in the UI, no xhci controller is added (only 4
uhci controllers).

Is there a way to make the controllers for usb redirection xhci controllers?
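For reference, when the osinfo override takes effect, the libvirt domain XML for the VM should contain an xHCI controller entry roughly like the following (a sketch; the index and PCI address are illustrative and assigned by libvirt):

```xml
<controller type='usb' index='0' model='nec-xhci'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
```

You can inspect the running definition on the host with `virsh -r dumpxml <vm-name>` to see which controller models were actually assigned.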

Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
Hi,

On 08/31/2016 02:04 PM, Nir Soffer wrote:
> On Wed, Aug 31, 2016 at 2:30 PM, Rik Theys <rik.th...@esat.kuleuven.be> wrote:
>> Hi,
>>
>> On 08/31/2016 11:51 AM, Nir Soffer wrote:
>>> On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <rik.th...@esat.kuleuven.be> 
>>> wrote:
>>>> On 08/31/2016 09:43 AM, Rik Theys wrote:
>>>>> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>>>>>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <rik.th...@esat.kuleuven.be> 
>>>>>> wrote:
>>>>>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>>>>>> thin_check is run on the thin pool devices of one of the VM's on which
>>>>>>> the disk is assigned to.
>>>>>>>
>>>>>>> That seems strange to me. I would expect the host to stay clear of any
>>>>>>> VM disks.
>>>>>>
>>>>>> We expect the same thing, but unfortunately systemd and lvm try to
>>>>>> auto activate stuff. This may be good idea for desktop system, but
>>>>>> probably bad idea for a server and in particular a hypervisor.
>>>>>>
>>>>>> We don't have a solution yet, but you can try these:
>>>>>>
>>>>>> 1. disable lvmetad service
>>>>>>
>>>>>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>>>>>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>>>>>
>>>>>> Edit /etc/lvm/lvm.conf:
>>>>>>
>>>>>> use_lvmetad = 0
>>>>>>
>>>>>> 2. disable lvm auto activation
>>>>>>
>>>>>> Edit /etc/lvm/lvm.conf:
>>>>>>
>>>>>> auto_activation_volume_list = []
>>>>>>
>>>>>> 3. both 1 and 2
>>>>>>
>>>>>
>>>>> I've now applied both of the above and regenerated the initramfs and
>>>>> rebooted and the host no longer lists the LV's of the VM. Since I
>>>>> rebooted the host before without this issue, I'm not sure a single
>>>>> reboot is enough to conclude it has fully fixed the issue.
>>>>>
>>>>> You mention that there's no solution yet. Does that mean the above
>>>>> settings are not 100% certain to avoid this behaviour?
>>>>>
>>>>> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
>>>>> include the PV's for the hypervisor disks (on which the OS is installed)
>>>>> so the system lvm commands only touches those. Since vdsm is using its
>>>>> own lvm.conf this should be OK for vdsm?
>>>>
>>>> This does not seem to work. The host can not be activated as it can't
>>>> find his volume group(s). To be able to use the global_filter in
>>>> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
>>>> to revert back to the default.
>>>>
>>>> I've moved my filter from global_filter to filter and that seems to
>>>> work. When lvmetad is disabled I believe this should have the same
>>>> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
>>>> udev might ignore the filter setting?
>>>
>>> Right, global_filter exist so you can override filter used from the command
>>> line.
>>>
>>> For example, hiding certain devices from vdsm. This is why we are using
>>> filter in vdsm, leaving global_filter for the administrator.
>>>
>>> Can you explain why do you need global_filter or filter for the
>>> hypervisor disks?
>>
>> Based on the comment in /etc/lvm/lvm.conf regarding global_filter I
>> concluded that not only lvmetad but also udev might perform action on
>> the devices and I wanted to prevent that.
>>
>> I've now set the following settings in /etc/lvm/lvm.conf:
>>
>> use_lvmetad = 0
>> auto_activation_volume_list = []
>> filter = ["a|/dev/sda5|", "r|.*|" ]
> 
> Better use /dev/disk/by-uuid/ to select the specific device, without
> depending on device order.
> 
>>
>> On other systems I have kept the default filter.
>>
>>> Do you have any issue with the current settings, disabling auto activation 
>>> and
>>> lvmetad?
>>
>> Keeping those t

Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
Hi,

On 08/31/2016 11:51 AM, Nir Soffer wrote:
> On Wed, Aug 31, 2016 at 11:07 AM, Rik Theys <rik.th...@esat.kuleuven.be> 
> wrote:
>> On 08/31/2016 09:43 AM, Rik Theys wrote:
>>> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>>>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <rik.th...@esat.kuleuven.be> 
>>>> wrote:
>>>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>>>> thin_check is run on the thin pool devices of one of the VM's on which
>>>>> the disk is assigned to.
>>>>>
>>>>> That seems strange to me. I would expect the host to stay clear of any
>>>>> VM disks.
>>>>
>>>> We expect the same thing, but unfortunately systemd and lvm try to
>>>> auto activate stuff. This may be good idea for desktop system, but
>>>> probably bad idea for a server and in particular a hypervisor.
>>>>
>>>> We don't have a solution yet, but you can try these:
>>>>
>>>> 1. disable lvmetad service
>>>>
>>>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>>>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>>>
>>>> Edit /etc/lvm/lvm.conf:
>>>>
>>>> use_lvmetad = 0
>>>>
>>>> 2. disable lvm auto activation
>>>>
>>>> Edit /etc/lvm/lvm.conf:
>>>>
>>>> auto_activation_volume_list = []
>>>>
>>>> 3. both 1 and 2
>>>>
>>>
>>> I've now applied both of the above and regenerated the initramfs and
>>> rebooted and the host no longer lists the LV's of the VM. Since I
>>> rebooted the host before without this issue, I'm not sure a single
>>> reboot is enough to conclude it has fully fixed the issue.
>>>
>>> You mention that there's no solution yet. Does that mean the above
>>> settings are not 100% certain to avoid this behaviour?
>>>
>>> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
>>> include the PV's for the hypervisor disks (on which the OS is installed)
>>> so the system lvm commands only touches those. Since vdsm is using its
>>> own lvm.conf this should be OK for vdsm?
>>
>> This does not seem to work. The host can not be activated as it can't
>> find his volume group(s). To be able to use the global_filter in
>> /etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
>> to revert back to the default.
>>
>> I've moved my filter from global_filter to filter and that seems to
>> work. When lvmetad is disabled I believe this should have the same
>> effect as global_filter? The comments in /etc/lvm/lvm.conf indicate also
>> udev might ignore the filter setting?
> 
> Right, global_filter exist so you can override filter used from the command
> line.
> 
> For example, hiding certain devices from vdsm. This is why we are using
> filter in vdsm, leaving global_filter for the administrator.
> 
> Can you explain why do you need global_filter or filter for the
> hypervisor disks?

Based on the comment in /etc/lvm/lvm.conf regarding global_filter I
concluded that not only lvmetad but also udev might perform actions on
the devices and I wanted to prevent that.

I've now set the following settings in /etc/lvm/lvm.conf:

use_lvmetad = 0
auto_activation_volume_list = []
filter = ["a|/dev/sda5|", "r|.*|" ]
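A more robust variant of that filter would pin the OS disk by a stable path instead of relying on /dev/sdX enumeration order, as suggested elsewhere in this thread. For example (the by-id link below is a made-up placeholder; substitute the link that points at your own boot disk partition):

```
filter = [ "a|^/dev/disk/by-id/wwn-0x5000c5001234abcd-part5$|", "r|.*|" ]
```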

On other systems I have kept the default filter.

> Do you have any issue with the current settings, disabling auto activation and
> lvmetad?

Keeping those two disabled also seems to work. The ovirt LV's do show up
in 'lvs' output but are not activated.

Because I wanted to be absolutely sure the VM LV's were not touched, I
added the filter on some of our hosts.

Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-31 Thread Rik Theys
On 08/31/2016 09:43 AM, Rik Theys wrote:
> On 08/30/2016 04:47 PM, Nir Soffer wrote:
>> On Tue, Aug 30, 2016 at 3:51 PM, Rik Theys <rik.th...@esat.kuleuven.be> 
>> wrote:
>>> While rebooting one of the hosts in an oVirt cluster, I noticed that
>>> thin_check is run on the thin pool devices of one of the VM's on which
>>> the disk is assigned to.
>>>
>>> That seems strange to me. I would expect the host to stay clear of any
>>> VM disks.
>>
>> We expect the same thing, but unfortunately systemd and lvm try to
>> auto activate stuff. This may be good idea for desktop system, but
>> probably bad idea for a server and in particular a hypervisor.
>>
>> We don't have a solution yet, but you can try these:
>>
>> 1. disable lvmetad service
>>
>> systemctl stop lvm2-lvmetad.service lvm2-lvmetad.socket
>> systemctl mask lvm2-lvmetad.service lvm2-lvmetad.socket
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> use_lvmetad = 0
>>
>> 2. disable lvm auto activation
>>
>> Edit /etc/lvm/lvm.conf:
>>
>> auto_activation_volume_list = []
>>
>> 3. both 1 and 2
>>
> 
> I've now applied both of the above and regenerated the initramfs and
> rebooted and the host no longer lists the LV's of the VM. Since I
> rebooted the host before without this issue, I'm not sure a single
> reboot is enough to conclude it has fully fixed the issue.
> 
> You mention that there's no solution yet. Does that mean the above
> settings are not 100% certain to avoid this behaviour?
> 
> I was thinking of setting a global_filter in /etc/lvm/lvm.conf to only
> include the PV's for the hypervisor disks (on which the OS is installed)
> so the system lvm commands only touches those. Since vdsm is using its
> own lvm.conf this should be OK for vdsm?

This does not seem to work. The host cannot be activated because it can't
find its volume group(s). To be able to use global_filter in
/etc/lvm/lvm.conf, I believe it should be overridden in vdsm's lvm.conf
to revert to the default.

I've moved my filter from global_filter to filter and that seems to
work. When lvmetad is disabled I believe this should have the same
effect as global_filter? The comments in /etc/lvm/lvm.conf indicate that
udev might also ignore the filter setting?

Rik

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-30 Thread Rik Theys
On 08/30/2016 02:51 PM, Rik Theys wrote:
> While rebooting one of the hosts in an oVirt cluster, I noticed that
> thin_check is run on the thin pool devices of one of the VM's on which
> the disk is assigned to.
> 
> That seems strange to me. I would expect the host to stay clear of any
> VM disks.

> We had a thin pool completely break on an VM a while ago and I never
> determined the root cause (was a test VM). If the host changed something
> on the disk while the VM was running on the other host this might have
> been the root cause.

I just rebooted the affected VM and indeed the system fails to activate
the thin pool now :-(.

When I try to activate it I get:

Check of pool maildata/pool0 failed: (status:1). Manual repair required!
0 logical volume(s) in volume group "maildata" now active.

Mvg,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] thin_check run on VM disk by host on startup ?!

2016-08-30 Thread Rik Theys
---7.00g

  fa1d8c9c-9874-4fad-b059-a0c60053dcfb a7ba2db3-517c-408a-8b27-ea45989d6416 -wi---      16.00g
  ids                                  a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-     128.00m
  inbox                                a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-     128.00m
  leases                               a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-       2.00g
  master                               a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-       1.00g
  metadata                             a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-     512.00m
  outbox                               a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-a-     128.00m
  imap                                 maildata                             Vwi---tz--  750.00g pool0
  ldap                                 maildata                             Vwi---tz--    3.00g pool0
  log                                  maildata                             Vwi---tz--   10.00g pool0
  mailconfig                           maildata                             Vwi---tz--    2.00g pool0
  pool0                                maildata                             twi---tz-- 1020.00g
  postfix                              maildata                             Vwi---tz--   10.00g pool0
  root                                 vg_amazone                           -wi-ao      32.00g
  swap                                 vg_amazone                           -wi-ao      64.00g
  phplogs                              vg_logs                              -wi-a-      12.00g
  wwwlogs                              vg_logs                              -wi-a-      12.00g



-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Procedure to upgrade single-host datacenter from el6 to el7

2016-04-24 Thread Rik Theys

Hi,

If the host is the only host in the cluster and I remove the host from
the cluster prior to the OS upgrade, is it still necessary to create a
new cluster?

Should I:

A) remove the host from the cluster and add the host to a new cluster
after the OS upgrade
B) remove the host from the cluster and add it to the same cluster after
OS upgrade
C) Keep the host in the cluster and choose the "reinstall" option after
the OS upgrade?

What does the "reinstall" option do?

Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


On Sun, 24 Apr 2016, Amit Aviram wrote:


Hi Rik. The flow should work, as long as the OS is supported by oVirt.
However, you will still need to move it to a new cluster.

On Fri, Apr 22, 2016 at 3:44 PM, Rik Theys <rik.th...@esat.kuleuven.be> wrote:
  Hi,

  I'm looking for the best procedure to upgrade a host from CentOS 6 to
  CentOS 7. The host is the only host in the oVirt data center (the engine
  is running on another machine and manages multiple data centers).

  For datacenters with multiple hosts I followed the following steps:
   - Add new cluster
   - Put host in maintenance
   - Remove host from old cluster
   - Reinstall host
   - Add host to new cluster
   - repeat for all hosts until old cluster is empty

  This worked OK and the data center was never "non operational".

  Is the procedure identical for a data center with only one host?

  Should I also remove the host from the (only) cluster in the data
  center, or should I reinstall it and select the "reinstall" option in
  the oVirt web interface? Since there is only one host in the cluster
  there's no need to create a new cluster?

  Is there any state on the host that I should keep when performing the
  reinstall with CentOS 7?

  The host is using FC storage (local disks configured as FC through
  multipath).

  Regards,

  Rik

  --
  Rik Theys
  System Engineer
  KU Leuven - Dept. Elektrotechniek (ESAT)
  Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
  +32(0)16/32.11.07
  
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Procedure to upgrade single-host datacenter from el6 to el7

2016-04-22 Thread Rik Theys

Hi,

I'm looking for the best procedure to upgrade a host from CentOS 6 to
CentOS 7. The host is the only host in the oVirt data center (the engine
is running on another machine and manages multiple data centers).

For datacenters with multiple hosts I followed the following steps:
 - Add new cluster
 - Put host in maintenance
 - Remove host from old cluster
 - Reinstall host
 - Add host to new cluster
 - repeat for all hosts until old cluster is empty

This worked OK and the data center was never "non operational".

Is the procedure identical for a data center with only one host?

Should I also remove the host from the (only) cluster in the data
center, or should I reinstall it and select the "reinstall" option in
the oVirt web interface? Since there is only one host in the cluster
there's no need to create a new cluster?

Is there any state on the host that I should keep when performing the
reinstall with CentOS 7?

The host is using FC storage (local disks configured as FC through
multipath).

Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Install oVirt with directly-attached SCSI devices

2016-04-09 Thread Rik Theys

Hi Paolo,

If your storage is detected by multipathd as a multipath capable device
(it should, even if connected through only one connection), oVirt will
detect it as "fibre channel" storage and selecting that as the storage
type should work.

We use a similar setup (with Dell PowerVault storage) and haven't had
any problems with it.


Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07


On Sat, 9 Apr 2016, Paolo Smiraglia wrote:


Hi all!

My name is Paolo and I'm the sysadmin of the research group (infosec)
where I work.

Recently our infrastructure was updated and I'm planning to use oVirt
as virtualisation manager. The new "toys" they gave me are

 - HP DL360G9 (x2)
 - HP MSA1040 (double controller with mini SAS connection)

By exploring the oVirt documentation, it seems that the best solution for
storage management would be something like NFS, iSCSI and so
on. Unfortunately, LUNs exposed by our MSA1040 are recognised by the
two DL360G9 as directly-attached SCSI devices.
I asked Google and I found this old post from the far 2013

  http://lists.ovirt.org/pipermail/users/2013-November/017924.html

that seems to be very similar to my case and the scenario is not rosy... :-(

Is now, in 2016, something changed? Have you something to suggest?

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove snapshot

2016-02-18 Thread Rik Theys
Hi,

On 02/17/2016 05:29 PM, Adam Litke wrote:
> On 17/02/16 11:14 -0500, Greg Padgett wrote:
>> On 02/17/2016 03:42 AM, Rik Theys wrote:
>>> Hi,
>>>
>>> On 02/16/2016 10:52 PM, Greg Padgett wrote:
>>>> On 02/16/2016 08:50 AM, Rik Theys wrote:
>>>>>  From the above I conclude that the disk with id that ends with
>>>> Similar to what I wrote to Marcelo above in the thread, I'd recommend
>>>> running the "VM disk info gathering tool" attached to [1].  It's the
>>>> best way to ensure the merge was completed and determine which image is
>>>> the "bad" one that is no longer in use by any volume chains.
>>>
>>> I've ran the disk info gathering tool and this outputs (for the affected
>>> VM):
>>>
>>> VM lena
>>> Disk b2390535-744f-4c02-bdc8-5a897226554b
>>> (sd:a7ba2db3-517c-408a-8b27-ea45989d6416)
>>> Volumes:
>>> 24d78600-22f4-44f7-987b-fbd866736249
>>>
>>> The id of the volume is the ID of the snapshot that is marked "illegal".
>>> So the "bad" image would be the cd39 one, which according to the UI is
>>> in use by the "Active VM" snapshot. Can this make sense?
>>
>> It looks accurate.  Live merges are "backwards" merges, so the merge
>> would have pushed data from the volume associated with "Active VM"
>> into the volume associated with the snapshot you're trying to remove.
>>
>> Upon completion, we "pivot" so that the VM uses that older volume, and
>> we update the engine database to reflect this (basically we
>> re-associate that older volume with, in your case, "Active VM").
>>
>> In your case, it seems the pivot operation was done, but the database
>> wasn't updated to reflect it.  Given snapshot/image associations e.g.:
>>
>>  VM Name  Snapshot Name  Volume
>>  ---  -  --
>>  My-VMActive VM  123-abc
>>  My-VMMy-Snapshot789-def
>>
>> My-VM in your case is actually running on volume 789-def.  If you run
>> the db fixup script and supply ("My-VM", "My-Snapshot", "123-abc")
>> (note the volume is the newer, "bad" one), then it will switch the
>> volume association for you and remove the invalid entries.
>>
>> Of course, I'd shut down the VM, and back up the db beforehand.

I've executed the sql script and it seems to have worked. Thanks!

>> "Active VM" should now be unused; it previously (pre-merge) was the
>> data written since the snapshot was taken.  Normally the larger actual
>> size might be from qcow format overhead.  If your listing above is
>> complete (ie one volume for the vm), then I'm not sure why the base
>> volume would have a larger actual size than virtual size.
>>
>> Adam, Nir--any thoughts on this?
> 
> There is a bug which has caused inflation of the snapshot volumes when
> performing a live merge.  We are submitting fixes for 3.5, 3.6, and
> master right at this moment.

Which bug number is assigned to this bug? Will upgrading to a release
with a fix reduce the disk usage again?


Regards,

Rik


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove snapshot

2016-02-17 Thread Rik Theys
Hi,

On 02/16/2016 10:52 PM, Greg Padgett wrote:
> On 02/16/2016 08:50 AM, Rik Theys wrote:
>> Hi,
>>
>> I'm trying to determine the correct "bad_img" uuid in my case.
>>
>> The VM has two snapshots:
>>
>> * The "Active VM" snapshot which has a disk that has an actual size
>> that's 5GB larger than the virtual size. It has a creation date that
>> matches the timestamp at which I created the second snapshot. The "disk
>> snapshot id" for this snapshot ends with dc39.
>>
>> * A "before jessie upgrade" snapshot that has status "illegal". It has
>> an actual size that's 2GB larger than the virtual size. The creation
>> date matches the date the VM was initially created. The disk snapshot id
>> ends with 6249.
>>
>>  From the above I conclude that the disk with id that ends with 6249 is
>> the "bad" img I need to specify.
> 
> Similar to what I wrote to Marcelo above in the thread, I'd recommend
> running the "VM disk info gathering tool" attached to [1].  It's the
> best way to ensure the merge was completed and determine which image is
> the "bad" one that is no longer in use by any volume chains.

I've ran the disk info gathering tool and this outputs (for the affected
VM):

VM lena
Disk b2390535-744f-4c02-bdc8-5a897226554b
(sd:a7ba2db3-517c-408a-8b27-ea45989d6416)
Volumes:
24d78600-22f4-44f7-987b-fbd866736249

The id of the volume is the ID of the snapshot that is marked "illegal".
So the "bad" image would be the cd39 one, which according to the UI is
in use by the "Active VM" snapshot. Can this make sense?

Both the "Active VM" and the defective snapshot have an actual size
that's bigger than the virtual size of the disk. When I remove the bad
disk image/snapshot, will the actual size of the "Active VM" snapshot
return to the virtual size of the disk? What's currently stored in the
"Active VM" snapshot?

Would cloning the VM (and removing the original VM afterwards) work as
an alternate way to clean this up? Or will the clone operation also
clone the snapshots?

Regards,

Rik

> If indeed the "bad" image (whichever one it is) is no longer in use,
> then it's possible the image wasn't successfully removed from storage. 
> There are 2 ways to fix this:
> 
>   a) Run the db fixup script to remove the records for the merged image,
>  and run the vdsm command by hand to remove it from storage.
>   b) Adjust the db records so a merge retry would start at the right
>  place, and re-run live merge.
> 
> Given that your merge retries were failing, option a) seems most likely
> to succeed.  The db fixup script is attached to [1]; as parameters you
> would need to provide the vm name, snapshot name, and the id of the
> unused image as verified by the disk info tool.
> 
> To remove the stale LV, the vdsm deleteVolume verb would then be run
> from `vdsClient` -- but note that this must be run _on the SPM host_. 
> It will not only perform lvremove, but also do housekeeping on other
> storage metadata to keep everything consistent.  For this verb I believe
> you'll need to supply not only the unused image id, but also the pool,
> domain, and image group ids from your database queries.
> 
> I hope that helps.
> 
> Greg
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1306741
> 
>>
>> However, I grepped the output from 'lvs' on the SPM host of the cluster
>> and both disk id's are returned:
>>
>> [root@amazone ~]# lvs | egrep 'cd39|6249'
>>   24d78600-22f4-44f7-987b-fbd866736249 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-ao  34.00g
>>   81458622-aa54-4f2f-b6d8-75e7db36cd39 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi---   5.00g
>>
>>
>> I expected the "bad" img would no longer be found?
>>
>> The SQL script only cleans up the database and not the logical volumes.
>> Would running the script not keep a stale LV around?
>>
>> Also, from the lvs output it seems the "bad" disk is bigger than the
>> "good" one.
>>
>> Is it possible the snapshot still needs to be merged? If so, how can I
>> initiate that?
>>
>> Regards,
>>
>> Rik
>>
>>
>> On 02/16/2016 02:02 PM, Rik Theys wrote:
>>> Hi Greg,
>>>
>>>>
>>>> 2016-02-09 21:30 GMT-03:00 Greg Padgett <gpadg...@redhat.com>:
>>>>> On 02/09/2016 06:08 AM, Michal Skrivanek wrote:
>>>>>>
>>>>>>
>>>>>>> On 03 Feb 2016, at 10:37, Rik Theys <rik.th...@esat.kuleuven.be&

Re: [ovirt-users] Can't remove snapshot

2016-02-16 Thread Rik Theys
Hi,

I'm trying to determine the correct "bad_img" uuid in my case.

The VM has two snapshots:

* The "Active VM" snapshot which has a disk that has an actual size
that's 5GB larger than the virtual size. It has a creation date that
matches the timestamp at which I created the second snapshot. The "disk
snapshot id" for this snapshot ends with cd39.

* A "before jessie upgrade" snapshot that has status "illegal". It has
an actual size that's 2GB larger than the virtual size. The creation
date matches the date the VM was initially created. The disk snapshot id
ends with 6249.

From the above I conclude that the disk with id that ends with 6249 is
the "bad" img I need to specify.

However, I grepped the output from 'lvs' on the SPM host of the cluster
and both disk id's are returned:

[root@amazone ~]# lvs | egrep 'cd39|6249'
  24d78600-22f4-44f7-987b-fbd866736249
a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-ao   34.00g

  81458622-aa54-4f2f-b6d8-75e7db36cd39
a7ba2db3-517c-408a-8b27-ea45989d6416 -wi---5.00g


I expected the "bad" img would no longer be found?

The SQL script only cleans up the database and not the logical volumes.
Would running the script not keep a stale LV around?

Also, from the lvs output it seems the "bad" disk is bigger than the
"good" one.

Is it possible the snapshot still needs to be merged? If so, how can I
initiate that?
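The lvs output above already carries a hint about which volume is still in use: in the (6-character) lv_attr field, the last character is 'o' when the device is open, i.e. held by a running consumer such as a VM. A minimal sketch of checking that bit (sample lines rejoined from the wrapped output above):

```python
# Sketch: decide which LV is in use from the lv_attr bits in `lvs` output.
# In the 6-character lv_attr field, position 6 is 'o' when the device is open.

def parse_lvs(lines):
    vols = []
    for line in lines:
        parts = line.split()
        if len(parts) < 4:
            continue
        lv, vg, attr, size = parts[:4]
        vols.append({"lv": lv, "vg": vg, "attr": attr, "size": size,
                     "open": len(attr) >= 6 and attr[5] == "o"})
    return vols

sample = [
    "24d78600-22f4-44f7-987b-fbd866736249 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi-ao 34.00g",
    "81458622-aa54-4f2f-b6d8-75e7db36cd39 a7ba2db3-517c-408a-8b27-ea45989d6416 -wi--- 5.00g",
]
for v in parse_lvs(sample):
    print(v["lv"][-4:], "open" if v["open"] else "not open")
# → 6249 open
# → cd39 not open
```

By this reading, the 34G LV (…6249) is the one currently open, while the 5G LV (…cd39) is inactive, which is worth cross-checking before deleting anything.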

Regards,

Rik


On 02/16/2016 02:02 PM, Rik Theys wrote:
> Hi Greg,
> 
>>
>> 2016-02-09 21:30 GMT-03:00 Greg Padgett <gpadg...@redhat.com>:
>>> On 02/09/2016 06:08 AM, Michal Skrivanek wrote:
>>>>
>>>>
>>>>> On 03 Feb 2016, at 10:37, Rik Theys <rik.th...@esat.kuleuven.be> wrote:
> 
>>>>>> I can see the snapshot in the "Disk snapshot" tab of the storage. It has
>>>>>> a status of "illegal". Is it OK to (try to) remove this snapshot? Will
>>>>>> this impact the running VM and/or disk image?
>>>>
>>>>
>>>> No, it’s not ok to remove it while live merge (apparently) is still ongoing
>>>> I guess that’s a live merge bug?
>>>
>>>
>>> Indeed, this is bug 1302215.
>>>
>>> I wrote a sql script to help with cleanup in this scenario, which you can
>>> find attached to the bug along with a description of how to use it[1].
>>>
>>> However, Rik, before trying that, would you be able to run the attached
>>> script [2] (or just the db query within) and forward the output to me? I'd
>>> like to make sure everything looks as it should before modifying the db
>>> directly.
> 
> I ran the following query on the engine database:
> 
> select images.* from images join snapshots ON (images.vm_snapshot_id =
> snapshots.snapshot_id)
> join vm_static on (snapshots.vm_id = vm_static.vm_guid)
> where vm_static.vm_name = 'lena' and snapshots.description='before
> jessie upgrade';
> 
> The resulting output is:
> 
>  image_guid            | 24d78600-22f4-44f7-987b-fbd866736249
>  creation_date         | 2015-05-19 15:00:13+02
>  size                  | 34359738368
>  it_guid               | ----
>  parentid              | ----
>  imagestatus           | 4
>  lastmodified          | 2016-01-30 08:45:59.998+01
>  vm_snapshot_id        | 4b4930ed-b52d-47ec-8506-245b7f144102
>  volume_type           | 1
>  volume_format         | 5
>  image_group_id        | b2390535-744f-4c02-bdc8-5a897226554b
>  _create_date          | 2015-05-19 15:00:11.864425+02
>  _update_date          | 2016-01-30 08:45:59.999422+01
>  active                | f
>  volume_classification | 1
> (1 row)
> 
> Regards,
> 
> Rik
> 
> 
>>>
>>> Thanks,
>>> Greg
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1302215#c13
>>> (Also note that the engine should be stopped before running this.)
>>>
>>> [2] Arguments are the ovirt db name, db user, and the name of the vm you
>>> were performing live merge on.
>>>
>>>
>>>> Thanks,
>>>> m

Re: [ovirt-users] Can't remove snapshot

2016-02-16 Thread Rik Theys
Hi Greg,

> 
> 2016-02-09 21:30 GMT-03:00 Greg Padgett <gpadg...@redhat.com>:
>> On 02/09/2016 06:08 AM, Michal Skrivanek wrote:
>>>
>>>
>>>> On 03 Feb 2016, at 10:37, Rik Theys <rik.th...@esat.kuleuven.be> wrote:

>>>>> I can see the snapshot in the "Disk snapshot" tab of the storage. It has
>>>>> a status of "illegal". Is it OK to (try to) remove this snapshot? Will
>>>>> this impact the running VM and/or disk image?
>>>
>>>
>>> No, it’s not ok to remove it while live merge (apparently) is still ongoing
>>> I guess that’s a live merge bug?
>>
>>
>> Indeed, this is bug 1302215.
>>
>> I wrote a sql script to help with cleanup in this scenario, which you can
>> find attached to the bug along with a description of how to use it[1].
>>
>> However, Rik, before trying that, would you be able to run the attached
>> script [2] (or just the db query within) and forward the output to me? I'd
>> like to make sure everything looks as it should before modifying the db
>> directly.

I ran the following query on the engine database:

select images.* from images join snapshots ON (images.vm_snapshot_id =
snapshots.snapshot_id)
join vm_static on (snapshots.vm_id = vm_static.vm_guid)
where vm_static.vm_name = 'lena' and snapshots.description='before
jessie upgrade';

The resulting output is:

 image_guid            | 24d78600-22f4-44f7-987b-fbd866736249
 creation_date         | 2015-05-19 15:00:13+02
 size                  | 34359738368
 it_guid               | ----
 parentid              | ----
 imagestatus           | 4
 lastmodified          | 2016-01-30 08:45:59.998+01
 vm_snapshot_id        | 4b4930ed-b52d-47ec-8506-245b7f144102
 volume_type           | 1
 volume_format         | 5
 image_group_id        | b2390535-744f-4c02-bdc8-5a897226554b
 _create_date          | 2015-05-19 15:00:11.864425+02
 _update_date          | 2016-01-30 08:45:59.999422+01
 active                | f
 volume_classification | 1
(1 row)
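The imagestatus column in this row is 4, which is what the admin UI shows as "illegal". A small decoding sketch; the numeric values mirror the engine's ImageStatus enum as I understand it for oVirt 3.x, so verify them against your engine version's source before relying on them:

```python
# Hypothetical mapping of the images.imagestatus column to the names used in
# the UI. Values assumed from the oVirt 3.x engine's ImageStatus enum.
IMAGE_STATUS = {
    0: "Unassigned",
    1: "OK",
    2: "LOCKED",
    3: "INVALID",
    4: "ILLEGAL",
}

def describe_imagestatus(value):
    return IMAGE_STATUS.get(value, "unknown ({})".format(value))

print(describe_imagestatus(4))  # → ILLEGAL
```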

Regards,

Rik


>>
>> Thanks,
>> Greg
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1302215#c13
>> (Also note that the engine should be stopped before running this.)
>>
>> [2] Arguments are the ovirt db name, db user, and the name of the vm you
>> were performing live merge on.
>>
>>
>>> Thanks,
>>> michal
>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Rik
>>>>
>>>> On 02/03/2016 10:26 AM, Rik Theys wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I created a snapshot of a running VM prior to an OS upgrade. The OS
>>>>> upgrade has now been successful and I would like to remove the snapshot.
>>>>> I've selected the snapshot in the UI and clicked Delete to start the
>>>>> task.
>>>>>
>>>>> After a few minutes, the task has failed. When I click delete again on
>>>>> the same snapshot, the failed message is returned after a few seconds.
>>>>>
>>>>> From browsing through the engine log (attached) it seems the snapshot
>>>>> was correctly merged in the first try but something went wrong in the
>>>>> finalizing phase. On retries, the log indicates the snapshot/disk image
>>>>> no longer exists and the removal of the snapshot fails for this reason.
>>>>>
>>>>> Is there any way to clean up this snapshot?
>>>>>
>>>>> I can see the snapshot in the "Disk snapshot" tab of the storage. It has
>>>>> a status of "illegal". Is it OK to (try to) remove this snapshot? Will
>>>>> this impact the running VM and/or disk image?

-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

<>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove snapshot

2016-02-03 Thread Rik Theys
Hi,

In the mean time I've noticed the following entries in our periodic
logcheck output:

Feb  3 09:05:53 orinoco journal: block copy still active: disk 'vda' not
ready for pivot yet
Feb  3 09:05:53 orinoco journal: vdsm root ERROR Unhandled
exception#012Traceback (most recent call last):#012  File
"/usr/lib/python2.7/site-packages/vdsm/utils.py", line 734, in
wrapper#012return f(*a, **kw)#012  File
"/usr/share/vdsm/virt/vm.py", line 5168, in run#012
self.tryPivot()#012  File "/usr/share/vdsm/virt/vm.py", line 5137, in
tryPivot#012ret = self.vm._dom.blockJobAbort(self.drive.name,
flags)#012  File "/usr/share/vdsm/virt/virdomain.py", line 68, in f#012
   ret = attr(*args, **kwargs)#012  File
"/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 124,
in wrapper#012ret = f(*args, **kwargs)#012  File
"/usr/lib64/python2.7/site-packages/libvirt.py", line 733, in
blockJobAbort#012if ret == -1: raise libvirtError
('virDomainBlockJobAbort() failed', dom=self)#012libvirtError: block
copy still active: disk 'vda' not ready for pivot yet
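The #012 sequences in that log are rsyslog's octal escapes for control characters (012 is newline, 011 is tab), which is why the vdsm traceback arrives folded onto one line. A small sketch for unfolding such lines when reading them:

```python
import re

def unfold_syslog(line):
    """Expand rsyslog's #NNN octal escapes (e.g. #012 = newline, #011 = tab)."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

folded = "vdsm root ERROR Unhandled exception#012Traceback (most recent call last):#012  File ..."
print(unfold_syslog(folded))
```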

This is from the host running the VM.

Note that this host is not the SPM of the cluster. I always thought all
operations on disk volumes happened on the SPM host?

My question still remains:

> I can see the snapshot in the "Disk snapshot" tab of the storage. It has
> a status of "illegal". Is it OK to (try to) remove this snapshot? Will
> this impact the running VM and/or disk image?


Regards,

Rik

On 02/03/2016 10:26 AM, Rik Theys wrote:
> Hi,
> 
> I created a snapshot of a running VM prior to an OS upgrade. The OS
> upgrade has now been successful and I would like to remove the snapshot.
> I've selected the snapshot in the UI and clicked Delete to start the task.
> 
> After a few minutes, the task has failed. When I click delete again on
> the same snapshot, the failed message is returned after a few seconds.
> 
> From browsing through the engine log (attached) it seems the snapshot
> was correctly merged in the first try but something went wrong in the
> finalizing phase. On retries, the log indicates the snapshot/disk image
> no longer exists and the removal of the snapshot fails for this reason.
> 
> Is there any way to clean up this snapshot?
> 
> I can see the snapshot in the "Disk snapshot" tab of the storage. It has
> a status of "illegal". Is it OK to (try to) remove this snapshot? Will
> this impact the running VM and/or disk image?
> 
> Regards,
> 
> Rik
> 
> 
> 
> 


-- 
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

<>


Re: [ovirt-users] report option gone missing

2015-11-22 Thread Rik Theys

Hi,

I was able to resolve my issue by adding the reports.cer file to the
/etc/pki/ovirt-engine/.truststore file. It seems the certificate got
updated by an engine update but was not added automatically to the
truststore.
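For reference, importing a certificate into the engine truststore can be done with keytool. A sketch under stated assumptions: the alias name, the certificate path, and the store password ("mypass" is commonly cited as oVirt's default for .truststore) are all assumptions to verify for your installation:

```shell
# Back up the truststore first (path from the message above)
cp /etc/pki/ovirt-engine/.truststore /etc/pki/ovirt-engine/.truststore.bak

# List current entries to find the stale alias (alias "reports" is assumed)
keytool -list -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass

# Remove the old entry, then import the updated certificate
keytool -delete -alias reports \
  -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass
keytool -importcert -noprompt -alias reports \
  -file /etc/pki/ovirt-engine/certs/reports.cer \
  -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass
```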

Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

<>

On Sun, 22 Nov 2015, Shirly Radco wrote:


Hi Rik,

The ovirt-engine-dwhd process is responsible for collecting the data from the
engine into the ovirt_engine_history database, where sample data is stored and
then aggregated into hourly and daily tables.

If this service is running and there are no errors in the log, then you should
be able to see up-to-date data in the statistics tables in the history database
(they end with samples/hourly/daily).

Please check that these tables are up to date.

http://www.ovirt.org/Ovirt_DWH


The ovirt-engine-reportsd service is the reports server. If it is running and
configured with an FQDN, you should be able to log in through
http://FQDN:8090/jasperserver/

If you want to remove the reports you can just run

$ yum remove ovirt-engine-reports

and run
$ engine-setup

and when you install it again
$ yum install ovirt-engine-reports

run again
$ engine-setup

to configure the reports.


http://www.ovirt.org/Ovirt_Reports

Please let me know how it goes.

Best regards,
---
Shirly Radco
BI Software Engineer
Red Hat Israel Ltd.


- Original Message -

From: "Rik Theys" <rik.th...@esat.kuleuven.be>
To: users@ovirt.org
Sent: Thursday, November 19, 2015 5:35:52 PM
Subject: Re: [ovirt-users] report option gone missing

Hi,

I was able to login on the ovirt-engine-reports web application as
'admin' but don't see any reports there.

What is the procedure to fully remove the reporting from the system and
start anew? Will a remove of the package automatically clean up the
databases and such?

Rik

On 11/19/2015 04:25 PM, Rik Theys wrote:

Hi,

At some point I had the oVirt reporting configured on my engine and it
worked. I had a "reports" option in the menu and could generate
reports for various resources.

At some point I've noticed that the "reports" option was no longer
there but did not have time to investigate. I believe it happened when
I migrated the engine host from CentOS 6 to 7 using engine-backup and
restore.

How can I debug this?

In the ovirt-engine-dwh log I used to see the following error:

Exception in component tJDBCRollback_4
org.postgresql.util.PSQLException: FATAL: terminating connection due
to administrator command
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:793)
at
org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:846)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_4Process(HistoryETL.java:2079)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_3Process(HistoryETL.java:1997)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_2Process(HistoryETL.java:1882)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_1Process(HistoryETL.java:1767)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tPostjob_1Process(HistoryETL.java:1647)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.runJobInTOS(HistoryETL.java:10785)
at
ovirt_engine_dwh.historyetl_3_5.HistoryETL.main(HistoryETL.java:10277)
2015-11-19
15:42:02|rza8ri|rza8ri|rza8ri|OVIRT_ENGINE_DWH|HistoryETL|Default|6|Java
Exception|tJDBCRollback_4|org.postgresql.util.PSQLException:FATAL: term

But after rebooting the engine host it now only lists 'Service Started'.

The ovirt-engine-reportsd is also running.

Which of these two processes (reportsd vs dwhd) is generating the
reports (and showing it in the engine admin interface)?

In /var/log/ovirt-engine-reports, the reports.log file is empty, the
server.log reports Deployed ovirt-engine-reports.war as the last line
(without any obvious errors). Only jasperserver.log shows:

2015-11-19 15:41:53,304 ERROR DiskStorageFactory,MSC service thread
1-2:948 - Could not flush disk cache. Initial cause was
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)
java.io.FileNotFoundException:
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at
net.sf.ehcache.store.d

[ovirt-users] report option gone missing

2015-11-19 Thread Rik Theys
rollerImpl.java:1911)
at 
org.jboss.msc.service.ServiceControllerImpl$StopTask.run(ServiceControllerImpl.java:1874)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

I have no idea on how to proceed debugging this. How is the reporting 
connected to the engine?


Rik


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

<>



Re: [ovirt-users] report option gone missing

2015-11-19 Thread Rik Theys

Hi,

I was able to login on the ovirt-engine-reports web application as 
'admin' but don't see any reports there.


What is the procedure to fully remove the reporting from the system and 
start anew? Will a remove of the package automatically clean up the 
databases and such?


Rik

On 11/19/2015 04:25 PM, Rik Theys wrote:

Hi,

At some point I had the oVirt reporting configured on my engine and it 
worked. I had a "reports" option in the menu and could generate 
reports for various resources.


At some point I've noticed that the "reports" option was no longer 
there but did not have time to investigate. I believe it happened when 
I migrated the engine host from CentOS 6 to 7 using engine-backup and 
restore.


How can I debug this?

In the ovirt-engine-dwh log I used to see the following error:

Exception in component tJDBCRollback_4
org.postgresql.util.PSQLException: FATAL: terminating connection due 
to administrator command
at 
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
at 
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
at 
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at 
org.postgresql.jdbc2.AbstractJdbc2Connection.executeTransactionCommand(AbstractJdbc2Connection.java:793)
at 
org.postgresql.jdbc2.AbstractJdbc2Connection.rollback(AbstractJdbc2Connection.java:846)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_4Process(HistoryETL.java:2079)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_3Process(HistoryETL.java:1997)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_2Process(HistoryETL.java:1882)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tJDBCRollback_1Process(HistoryETL.java:1767)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.tPostjob_1Process(HistoryETL.java:1647)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.runJobInTOS(HistoryETL.java:10785)
at 
ovirt_engine_dwh.historyetl_3_5.HistoryETL.main(HistoryETL.java:10277)
2015-11-19 
15:42:02|rza8ri|rza8ri|rza8ri|OVIRT_ENGINE_DWH|HistoryETL|Default|6|Java 
Exception|tJDBCRollback_4|org.postgresql.util.PSQLException:FATAL: term


But after rebooting the engine host it now only lists 'Service Started'.

The ovirt-engine-reportsd is also running.

Which of these two processes (reportsd vs dwhd) is generating the 
reports (and showing it in the engine admin interface)?


In /var/log/ovirt-engine-reports, the reports.log file is empty, the 
server.log reports Deployed ovirt-engine-reports.war as the last line 
(without any obvious errors). Only jasperserver.log shows:


2015-11-19 15:41:53,304 ERROR DiskStorageFactory,MSC service thread 
1-2:948 - Could not flush disk cache. Initial cause was 
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)
java.io.FileNotFoundException: 
/tmp/dataSnapshots/snapshot%0043ontents.index (No such file or directory)

at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at 
net.sf.ehcache.store.disk.DiskStorageFactory$IndexWriteTask.call(DiskStorageFactory.java:1120)
at 
net.sf.ehcache.store.disk.DiskStorageFactory.unbind(DiskStorageFactory.java:946)
at 
net.sf.ehcache.store.disk.DiskStore.dispose(DiskStore.java:616)
at 
net.sf.ehcache.store.FrontEndCacheTier.dispose(FrontEndCacheTier.java:521)

at net.sf.ehcache.Cache.dispose(Cache.java:2473)
at net.sf.ehcache.CacheManager.shutdown(CacheManager.java:1446)
at 
org.springframework.cache.ehcache.EhCacheManagerFactoryBean.destroy(EhCacheManagerFactoryBean.java:134)
at 
org.springframework.beans.factory.support.DisposableBeanAdapter.destroy(DisposableBeanAdapter.java:211)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroyBean(DefaultSingletonBeanRegistry.java:498)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingleton(DefaultSingletonBeanRegistry.java:474)
at 
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.destroySingletons(DefaultSingletonBeanRegistry.java:442)
at 
org.springframework.context.support.AbstractApplicationContext.destroyBeans(AbstractApplicationContext.java:1066)
at 
org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:1040)
at 
org.springframework.context.support.AbstractApplicationContext.close(AbstractApplicationContext.java:988)
at 
org.springframework.web.context.ContextLoader.closeWebApplicationContext(ContextLoader.java:541)
at 
org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoa

Re: [ovirt-users] Missing CPU features

2015-05-19 Thread Rik Theys

Hi,

On Tue, 19 May 2015, Bloemen, Jurriën wrote:

I try to add a new host to a new cluster but I get this message:

Host hostname moved to Non-Operational state as host does not meet the 
cluster's minimum CPU
level. Missing CPU features : model_Haswell


The Haswell cpu had a processor feature (hle, TSX-NI) that turned out to
be flaky and Intel disabled this feature with a microcode update.

The CPUs with the updated microcode no longer have the 'hle' feature in
/proc/cpuinfo. oVirt/libvirt does not see the CPU as Haswell because the
required feature is missing.

Upstream libvirt has introduced Haswell-noTSX variants and oVirt will
probably get a new CPU type to match in a future release.

For now I've configured the CPU model as SandyBridge on my Haswell
CPUs.

I filed a bug about this in the oVirt bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1218673
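The microcode change is easy to spot from the flags line of /proc/cpuinfo. A minimal sketch of checking for a flag such as 'hle' (the sample text below is illustrative, not a real cpuinfo dump):

```python
# Sketch: check /proc/cpuinfo-style text for a CPU flag such as 'hle'.
def has_cpu_flag(cpuinfo_text, flag):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return flag in line.split(":", 1)[1].split()
    return False

# Hypothetical post-microcode-update cpuinfo excerpt: 'hle' is gone.
sample = "model name : Intel Haswell\nflags : fpu vme sse2 avx2 rtm\n"
print(has_cpu_flag(sample, "hle"))   # → False
```

On a real host, `open("/proc/cpuinfo").read()` would be passed in instead of the sample string.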

Regards,

Rik


[ovirt-users] oVirt user permissions for fence_rhevm

2015-05-18 Thread Rik Theys

Hi,

I've created a user in AD that should only be able to power off/on a 
specific VM in oVirt.


I've granted this user UserRole permission on this specific VM.

If I log into the user portal with these credentials I can see the VM 
and power it off/on.


When I use the fence_rhevm agent it fails to find the correct plug. I 
fixed this by adding the Filter: true header to the fence_rhevm 
script. When running manually, fence_rhevm can show me the status of the 
plug and can power it on/off.
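The Filter: true header asks the engine's REST API to apply user-level permission filtering, which is what lets a non-admin account see its VMs at all. As a sanity check of what the fence user can see, the API can be queried directly; the engine URL and credentials below are placeholders:

```shell
# List the VMs visible to this (non-admin) user; without the
# "Filter: true" header the v3 API rejects non-admin accounts.
curl -k -u 'fence-user@mydomain.ad:secret' \
     -H 'Filter: true' \
     'https://ovirt-engine.mydomain/api/vms'
```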


When I try to integrate this into a pacemaker cluster (on Debian 7) 
using the fence_rhevm resource agent it reboots the VM on every monitor 
action.


Has anyone succeeded in using fence_rhevm with oVirt on pacemaker 1.1? 
Are there any additional oVirt permissions the user needs to make this 
work? I don't want to make this fence user an admin for my entire ovirt 
datacenter.


The stonith primitive is configured:

primitive p_fence_vm1 stonith:fence_rhevm \
params port=vm1 login=fence-...@mydomain.ad 
ipaddr=ovirt-engine.mydomain ipport=443 ssl=1 passwd=secret 
verbose=1 pcmk_host_list=vm1 pcmk_host_check=static-list \

op monitor interval=15m
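Before wiring the agent into pacemaker, it can help to confirm that the agent behaves from the command line. A sketch using the standard fence-agents long options (option names assumed from the generic fence-agents CLI; engine address and credentials are placeholders):

```shell
# Query the plug status directly; a clean status here rules out
# credential/permission problems before pacemaker is involved.
fence_rhevm --ip=ovirt-engine.mydomain --ipport=443 --ssl \
    --username='fence-user@mydomain.ad' --password=secret \
    --plug=vm1 --action=status
```

If this errors while the user portal works, the monitor action in pacemaker will likely misbehave too, which would fit the reboot-on-monitor symptom described above.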


Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors


Re: [ovirt-users] Windows Internet connection issue

2015-05-12 Thread Rik Theys

Hi,

On Tue, 12 May 2015, gefter chongong - NOAA Affiliate wrote:

We are running oVirt Engine version 3.5.0.1-1.el6 on Supermicro hardware. We
have 5 hosts in a cluster with VMs built on them. The hosts' network
interfaces are bonded in mode 4 (LACP). On the bonds we have VLANs for
different networks, and all Linux VMs built on these hosts work and can
access the internet.

We are having issues with Windows VMs: they cannot access the internet. Pings
to external sites work without failures and DNS is not an issue, but when
browsing, pages keep spinning and do not open...

Has anybody experienced this issue? I would be grateful for any pointers.


We had an issue with Windows VMs on RHEL 7.1 hosts: they would have _very_
slow network access and the host would log a lot of WARNING messages in the
system log about skb_warn_bad_offload.

We were able to fix this by disabling LRO (large receive offload) on the
NICs. You can try that manually and if it resolves your problem you can
use the procedure described on
http://www.ovirt.org/Features/Network_Custom_Properties about
ethtool_opts to have oVirt set the necessary properties.
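To test the workaround by hand before persisting it through the custom property, the offload setting can be toggled with ethtool directly; the NIC name below is an example:

```shell
# Show whether large receive offload is currently enabled on the NIC
ethtool -k em1 | grep large-receive-offload

# Disable LRO; on a bond, repeat for each slave interface
ethtool -K em1 lro off
```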

Hope this helps.

Regards,

Rik



[ovirt-users] multiple OVF_STORE disks?

2015-05-11 Thread Rik Theys

Hi,

I'm still on 3.5.2 and noticed that I have two OVF_STORE disks. Is this 
normal, or am I hitting the bug described in the recent announcement?


This might have been caused by one of my tests that made my storage 
domain unavailable. When I activated the storage domain again I noticed 
that a lot of VMs I created as part of a pool suddenly reappeared and
I had to delete them again.


Now that I have two OVF_STORE disks, how do I know which one I can remove?

When I list the virtual machines tab on the disks, they both show an 
empty list.


Regards,

Rik

On 05/11/2015 03:28 PM, Sandro Bonazzola wrote:

[1] https://bugzilla.redhat.com/1214408 - Importing storage domains into an 
uninitialized datacenter leads to
duplicate OVF_STORE disks being created, and can cause catastrophic loss of VM 
configuration data



--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors


Re: [ovirt-users] selectively disabling IPv6 on bridges

2015-05-07 Thread Rik Theys

Hi,

On 05/06/2015 02:53 PM, Dan Kenigsberg wrote:

On Wed, May 06, 2015 at 01:28:30PM +0200, Rik Theys wrote:

I'm looking for a way to selectively disable IPv6 on the bridge interfaces
on the oVirt hosts.

When oVirt creates the bridges for all logical networks on the host, it
keeps the default settings for IPv6 which means all bridges get a link-local
address and accept router advertisements.

When a VM is created on the logical network, it can now reach the host over
IPv6 (but not over IPv4 if no IP address has been assigned on the host). If
it sends out a router advertisement it can even create a global IPv6 address
(haven't tested this).

How can I prevent this?

I would like to prevent the guest from IPv6 access to the host but the guest
itself still needs IPv6 access (global IPv6 addresses).

Is it sufficient to create a sysctl config file that says:

net.ipv6.conf.default.disable_ipv6 = 1


Yes, I believe that this would do the trick. For any newly-created
device on the system, regardless of ovirt bridges.

I now see that el7 has changed the default for IPV6INIT to yes. We
should be more prudent and set IPV6INIT=no on all our devices.

Would you open a bug about this, so it is tracked?


I've opened bug 1219363 for this.
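A minimal sketch of the sysctl drop-in discussed above (the file name is arbitrary); note that this disables IPv6 on every newly created interface by default, so interfaces that do need IPv6 must re-enable it explicitly:

```ini
# /etc/sysctl.d/90-ipv6-default-off.conf
# New devices (including oVirt-created bridges) start with IPv6 disabled.
net.ipv6.conf.default.disable_ipv6 = 1
```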

Regards,

Rik


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors


Re: [ovirt-users] R: PXE boot of a VM on vdsm don't read DHCP offer

2015-05-07 Thread Rik Theys

Hi,

Is your DHCP server a VM running on the same host? I've seem some
strange issues where a VM could not obtain a DHCP lease if it was
running on the same physical machine as the client. If this is the case,
I can look up what I had to change, otherwise ignore it.

Regards,

Rik


Further info about this case:

An already installed and running VM (CentOS 7) with a static IPv4 assignment,
if changed to DHCP mode, does not acquire an IP address.

 

In this case, a tcpdump taken on the VM does not show the DHCP offer packets
that are instead seen on the host bond interface.

It seems that something is filtering DHCP offers between the host's physical
Ethernet interfaces and the VM's virtio interface.

Physical servers on the same VLAN accept DHCP offers and boot from PXE correctly.

 

Roberto

 

 

 

Hi all

 

We are using oVirt engine 3.5.1-0.0 on Centos 6.6

We are deploying two hosts with vdsm-4.16.10-8.gitc937927.el7.x86_64

No hosted-engine, it run on a dedicates VM, outside oVirt.

 

Behavior: PXE boot of a VM ends in a timeout (0x4c106035) instead of accepting
the DHCP offer coming from the DHCP server.

A tcpdump capture started on the vdsm host's bond0 interface clearly shows
that DHCP offers reach the vdsm interfaces three times before the PXE client
ends in a timeout.

The incoming DHCP offer is correctly tagged when it comes in on the bond0
interface and is forwarded to the bond0.bridge interface.

PXE simply ignores it. The PXE version is gPXE 0.9.7.

The bond0.bridge interface is already set up with STP=off and DELAY=0.

 

If we install a VM using command-line boot parameters, the VM install runs
fine. The issue is only related to the PXE process, when it is expected to use
the DHCP offer.

 

I can provide the tcpdump capture, but I have not attached it to this email
because I'm quite new to the community and don't know whether that is
allowed.

 

On another host, under the same engine, running
vdsm-4.16.12-7.gita30da75.el6.x86_64 on CentOS 6.6, this behavior does not
happen; everything works fine.

 

Any idea/suggestion/further investigation ?

Thanks for attention

Best regards

 

 

Roberto Nunin

Infrastructure Manager

Italy

 

 

Here are interfaces configs:

eno1:

DEVICE=eno1
HWADDR=38:63:bb:4a:47:b0
MASTER=bond0
NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes

eno2:

DEVICE=eno2
HWADDR=38:63:bb:4a:47:b4
MASTER=bond0
NM_CONTROLLED=no
ONBOOT=yes
SLAVE=yes

bond0:

BONDING_OPTS=mode=4 miimon=100
DEVICE=bond0
NM_CONTROLLED=no
ONBOOT=yes
TYPE=Bond

bond0.3500:

DEVICE=bond0.3500
VLAN=yes
BRIDGE=DMZ3_DEV
ONBOOT=no
MTU=1500
NM_CONTROLLED=no
HOTPLUG=no

DMZ3_DEV:

DEVICE=DMZ3_DEV
TYPE=Bridge
DELAY=0
STP=off
ONBOOT=no
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
HOTPLUG=no


___

This message is for the designated recipient only and may contain privileged, 
proprietary, or otherwise private
information. If you have received it in error, please notify the sender 
immediately, deleting the original and all
copies and destroying any hard copies. Any other use is strictly prohibited and 
may be unlawful.



Re: [ovirt-users] selectively disabling IPv6 on bridges

2015-05-07 Thread Rik Theys

Hi,

On 05/07/2015 12:46 PM, Dan Kenigsberg wrote:

On Wed, May 06, 2015 at 01:53:35PM +0100, Dan Kenigsberg wrote:

On Wed, May 06, 2015 at 01:28:30PM +0200, Rik Theys wrote:

Hi,

I'm looking for a way to selectively disable IPv6 on the bridge interfaces
on the oVirt hosts.

When oVirt creates the bridges for all logical networks on the host, it
keeps the default settings for IPv6 which means all bridges get a link-local
address and accept router advertisements.

When a VM is created on the logical network, it can now reach the host over
IPv6 (but not over IPv4 if no IP address has been assigned on the host). If
it sends out a router advertisement it can even create a global IPv6 address
(haven't tested this).

How can I prevent this?

I would like to prevent the guest from IPv6 access to the host but the guest
itself still needs IPv6 access (global IPv6 addresses).

Is it sufficient to create a sysctl config file that says:

net.ipv6.conf.default.disable_ipv6 = 1


Yes, I believe that this would do the trick. For any newly-created
device on the system, regardless of ovirt bridges.

I now see that el7 has changed the default for IPV6INIT to yes. We
should be more prudent and set IPV6INIT=no on all our devices.


Lukáš, it seems that setting IPV6INIT=no is not enough:

 IPV6INIT=yes|no
   Enable or disable IPv6 static, DHCP, or autoconf configuration for this interface
   Default: yes

The bridge still gets a link-local IPv6 address anyway. Is there an initscript
means to disable this completely, or should we resort to
/proc/sys/net/ipv6/conf/<bridge-name>/disable_ipv6 ?


I think you also have to disable this on the physical interface that's 
part of the bridge to fully disable this?
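
The sysctl approach discussed above would look roughly like this (the
bridge and NIC names are examples from a typical setup, and this is only
a sketch, not a tested recipe):

```shell
# Persist the default for newly created interfaces. Written to a
# temporary directory here; on a real host the file belongs in
# /etc/sysctl.d/ and is loaded with "sysctl -p <file>".
dest="$(mktemp -d)"
cat > "$dest/90-disable-ipv6.conf" <<'EOF'
net.ipv6.conf.default.disable_ipv6 = 1
EOF
cat "$dest/90-disable-ipv6.conf"

# For devices that already exist (the bridge and its enslaved NIC),
# the runtime knob is per-device, e.g.:
#   echo 1 > /proc/sys/net/ipv6/conf/ovirtmgmt/disable_ipv6
#   echo 1 > /proc/sys/net/ipv6/conf/eno1/disable_ipv6
```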


Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] multipath.conf changes removed on host activation

2015-05-06 Thread Rik Theys

Hi,

I have some specific device settings in multipath.conf for my storage 
box as it's not yet in the default settings of multipath for this device.


Upon activation of my host, the multipath.conf file is always replaced 
by the default version and my changes are lost.


How can I either prevent vdsm from touching the file, or merge my 
configuration?


Regards,

Rik
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] multipath.conf changes removed on host activation

2015-05-06 Thread Rik Theys

Hi,

On 05/06/2015 10:39 AM, Yeela Kaplan wrote:

What version of vdsm are you using?


vdsm-4.16.14-0.el7.x86_64


You can avoid overriding /etc/multipath.conf by editing it,
and adding the following line:
# RHEV PRIVATE
as the second line in the conf file,
right after the first line which is supposed to state the version of vdsm's 
multipath configuration.
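
In other words, the top of /etc/multipath.conf would look roughly like
this (the revision line is whatever your installed vdsm wrote there, and
the defaults section is only an illustration):

```shell
# Sketch: mark multipath.conf as private so VDSM stops rewriting it.
# Using a temporary file here; on the host this is /etc/multipath.conf.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
# RHEV REVISION 1.1
# RHEV PRIVATE
defaults {
    polling_interval 5
}
EOF
head -n 2 "$conf"
```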


Thanks, that does it!

Regards,

Rik



Let me know if it helps.

Yeela

- Original Message -

From: Rik Theys rik.th...@esat.kuleuven.be
To: users@ovirt.org
Sent: Wednesday, May 6, 2015 11:30:06 AM
Subject: [ovirt-users] multipath.conf changes removed on host activation

Hi,

I have some specific device settings in multipath.conf for my storage
box as it's not yet in the default settings of multipath for this device.

Upon activation of my host, the multipath.conf file is always replaced
by the default version and my changes are lost.

How can I either prevent vdsm from touching the file, or merge my
configuration?

Regards,

Rik
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] selectively disabling IPv6 on bridges

2015-05-06 Thread Rik Theys

Hi,

I'm looking for a way to selectively disable IPv6 on the bridge 
interfaces on the oVirt hosts.


When oVirt creates the bridges for all logical networks on the host, it 
keeps the default settings for IPv6 which means all bridges get a 
link-local address and accept router advertisements.


When a VM is created on the logical network, it can now reach the host 
over IPv6 (but not over IPv4 if no IP address has been assigned on the 
host). If it sends out a router advertisement it can even create a 
global IPv6 address (haven't tested this).


How can I prevent this?

I would like to prevent the guest from IPv6 access to the host but the 
guest itself still needs IPv6 access (global IPv6 addresses).


Is it sufficient to create a sysctl config file that says:

net.ipv6.conf.default.disable_ipv6 = 1

?

Regards,

Rik


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] selectively disabling IPv6 on bridges

2015-05-06 Thread Rik Theys

Hi,

On 05/06/2015 02:53 PM, Dan Kenigsberg wrote:

On Wed, May 06, 2015 at 01:28:30PM +0200, Rik Theys wrote:

Hi,

I'm looking for a way to selectively disable IPv6 on the bridge interfaces
on the oVirt hosts.

When oVirt creates the bridges for all logical networks on the host, it
keeps the default settings for IPv6 which means all bridges get a link-local
address and accept router advertisements.

When a VM is created on the logical network, it can now reach the host over
IPv6 (but not over IPv4 if no IP address has been assigned on the host). If
it sends out a router advertisement it can even create a global IPv6 address
(haven't tested this).

How can I prevent this?

I would like to prevent the guest from IPv6 access to the host but the guest
itself still needs IPv6 access (global IPv6 addresses).

Is it sufficient to create a sysctl config file that says:

net.ipv6.conf.default.disable_ipv6 = 1


Yes, I believe that this would do the trick. For any newly-created
device on the system, regardless of ovirt bridges.


I've tried that and it seems to work. But IPv6 seems partially broken 
anyway, even without applying this trick :-(.

When two VMs run on the same host and the host has IPv6 enabled (but no 
global addresses assigned), they cannot reach each other when they are 
in the same network (and have statically configured IPv6 addresses). 
They can ping hosts in the same network that are on other physical boxes.

When you migrate one of the VMs to another physical machine, the two VMs 
can ping each other, but not when they're running on the same host.

We have the same issue with VMs running on our CentOS 6 hosts with 
libvirt (no oVirt involved), so this isn't oVirt specific.

The neighbor solicitations are visible on the vnet0 interface of the VM 
running the ping (tcpdump on the host) and on the ovirtmgmt bridge, but 
not on the vnet1 interface of the target VM.



I now see that el7 has changed the default for IPV6INIT to yes. We
should be more prudent and set IPV6INIT=no on all our devices.

Would you open a bug about this, so it is tracked?


OK, will do.

Regards,

Rik

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] NFS storage domain fail after engine upgrade [SOLVED]

2015-04-28 Thread Rik Theys

Hi,

The root cause of my problem was a stale NFS mount on my hosts; they 
still had an NFS mount active.


Killing the ioprocess processes that were keeping the mount busy and 
then unmounting the NFS mounts allowed me to activate the ISO and export 
domains again.
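
For the archives, the steps were roughly the following (the mount point
below is an example in the usual vdsm layout; take the real one from the
findmnt output):

```shell
# List current NFS mounts; empty output means nothing is mounted.
mnt_list="$(findmnt -t nfs,nfs4 -n -o TARGET,SOURCE 2>/dev/null || true)"
printf 'NFS mounts:\n%s\n' "${mnt_list:-<none>}"

# Then, per stale mount (as root on the host):
#   fuser -vm /rhev/data-center/mnt/iron:_var_lib_exports_export-domain
#   kill <the ioprocess PIDs fuser reports>
#   umount /rhev/data-center/mnt/iron:_var_lib_exports_export-domain
#   ("umount -l" as a last resort if the NFS server is gone)
```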


Regards,

Rik

On 04/28/2015 01:48 PM, Rik Theys wrote:

Hi,

I migrated my engine from CentOS 6 to CentOS 7 by taking an
engine-backup on the CentOS 6 install and running the restore on a
CentOS 7.1 install.

This worked rather well. I can log into the admin webinterface and see
my still running VM's.

The only issue I'm facing is that the hosts can no longer access the
export and ISO domain (which are nfs exports on my engine).

When I try to activate the storage domain on a host I get the following
message in the engine log (see below).

It seems the engine thinks the storage domain does not exist. I copied
the files from the old installation into the same directories on the new
installation and I can nfs mount them manually from the hosts. I can
also nfs mount it on the engine itself.

Any idea on how to debug this?

My engine is running 3.5.1 (actually 3.5.2 now as it just got upgraded,
but the upgrade did not change anything regarding this bug).

Is there a way to remove the export/iso domain? I can not detach it from
my data centers using the web interface.

Regards,

Rik


2015-04-28 12:59:19,271 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(ajp--127.0.0.1-8702-8) [450f2bb2] Lock Acquired to object EngineLock
[exclusiveLocks= key: 31ba6486-d4ef-45ae-a184-8296185ef79b value: STORAGE
, sharedLocks= ]
2015-04-28 12:59:19,330 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Running command:
ActivateStorageDomainCommand internal: false. Entities affected :  ID:
31ba6486-d4ef-45ae-a184-8296185ef79b Type: StorageAction group
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-04-28 12:59:19,360 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Lock freed to object
EngineLock [exclusiveLocks= key: 31ba6486-d4ef-45ae-a184-8296185ef79b
value: STORAGE
, sharedLocks= ]
2015-04-28 12:59:19,362 INFO
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] ActivateStorage Domain.
Before Connect all hosts to pool. Time:4/28/15 12:59 PM
2015-04-28 12:59:19,383 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] Running command:
ConnectStorageToVdsCommand internal: true. Entities affected :  ID:
aaa0----123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-04-28 12:59:19,388 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] START,
ConnectStorageServerVDSCommand(HostName = stadius-virt2, HostId =
7212971a-d38a-42e7-8e6a-24d3396dfa6a, storagePoolId =
----, storageType = NFS, connectionList
= [{ id: 5f18ed21-8c71-4e71-874a-a6a8594c3138, connection:
iron:/var/lib/exports/export-domain, iqn: null, vfsType: null,
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null
};]), log id: 13d9ec07
2015-04-28 12:59:19,409 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-28) [3e09aa16] FINISH,
ConnectStorageServerVDSCommand, return:
{5f18ed21-8c71-4e71-874a-a6a8594c3138=0}, log id: 13d9ec07
2015-04-28 12:59:19,417 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] START,
ActivateStorageDomainVDSCommand( storagePoolId =
e7bdba88-e718-41a9-8d2b-0ca79c517630, ignoreFailoverLimit = false,
storageDomainId = 31ba6486-d4ef-45ae-a184-8296185ef79b), log id: 1da2c4c4
2015-04-28 12:59:19,774 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Failed in
ActivateStorageDomainVDS method
2015-04-28 12:59:19,781 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2]
IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
ActivateStorageDomainVDS, error = Storage domain does not exist:
('31ba6486-d4ef-45ae-a184-8296185ef79b',), code = 358
2015-04-28 12:59:19,793 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] FINISH,
ActivateStorageDomainVDSCommand, log id: 1da2c4c4
2015-04-28 12:59:19,794 ERROR
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-25) [450f2bb2] Command
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc
Bll exception. With error

Re: [ovirt-users] status of ovirt 3.5.1 with centos 7.1

2015-04-24 Thread Rik Theys

Hi,

On 04/24/2015 05:40 AM, Chris Adams wrote:

Hmm, I also have idrac7, but I only need lanplus.  It worked okay until
I upgraded the hosts to CentOS 7.1 this week.  I suspect something
changed in the resource agents and either vdsm was sending something
slightly wrong that no longer works, or there's a new bug in the
resource agents (since vdsm didn't change).


I experienced a similar problem with an upgrade from 7.0 to 7.1 on my 
Pacemaker cluster.


This was fixed in fence-agents 4.0.11-11.el7_1 with RHBA-2015:0801-1[1] 
for me.


Maybe you need to apply the latest fence-agents update?

Regards,

Rik

[1] https://rhn.redhat.com/errata/RHBA-2015-0801.html

--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] bonding problems

2015-02-12 Thread Rik Theys
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users [2]
 

-- 
Rik Theys
 

Links:
--
[1] http://www.netbulae.eu
[2] http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove snapshot due to low disk space on storage domain?

2015-02-11 Thread Rik Theys

Hi,

That's unfortunate :-(. It would have been great if oVirt had told me 
during the creation of the snapshot that I would be unable to remove it 
later due to insufficient free space in the storage domain :-).


The VM has two disks - a small and a large one - so the snapshot 
contains both. Can I still remove the large disk while it has snapshots?


Luckily the used disk space in the (preallocated) large disk is low 
enough so I can fit it on another temporary disk then.


Does the free space have to be in the same storage domain, or can oVirt 
use another storage domain for the temporary volume? IOW, can I add an 
NFS storage domain which oVirt can use to create the temporary disk on?


Regards,

Rik


On 02/11/2015 12:57 PM, Elad Ben Aharon wrote:

*equals or larger than the disk size


*From: *Elad Ben Aharon ebena...@redhat.com
*To: *Rik Theys rik.th...@esat.kuleuven.be
*Cc: *users@ovirt.org
*Sent: *Wednesday, 11 February, 2015 12:18:35 PM
*Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
space on storage domain?

Snapshot removal (merge) includes a create volume phase. This volume is
temporary and gets removed once the snapshot merge is completed. Its
size is the size of the disk.
That means that in order to remove the snapshot, the storage domain
should have available size that is equal to the disk size.

Elad Ben Aharon
RHEV-QE storage
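
So before attempting the merge, the numbers can be sanity-checked by
hand; a trivial sketch with made-up sizes:

```shell
# The merge needs a temporary volume as large as the disk, so the
# storage domain needs at least that much free space (example values).
disk_size_gb=500
free_gb=120
if [ "$free_gb" -ge "$disk_size_gb" ]; then
    echo "enough free space for the merge"
else
    echo "need $((disk_size_gb - free_gb)) GB more in this storage domain"
fi
# -> need 380 GB more in this storage domain
```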



*From: *Rik Theys rik.th...@esat.kuleuven.be
*To: *users@ovirt.org
*Sent: *Tuesday, 10 February, 2015 12:00:10 PM
*Subject: *[ovirt-users] Can't remove snapshot due to low disk space
on storage domain?

Hi,

I'm running the ovirt engine 3.4 series. I've created a snapshot of a VM
with an OS and data disk before upgrading the machine.

The upgrade went fine and I now want to remove the snapshot.
Unfortunately this fails with the error:

Cannot remove Snapshot. Low disk space on target Storage Domain
stadius-virt2_PERC.

So I can't free disk space by removing the snapshot because I don't have
enough space?

When I look at the VM the disks are shown as preallocated (which is what
I selected during installation). When I look at the storage tab and list
the disks in my storage domain, the disks are now shown as thin
provisioned with the actual size < virtual size.

How can I remove this snapshot? I don't have enough free disk space in
my storage domain to duplicate the data disk of my VM.

Regards,

Rik


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users






--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Can't remove snapshot due to low disk space on storage domain?

2015-02-11 Thread Rik Theys

Hi,

I created the snapshot on the Snapshots subtab of the Virtual machines 
tab. I did this when my engine (and host) was running oVirt 3.4. It 
is/was a snapshot of the entire VM (I deselected the memory state).


I since upgraded the engine to 3.5.

When I look at the snapshots subtab of the virtual machine, I see the 
Current snapshot which is Active VM and the snapshot I created before. 
This snapshot is a snapshot of both the OS and data disk.


I can not delete the second snapshot when the VM is running. If I power 
down the VM, I can't remove the snapshot as it brings up the error I 
mentioned before.


However, when I look at the disk snapshots subtab of the storage tab 
(like you suggested), I see both disks with a snapshot and I can select 
Remove there while the VM is running. Is it safe to remove them this 
way? This will only remove the snapshot and not the disk (and data) from 
my (running) VM?


Regards,

Rik


On 02/11/2015 01:26 PM, Elad Ben Aharon wrote:

I'm not sure I understand which RHEV version you're using now.
If you're using RHEV 3.5, it is possible to remove a snapshot of a
single disk.
You can do it via webadmin, Under the 'Disks Snapshots' subtab of the
relevant storage domain, in the 'Storage' main tab.

As for your second question, the free space has to be in the same
storage domain.

*From: *Rik Theys rik.th...@esat.kuleuven.be
*To: *Elad Ben Aharon ebena...@redhat.com
*Cc: *users@ovirt.org
*Sent: *Wednesday, 11 February, 2015 2:10:13 PM
*Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
space on storage domain?

Hi,

That's unfortunate :-(. It would have been great if oVirt told me during
the creation of the snapshot that I would be unable to remove it later
due to insufficient free space in the storage domain :-).

The VM has two disks - a small and a large one - so the snapshot
contains both. Can I still remove the large disk while it has snapshots?

Luckily the used disk space in the (preallocated) large disk is low
enough so I can fit it on another temporary disk then.

Does the free space have to be in the same storage domain, or can oVirt
use another storage domain for the temporary volume? IOW, can I add an
NFS storage domain which oVirt can use to create the temporary disk on?

Regards,

Rik


On 02/11/2015 12:57 PM, Elad Ben Aharon wrote:
  *equals or larger than the disk size
 
  
  *From: *Elad Ben Aharon ebena...@redhat.com
  *To: *Rik Theys rik.th...@esat.kuleuven.be
  *Cc: *users@ovirt.org
  *Sent: *Wednesday, 11 February, 2015 12:18:35 PM
  *Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
  space on storage domain?
 
  Snapshot removal (merge) includes a create volume phase. This volume is
  temporary and gets removed once the snapshot merge is completed. Its
  size is the size of the disk.
  That means that in order to remove the snapshot, the storage domain
  should have available size that is equal to the disk size.

  Elad Ben Aharon
  RHEV-QE storage
 
 
  
  *From: *Rik Theys rik.th...@esat.kuleuven.be
  *To: *users@ovirt.org
  *Sent: *Tuesday, 10 February, 2015 12:00:10 PM
  *Subject: *[ovirt-users] Can't remove snapshot due to low disk space
  on storage domain?
 
  Hi,
 
  I'm running the ovirt engine 3.4 series. I've created a snapshot of a VM
  with an OS and data disk before upgrading the machine.
 
  The upgrade went fine and I now want to remove the snapshot.
  Unfortunately this fails with the error:
 
  Cannot remove Snapshot. Low disk space on target Storage Domain
  stadius-virt2_PERC.
 
  So I can't free disk space by removing the snapshot because I don't have
  enough space?
 
  When I look at the VM the disks are shown as preallocated (which is what
  I selected during installation). When I look at the storage tab and list
  the disks in my storage domain, the disks are now shown as thin
  provisioned with the actual size < virtual size.
 
  How can I remove this snapshot? I don't have enough free disk space in
  my storage domain to duplicate the data disk of my VM.
 
  Regards,
 
  Rik
 
 
  --
  Rik Theys
  System Engineer
  KU Leuven - Dept. Elektrotechniek (ESAT)
  Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
  +32(0)16/32.11.07
  
  Any errors in spelling, tact or fact are transmission errors
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus

Re: [ovirt-users] Can't remove snapshot due to low disk space on storage domain?

2015-02-11 Thread Rik Theys

Hi,

OK since my hosts are still CentOS 6.6, I will schedule downtime.

Will removing it this way also require the additional disk space in the 
storage domain?


Rik


On 02/11/2015 02:03 PM, Elad Ben Aharon wrote:

Indeed, removing the disk snapshot from the 'Disk Snapshots' subtab will
remove only the snapshot of the disk, not the disk itself.
Regarding snapshot removal while the  VM is running - this feature is
pretty new (live snapshot merge). AFAIK, this feature is tech preview
for 3.5. It also depends on the OS you're using on the hosts. RHEL 7.1
supports it, but again, as tech preview.
Therefore, the safest way to remove the snapshot of the disk would be to
do it while the VM is not running.



*From: *Rik Theys rik.th...@esat.kuleuven.be
*To: *Elad Ben Aharon ebena...@redhat.com
*Cc: *users@ovirt.org
*Sent: *Wednesday, 11 February, 2015 2:44:55 PM
*Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
space on storage domain?

Hi,

I created the snapshot on the Snapshots subtab of the Virtual machines
tab. I did this when my engine (and host) was running oVirt 3.4. It
is/was a snapshot of the entire VM (I deselected the memory state).

I since upgraded the engine to 3.5.

When I look at the snapshots subtab of the virtual machine, I see the
Current snapshot which is Active VM and the snapshot I created before.
This snapshot is a snapshot of both the OS and data disk.

I can not delete the second snapshot when the VM is running. If I power
down the VM, I can't remove the snapshot as it brings up the error I
mentioned before.

However, when I look at the disk snapshots subtab of the storage tab
(like you suggested), I see both disks with a snapshot and I can select
Remove there while the VM is running. Is it safe to remove them this
way? This will only remove the snapshot and not the disk (and data) from
my (running) VM?

Regards,

Rik


On 02/11/2015 01:26 PM, Elad Ben Aharon wrote:
  I'm not sure I understand which RHEV version you're using now.
  If you're using RHEV 3.5, it is possible to remove a snapshot of a
  single disk.
  You can do it via webadmin, Under the 'Disks Snapshots' subtab of the
  relevant storage domain, in the 'Storage' main tab.
 
  As for your second question, the free space has to be in the same
  storage domain.
  
  *From: *Rik Theys rik.th...@esat.kuleuven.be
  *To: *Elad Ben Aharon ebena...@redhat.com
  *Cc: *users@ovirt.org
  *Sent: *Wednesday, 11 February, 2015 2:10:13 PM
  *Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
  space on storage domain?
 
  Hi,
 
  That's unfortunate :-(. It would have been great if oVirt told me during
  the creation of the snapshot that I would be unable to remove it later
  due to insufficient free space in the storage domain :-).
 
  The VM has two disks - a small and a large one - so the snapshot
  contains both. Can I still remove the large disk while it has snapshots?
 
  Luckily the used disk space in the (preallocated) large disk is low
  enough so I can fit it on another temporary disk then.
 
  Does the free space have to be in the same storage domain, or can oVirt
  use another storage domain for the temporary volume? IOW, can I add an
  NFS storage domain which oVirt can use to create the temporary disk on?
 
  Regards,
 
  Rik
 
 
  On 02/11/2015 12:57 PM, Elad Ben Aharon wrote:
*equals or larger than the disk size
   
   

*From: *Elad Ben Aharon ebena...@redhat.com
*To: *Rik Theys rik.th...@esat.kuleuven.be
*Cc: *users@ovirt.org
*Sent: *Wednesday, 11 February, 2015 12:18:35 PM
*Subject: *Re: [ovirt-users] Can't remove snapshot due to low disk
space on storage domain?
   
Snapshot removal (merge) includes a create volume phase. This
volume is
temporary and gets removed once the snapshot merge is completed. Its
size is the size of the disk.
That means that in order to remove the snapshot, the storage domain
should have available size that is equal to the disk size.

Elad Ben Aharon
RHEV-QE storage
   
   
   

*From: *Rik Theys rik.th...@esat.kuleuven.be
*To: *users@ovirt.org
*Sent: *Tuesday, 10 February, 2015 12:00:10 PM
*Subject: *[ovirt-users] Can't remove snapshot due to low disk space
on storage domain?
   
Hi,
   
I'm running the ovirt engine 3.4 series. I've created a snapshot
of a VM
with an OS and data disk before upgrading the machine.
   
The upgrade went fine and I now want to remove the snapshot.
Unfortunately this fails with the error:
   
Cannot remove Snapshot. Low disk space on target Storage Domain
stadius-virt2_PERC

[ovirt-users] Can't remove snapshot due to low disk space on storage domain?

2015-02-10 Thread Rik Theys

Hi,

I'm running the ovirt engine 3.4 series. I've created a snapshot of a VM 
with an OS and data disk before upgrading the machine.


The upgrade went fine and I now want to remove the snapshot. 
Unfortunately this fails with the error:


Cannot remove Snapshot. Low disk space on target Storage Domain 
stadius-virt2_PERC.


So I can't free disk space by removing the snapshot because I don't have 
enough space?


When I look at the VM the disks are shown as preallocated (which is what 
I selected during installation). When I look at the storage tab and list 
the disks in my storage domain, the disks are now shown as thin 
provisioned with the actual size < virtual size.


How can I remove this snapshot? I don't have enough free disk space in 
my storage domain to duplicate the data disk of my VM.


Regards,

Rik


--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440  - B-3001 Leuven-Heverlee
+32(0)16/32.11.07

Any errors in spelling, tact or fact are transmission errors
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Bring down one of multiple storage domains in a data center

2015-02-04 Thread Rik Theys

Hi,

We are planning to use oVirt to manage our virtual machine 
infrastructure. We would like to connect two different storage boxes to 
the hosts, which I believe will result in two storage domains in the 
same datacenter for oVirt?


One of the storage boxes sometimes has to be powered down during 
building maintenance (electricity, cooling, ...). Will the data center 
with the two storage domains attached still be considered up when one 
of the storage domains is no longer available?


Is it sufficient to power down the VM's with disks on the affected 
storage domain and to put the affected storage domain in maintenance?


Will oVirt keep the datacenter up and keep on managing the remaining 
VM's on the other storage domain?




One of the storage domains will be a SAS-connected external storage box. 
There will be two SAS connections per host to the storage box so 
multipath should see the two paths. My understanding is that anything 
detected by multipathd is considered FC storage by oVirt. Is that correct?
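
(On the multipath side: the device-specific settings mentioned elsewhere
in this archive live in a stanza like the one below. The vendor/product
strings and option values are placeholders; check the array vendor's
documentation for the real ones.)

```
devices {
    device {
        vendor                "ACME"
        product               "FastBox"
        path_grouping_policy  group_by_prio
        no_path_retry         12
    }
}
```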


Regards,

Rik


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users