Re: [ovirt-users] Snapshot removal time

2017-09-22 Thread Troels Arvin
Hello,

Ala wrote:
> What's the version of the manager (engine)?

4.1.1



> Could you please provide the link or the SPM and the host 
> running the VM?

I don't understand that. I cannot provide intimate details about the 
installation, nor a link to it.


-- 
Regards,
Troels



[ovirt-users] Snapshot removal time

2017-09-22 Thread Troels Arvin
Hello,

I have a RHV 4.1 virtualized guest-server with a number of rather large 
VirtIO virtual disks attached. The virtual disks are allocated from a 
fibre channel (block) storage domain. The hypervisor servers run RHEL 7.4.

When I take a snapshot of the guest, it takes a long time to remove the 
snapshot again while the guest is powered off (a snapshot of a 2 TiB disk 
takes around 3 hours to remove). However, when the guest is running, 
snapshot removal is very quick (perhaps around five minutes per snapshot). 
The involved disks have not been written to much while they had snapshots.
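
(A rough back-of-the-envelope calculation: 2 TiB in about 3 hours works out 
to roughly 2 TiB / 10800 s, which is around 200 MB/s, i.e. about the time a 
full copy of the disk over fibre channel would take, whereas the 
five-minute live removal looks more like only the snapshot's delta being 
merged.)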

I would have expected the opposite: when the guest is turned off, I would 
assume that oVirt could handle snapshot removal in a much more aggressive 
fashion than when performing a live snapshot removal?

When performing offline snapshot removal, I see the following in the output 
of "ps xauw" on the hypervisor holding the SPM role:

vdsm 10255 8.3 0.0 389144 27196 ? S ... qemu-img convert ...

I have been reading
https://www.ovirt.org/develop/release-management/features/storage/remove-snapshot/
My interpretation of that document is that I should expect to see "qemu-img 
commit" commands instead of "qemu-img convert" processes. Or?
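
For context, here is my (possibly naive) understanding of the difference 
between the two commands, illustrated with stand-alone qemu-img calls (the 
file names are made up, and on a block storage domain the volumes live on 
LVs, so the details will differ):

  # "convert" reads the whole chain and writes a complete new image, so the
  # time scales with the full size of the disk:
  qemu-img convert -O qcow2 snapshot_layer.qcow2 merged_copy.qcow2

  # "commit" only merges the snapshot layer's changes down into its backing
  # file, so the time scales with how much was written while the snapshot
  # existed:
  qemu-img commit snapshot_layer.qcow2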

The RHV system involved is somewhat old, having been upgraded many times 
from 3.x through 4.1. Could it be that it carries around old left-overs 
which result in obsolete snapshot removal behavior?

-- 
Regards,
Troels Arvin



[ovirt-users] Discard After Delete: Why not the default?

2017-09-10 Thread Troels Arvin
Hello,

In the settings for a storage domain, in "Advanced Options", there is an 
option "Discard After Delete". The default is that the checkbox is not 
set.

The documentation at
https://www.ovirt.org/develop/release-management/features/storage/discard-after-delete/
and
https://www.ovirt.org/develop/release-management/features/storage/pass-discard-from-guest-to-underlying-storage/
does not mention why the option is off by default. What drawback does the 
"Discard After Delete" feature have, since it is not enabled by default?

-- 
Regards,
Troels Arvin



Re: [ovirt-users] Maintenance mode for storage?

2017-05-14 Thread Troels Arvin
Hello,

Fred Rolland wrote:
> You cannot put a storage domain to maintenance if you have running VMs
> using disks located in this domain.

OK.

Am I right about the following: A storage domain needs to be in 
maintenance mode before a LUN may be removed from it?

- And if so: this means that oVirt does not support online removal of a 
LUN from a storage domain, even if there's sufficient free space on the 
other LUNs in the storage domain?


-- 
Regards,
Troels Arvin



[ovirt-users] Maintenance mode for storage?

2017-05-14 Thread Troels Arvin
Hello,

I would like to make use of the new oVirt feature where a LUN may be 
removed from a storage domain, provided there's enough free space on the 
remaining LUNs in the domain.

It seems the storage domain needs to be put in maintenance mode before 
this may be accomplished. But what's the consequence of putting a storage 
domain in maintenance mode? Can guests using the domain keep running 
while the storage domain is in maintenance mode? Does something stop 
working while a storage domain is in maintenance mode?

-- 
Regards,
Troels Arvin



[ovirt-users] collectd, fluentd, ...

2017-05-03 Thread Troels Arvin
Hello,

Setup: RHV 4.1 with full RHEL 7 servers as hypervisor hosts.

I noticed an "Update available" symbol for all the hypervisor hosts. 
Running "yum update" on the hosts themselves didn't list any pending 
updates.

Running "check for available updates" on the hosts in the RHV web 
interface resulted in a long list of software which RHV would like to 
add, it seems, see below.

What's all that, and is it really needed?

The packages:
found updates for packages collectd-5.7.0-4.el7, collectd-disk-5.7.0-4.el7,
collectd-netlink-5.7.0-4.el7, collectd-virt-5.7.0-4.el7,
collectd-write_http-5.7.0-4.el7, fluentd-0.12.29-1.el7,
libcollectdclient-5.7.0-4.el7, ruby-2.0.0.648-29.el7,
ruby-irb-2.0.0.648-29.el7, ruby-libs-2.0.0.648-29.el7,
rubygem-bigdecimal-1.2.0-29.el7, rubygem-cool.io-1.4.5-2.el7,
rubygem-fluent-plugin-rewrite-tag-filter-1.5.5-3.el7,
rubygem-fluent-plugin-secure-forward-0.4.3-2.el7,
rubygem-http_parser.rb-0.6.0-3.el7, rubygem-io-console-0.4.2-29.el7,
rubygem-json-1.7.7-29.el7, rubygem-msgpack-0.5.12-2.el7,
rubygem-proxifier-1.0.3-2.el7, rubygem-psych-2.0.0-29.el7,
rubygem-rdoc-4.0.0-29.el7, rubygem-resolve-hostname-0.0.4-1.el7,
rubygem-sigdump-0.2.4-1.el7, rubygem-string-scrub-0.0.5-3.el7,
rubygem-thread_safe-0.3.5-2.el7, rubygem-tzinfo-1.2.2-3.el7
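
For what it's worth, one thing I could try on a host is checking whether 
these are genuinely new installs rather than updates, e.g. (package names 
taken from the list above):

  # if these are not installed yet, "yum update" will not mention them:
  rpm -q collectd fluentd
  # check whether an enabled repository provides them at all:
  yum list available collectd fluentd rubygem-fluent-plugin-secure-forward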

-- 
Regards,
Troels Arvin, Copenhagen



Re: [ovirt-users] How to add the clean-traffic network-filter to a guest

2016-09-25 Thread Troels Arvin
Edward Haas wrote:
> In 4.0 you can set this in the vnic profile (per network).
> 
> With 3.6, you will need to create a hook to do it.

Thanks - it sounds like a very good reason for an upgrade, then.

-- 
Regards,
Troels Arvin



[ovirt-users] How to add the clean-traffic network-filter to a guest

2016-09-25 Thread Troels Arvin
I would like to minimize the risk of virtual servers harming each other. 
As part of this, I would like to prevent them from changing their IP 
address to something different from what they are expected to have. In 
other words, I would like to prevent IP address spoofing in the guests. 
And I want to be able to do this without having to assign a different VLAN 
to each guest.

Setup: RHEV 3.6 with RH7-based RHEV-H hypervisor hosts.

Using "virsh -r dumpxml <domain>" on a host, I can see that the guests 
have the "vdsm-no-mac-spoofing" network filter active for the virtual 
network interface.
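
For reference, the relevant fragment of the dumpxml output looks roughly 
like this (MAC address and bridge name invented here), and my understanding 
is that in plain libvirt one would simply change the filterref, with 
clean-traffic normally taking the guest's IP address as a parameter:

    <interface type='bridge'>
      <mac address='00:1a:4a:xx:xx:xx'/>
      <source bridge='ovirtmgmt'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
    </interface>

    <!-- what I would like instead (or in addition): -->
    <filterref filter='clean-traffic'>
      <parameter name='IP' value='192.0.2.10'/>
    </filterref>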

But what if I want the "clean-traffic" filter to be active for the 
guests as well (or instead): is there a way to accomplish that in the 
RHEV-M/oVirt management interface? If so, where is the option to be found 
in the management interface? Can it be done globally, i.e. as a default 
applied when guests are started?

-- 
Regards,
Troels Arvin



[ovirt-users] Shrinking a storage domain

2016-02-26 Thread Troels Arvin
Hello,

I'm moving some storage from one SAN storage unit to another SAN storage 
unit (i.e.: block device storage). So I have two RHEV storage domains:

SD1
SD2

We need to move most of SD1's data to SD2.

Live storage migration for a guest works fine: It's easy to migrate a 
disk from being in SD1 to SD2.

But now that I have freed up around half of SD1, I would like to let go 
of a LUN which is part of SD1. How is that done? (If this were a 
stand-alone Linux box, I would use vgreduce to release a PV and then 
remove the PV from the server.)
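
For comparison, on such a stand-alone box the steps would be roughly the 
following (volume group and device names invented):

  pvmove /dev/mapper/lun_to_remove             # migrate extents to the remaining PVs
  vgreduce data_vg /dev/mapper/lun_to_remove   # drop the now-empty PV from the VG
  pvremove /dev/mapper/lun_to_remove           # wipe the LVM label before unmapping the LUN

I am essentially looking for the supported oVirt equivalent of that.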

-- 
Regards,
Troels Arvin

