omeone picks up the pieces, I don't have the
resources to also maintain the growing list of deprecated / cut
features in the base OS.
On 2023-07-14 02:27, Sandro Bonazzola wrote:
On Fri, 14 Jul 2023 at 00:07, Alex McWhirter
wrote:
I would personally put CloudStack in
to containers'
"The whole purpose behind OpenShift Virtualization is to aid in
organization modernization as a way to consolidate workloads onto a
single platform while giving app dev time to migrate their work to
containers and microservice based deployments."
BR,
Konstantin
We still have a few oVirt and RHV installs kicking around, but between
this and some core features we use being removed from el8/9 (gluster,
spice / qxl, and probably others soon at this rate) we've heavily been
shifting gears away from both Red Hat and oVirt. Not to mention the
recent drama...
I have some manpower I'm willing to throw at oVirt, but I somewhat need
to know whether what the community wants and what we want are in line.
1. We'd bring back spice and maybe qxl. We are already maintaining forks
of the ovirt and RH kernels for this. We use ovirt currently for a lot
of VDI soluti
Oh, I have spent years looking.
Proxmox is probably the closest option, but has no multi-clustering
support. The clusters are more or less isolated from each other, and
would need another layer if you needed the ability to migrate between
them.
XCP-ng, cool. No spice support. No UI for managi
It's being removed from RHEL 9, unsure of reasoning.
So this means that oVirt cannot offer SPICE/QXL on RHEL9: there is no
spice package, qemu is compiled without SPICE/QXL support, the kernel
does not support QXL video drivers.
It's not that oVirt is killing off SPICE/QXL, but rather RHEL9 is
Additionally, should this go forward, I would be interested in
maintaining a third-party repo with patched packages to keep SPICE/QXL
support, if anyone else would like to join.
On 2021-12-07 10:08, Alex McWhirter wrote:
I've sent my concerns to Red Hat; this would force us to look at other
software and more than likely no longer be Red Hat customers.
On 2021-12-07 09:15, Neal Gompa wrote:
On Tue, Dec 7, 2021 at 8:49 AM Rik Theys
wrote:
Hi,
Will SPICE be deprecated or fully removed in oVirt 4.5?
Since spice
On 2021-11-17 13:50, Sina Owolabi wrote:
Ok thanks
Sounds odd but no problem
How do I make the new VM use its own disk, named after itself?
On Wed, 17 Nov 2021 at 19:45, Alex McWhirter wrote:
On 2021-11-17 12:02, notify.s...@gmail.com wrote:
Hi All
I'm very stumped on how to create
On 2021-11-17 12:02, notify.s...@gmail.com wrote:
Hi All
I'm very stumped on how to create VMs from templates I've made, but
have them installed with their own disks.
Please can someone guide me on how to do this?
I have oVirt running, with local storage hypervisors.
Anytime I try to use a te
On 2021-11-03 16:52, Patrick Lomakin wrote:
I think it's a bug. I couldn't find any "libvirt-daemon-kvm" rpm
package in the CentOS or oVirt repos (only libvirt-daemon-kvm 7.0.0). Try
using the --nobest flag to install updates.
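For reference, a hedged example of the --nobest suggestion above (the package name is just an illustration):
  # Let dnf fall back to a non-"best" candidate instead of failing on
  # broken dependencies:
  dnf upgrade --nobest
  # Or for a single package:
  dnf install --nobest libvirt-daemon-kvm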
On 2021-10-27 16:09, Sina Owolabi wrote:
It's really weird.
Just tried again, with the same failure, on a freshly reinstalled
CentOS 8.
Server has a number of VLAN interfaces on a physical interface
enp2s0f1, all in the defined notation;
one VLAN interface has an IP, 10.200.10.3/23,
Second phy
On 2021-08-27 13:24, Thomas Hoberg wrote:
I'd rather doubt the GUI would help you there and what's worse, the
GUI doesn't easily tell you what it tries to do. By the time you've
found and understood what it tries from the logfiles, you'd have it
done on your own.
It's an unfortunate thing that
On 2021-08-27 13:09, Thomas Hoberg wrote:
Ubuntu support: I feel ready to bet a case of beer, that that won't
happen.
I'd tend to agree with this. oVirt embeds itself so deeply into the RHEL
architecture that moving to anything else that doesn't provide
the same provisions will be a huge underta
I've run into a similar problem when using VDO + LVM + XFS stacks, also
with ZFS.
If you're trying to use ZFS on 4.4, my recommendation is don't. You have
to run the testing branch at minimum, and quite a few things just don't
work.
As for VDO, I ran into this issue when using VDO and an NVMe
You can route it to a private address on your router if you want...
We use EVPN/VXLAN (but regular old VLANs work too). Just put the public
space on a VLAN and add it as a VLAN-tagged network in oVirt. Only your
public-facing VMs need addresses in the space.
On 2021-03-08 05:53, David White
e with sharding enabled, on oVirt 4.4.4+ (possibly earlier is also
affected)
If this is something current qemu-img cannot handle, I don't think
supporting sparse disks on sharded gluster volumes is wise.
On 2021-03-08 01:06, Alex McWhirter wrote:
This actually looks to be related to sharding. D
and use qemu-img to overwrite that new
unsharded file, there are no issues.
On 2021-03-07 18:33, Nir Soffer wrote:
On Sun, Mar 7, 2021 at 1:14 PM Alex McWhirter wrote:
I've been wrestling with this all night, digging through various bits of VDSM code trying to figure why and how th
I see, yes running the command by hand results in the same error.
Gluster version is 8.3 (I upgraded to see if 8.4.5 would result in a
different outcome)
Previously it was gluster 7.9, same issue either way.
On 2021-03-07 18:33, Nir Soffer wrote:
On Sun, Mar 7, 2021 at 1:14 PM Alex
I've been wrestling with this all night, digging through various bits of
VDSM code trying to figure out why and how this is happening. I need to make
some templates, but I simply can't.
VDSM command HSMGetAllTasksStatusesVDS failed: value=low level
Image copy failed: ("Command ['/usr/bin/qemu-img'
On 2021-02-23 07:39, cpo cpo wrote:
Is anyone using Ovirt for a Windows 10 VDI deployment? If so are you
using a connection broker? If you are what are you using?
Thanks for your time
We use oVirt quite a lot for Windows 10 VDI; 4.4 / EL8 is quite a bit
nicer SPICE-version-wise.
No broker
IIRC oVirt 4.3 should have the basic hooks in place for mdev
passthrough. For NVIDIA this means you need the vGPU drivers and a
license server. These licenses have a recurring cost.
AMD's solution uses SR-IOV and requires a custom kernel module that is
not well tested, YMMV.
You can also pass
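As a rough, hedged sketch of checking a host for mdev (vGPU) support — these are the standard sysfs paths, not anything oVirt-specific, and the PCI address is only an example:
  # Parent devices that expose mediated device (mdev) types appear here
  # once the vendor (e.g. NVIDIA vGPU) driver is loaded:
  ls /sys/class/mdev_bus/
  # Supported mdev types (vGPU profiles) for one parent device:
  ls /sys/class/mdev_bus/0000:3d:00.0/mdev_supported_types/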
On 2020-12-10 15:02, tho...@hoberg.net wrote:
I came to oVirt thinking that it was like CentOS: there might be bugs,
but given the mainline usage in home and corporate labs with light
workloads and nothing special, chances to hit one should be pretty
minor. I like looking for new frontiers atop of
On 2020-12-10 15:47, Charles Kozler wrote:
I guess this is probably a question for all current open source projects that Red Hat runs, but -
Does this mean oVirt will effectively become a rolling release type situation as well?
How exactly is oVirt going to stay open source and stay in cadenc
You have to put all hosts in the cluster into maintenance mode first; then
you can change the compatibility version.
On 2020-12-10 11:09, Gianluca Cecchi wrote:
Hello,
my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS due to megaraid_sas kernel module needed) have been updated, b
On 2020-12-08 14:37, Strahil Nikolov via Users wrote:
Hello All,
I'm really worried about the following news:
https://blog.centos.org/2020/12/future-is-centos-stream/
Has anyone tried to port oVirt to SLES/openSUSE or any Debian-based
distro?
Best Regards,
Strahil Nikolov
On 2020-12-02 09:56, cpo cpo wrote:
Thanks. Trying to figure out if I should use dedup/compression now. If I
don't, is the total usable space for my storage domain, if I follow your
setup guide, going to be 21TB once I am done with everything (3 VM
storage volumes, one per disk)?
Thanks,
Donnie
I'm not sure if I'm missing something, but it seems there is no way built
into oVirt to manage OVN outside of network / subnet creation. In
particular routing both between networks and to external networks.
Of course you have the OVN utilities, but it seems that the provider API
is the preffere
In my experience, the oVirt-optimized defaults are fairly sane. I may
change a few things like enabling read-ahead or increasing the shard
size, but these are minor performance bumps if anything.
The most important thing is the underlying storage: RAID 10 is ideal
performance-wise, large stripe
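For illustration, a hedged example of one of those minor tweaks (the volume name "vmstore" is a placeholder):
  # Enable the read-ahead translator on an existing volume:
  gluster volume set vmstore performance.read-ahead on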
A few things to consider:
what is your RAID situation per host? If you're using mdadm-based soft
RAID, you need to make sure your drives support power-loss data
protection. This is mostly only a feature on enterprise drives.
Essentially it ensures the drives reserve enough energy to flush the
Upgraded an installation to 4.3 and updated the guest agent on all VMs;
now all of my Windows VMs have an exclamation point telling me to
install the latest guest agent. Some parts of the guest agent still seem
to work: the IP addresses are still showing in the portal, but not the FQDN,
and for some ma
I've never had any of these issues... These are my usual Windows steps.
1. Boot the fresh VM into the Windows ISO installer.
2. When I get to the disk screen (blank because I use VirtIO-SCSI), change
the Windows ISO to the oVirt guest tools ISO.
3. Click load drivers, browse, load the VirtIO-SCSI driver from
lites, what other caveats?
-Chris.
On 24/03/2020 19:25, Alex McWhirter wrote:
Red Hat also recommends a shard size of 512MB; it's actually the only
shard size they support. Also check the chunk size on the LVM thin
pools running the bricks; it should be at least 2MB. Note that changing
the
Red Hat also recommends a shard size of 512MB; it's actually the only
shard size they support. Also check the chunk size on the LVM thin pools
running the bricks; it should be at least 2MB. Note that changing the shard
size only applies to new VM disks created after the change. Changing the chunk
size req
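A hedged sketch of checking/applying those values (the volume, VG, and thin-pool names are placeholders):
  # Check and set the shard size on the Gluster volume (affects new disks only):
  gluster volume get vmstore features.shard-block-size
  gluster volume set vmstore features.shard-block-size 512MB
  # Inspect the chunk size of the LVM thin pool backing the brick:
  lvs -o lv_name,chunk_size gluster_vg/gluster_thinpool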
Building a new gluster volume this weekend, trying to optimize it fully
for virt. RHGS states that it supports only a 512MB shard size, so I ask:
why is the default for oVirt 64MB?
I also use libgfapi in prod.
1. This is a pretty annoying issue; I wish engine-config would look to
see if it's already enabled and just keep it that way.
2. Edit /etc/libvirt/qemu.conf and set dynamic ownership to 0, which will stop
the permission changes (see the sketch after this list).
3. I don't see this error on any of my cl
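As a hedged sketch of point 2 (this is the stock libvirt knob, applied per node; back up the file first):
  # /etc/libvirt/qemu.conf
  # Stop libvirt from chown'ing disk images back to root:root:
  dynamic_ownership = 0
Then restart libvirtd on that node, e.g. systemctl restart libvirtd.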
At the moment I am running 4.3, with the latest virt-viewer installed on the client and the latest qxl-dod driver installed on the VM.
Any thoughts on solving video performance and audio redirection?
Thank you again,
Leo
On Mon, Sep 23, 2019, 22:53 Alex McWhirter wrote:
To achieve that all you need
ly they simply "check-out" the Gold Image, update it and check-in, while all users are running...
Fabio
On Mon, Sep 23, 2019 at 8:04 PM Alex McWhirter wrote:
Yes, we do. All SPICE, with some customizations done at the source level to the spice / kvm packages.
On 2019-09-23 13:44, Fabio M
PM Milan Zamazal wrote:
>
>> Alex McWhirter writes:
>>
>>> In this case, i should be able to edit /etc/libvirtd/qemu.conf on all
>>> the nodes to disable dynamic ownership as a temporary measure until
>>> this is patched for libgfapi?
>>
>> N
19 at 12:18 PM Alex McWhirter
wrote:
after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it
up
again will cause it to fail. At th
still be problems when using GlusterFS with libgfapi:
https://bugzilla.redhat.com/1719789.
What's your Vdsm version and which kind of storage do you use?
*Regards,*
*Shani Leviim*
On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter
wrote:
after upgrading from 4.2 to 4.3, after a vm live mig
happen for new VMs as well?
On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter
wrote:
>
> after upgrading from 4.2 to 4.3, after a vm live migrates it's disk
> images are become owned by root:root. Live migration succeeds and the vm
> stays up, but after shutting down the VM from thi
rds,
Strahil Nikolov
On Jun 13, 2019 09:46, Alex McWhirter
wrote:
after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it
up
again wi
SPICE Version:
0.14.0-6.el7_6.1
GlusterFS Version:
[N/A]
On 2019-06-13 06:51, Simone Tiraboschi wrote:
> On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter wrote:
>
>> after upgrading from 4.2 to 4.3, after a vm live migrates it's disk
>> images are become owned by ro
after upgrading from 4.2 to 4.3, after a VM live migrates, its disk
images become owned by root:root. Live migration succeeds and the VM
stays up, but after shutting down the VM from this point, starting it up
again will cause it to fail. At this point I have to go in and change
the permiss
After moving from 4.2 -> 4.3, libvirtd seems to be leaking memory; it
recently crashed a host by eating 123GB of RAM. It seems to follow one
specific VM around. This is the only VM I have created since 4.3; the
others were all made in 4.2 then upgraded to 4.3.
What logs would be applicable?
Libvir
Updated to 4.3.3.7, and the date picker for gluster snapshots appears not to
be working. It won't register clicks, and manually typing in times
doesn't work.
Can anyone else confirm?
Basically I want to take out all of the HDDs in the main gluster pool
and replace them with SSDs.
My thought was to put everything in maintenance, copy the data manually
over to a transient storage server. Destroy the gluster volume, swap in
all the new drives, build a new gluster volume with th
use sd* (multipath device).
>
> thanks,
>
> On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter wrote:
>
> You create the brick on top of the multipath device. Look for one that is the
> same size as the /dev/sd* device that you want to use.
>
> On 2019-04-25 08:
You create the brick on top of the multipath device. Look for one that
is the same size as the /dev/sd* device that you want to use.
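A hedged example of lining the two up (device names will differ):
  # Show multipath maps and the /dev/sd* paths behind each one:
  multipath -ll
  # Or compare sizes directly; the mpath device to put the brick on
  # reports the same size as the /dev/sd* disk you had in mind:
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT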
On 2019-04-25 08:00, Strahil Nikolov wrote:
> In which menu do you see it this way ?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, 24 April 2019 at 8:55:2
Alex McWhirter
wrote:
oVirt is 4.2.7.5
VDSM is 4.20.43
Not sure which logs are applicable; I don't see any obvious errors in
vdsm.log or engine.log. After you delete the desktop VM and create
another based on the template, the new VM still boots, it just reports
disk read errors and fails
24 05:01, Benny Zlotnik wrote:
can you provide more info (logs, versions)?
On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter
wrote:
1. Create server template from server VM (so it's a full copy of the
disk)
2. From the template create a VM, override server to desktop, so that it
becomes a qcow2
1. Create server template from server VM (so it's a full copy of the
disk)
2. From the template create a VM, override server to desktop, so that it
becomes a qcow2 overlay on top of the template's raw disk.
3. Boot VM
4. Shut down VM
5. Delete VM
Template disk is now corrupt; any new machines made from
On 2019-04-22 17:33, adrianquint...@gmail.com wrote:
Found the following and answered part of my own questions; however, I
think this creates a new set of Replica 3 bricks, so if I have 2 hosts
fail from the first 3 hosts, do I lose my hyperconverged setup?
https://access.redhat.com/documentation/en-us/
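For context, expanding an existing volume with a whole new replica-3 brick set typically looks roughly like this (hostnames and brick paths are placeholders, not from the thread):
  # Add one brick per new host as a second replica-3 set, making the
  # volume distributed-replicated:
  gluster volume add-brick vmstore replica 3 \
      host4:/gluster_bricks/vmstore/brick \
      host5:/gluster_bricks/vmstore/brick \
      host6:/gluster_bricks/vmstore/brick
  # Spread existing data across both sets:
  gluster volume rebalance vmstore start
Each replica set still only tolerates losing one of its three bricks, which is exactly the concern raised above.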
On 2019-04-22 14:48, adrianquint...@gmail.com wrote:
Hello,
I have a 3-node hyperconverged setup with gluster and added 3 new
nodes to the cluster for a total of 6 servers.
I am now taking advantage of more compute power but can't scale out my
storage volumes.
Current Hyperconverged setup:
- hos
ve them applied from within UI with
> the "Optimize for VirtStore" button ?
> Thank you!
>
> On Mon, Apr 15, 2019 at 7:39 PM Alex McWhirter wrote:
>
> On 2019-04-14 23:22, Leo David wrote:
> Hi,
> Thank you Alex, I was looking for some optimisation settin
On 2019-04-15 12:58, Alex McWhirter wrote:
> On 2019-04-15 12:43, Darrell Budic wrote: Interesting. Whose 10g cards and
> which offload settings did you disable? Did you do that on the servers or the
> vm host clients or both?
>
> On Apr 15, 2019, at 11:37 AM, Alex McWhi
On 2019-04-15 12:43, Darrell Budic wrote:
> Interesting. Whose 10g cards and which offload settings did you disable? Did
> you do that on the servers or the vm host clients or both?
>
> On Apr 15, 2019, at 11:37 AM, Alex McWhirter wrote:
>
> I went in and disabled
rt or rhev team) validate these
> settings or add some other tweaks as well, so we can use them as standard ?
> Thank you very much again !
>
> On Mon, Apr 15, 2019, 05:56 Alex McWhirter wrote:
>
> On 2019-04-14 20:27, Jim Kusznir wrote:
>
> Hi all:
> I've had
On 2019-04-14 22:47, Alex McWhirter wrote:
> On 2019-04-14 17:07, Strahil Nikolov wrote:
>
>> Some kernels do not like values below 5%, thus I prefer to use
>> vm.dirty_bytes & vm.dirty_background_bytes.
>> Try the following ones (comm
On 2019-04-14 20:27, Jim Kusznir wrote:
> Hi all:
>
> I've had I/O performance problems pretty much since the beginning of using
> oVirt. I've applied several upgrades as time went on, but strangely, none of
> them have alleviated the problem. VM disk I/O is still very slow to the
> point th
irty_bytes = 45000
> It's more like shooting in the dark , but it might help.
>
> Best Regards,
> Strahil Nikolov
>
> On Sunday, 14 April 2019 at 19:06:07 GMT+3, Alex McWhirter
> wrote:
>
> On 2019-04-13 03:15, Strahil wrote:
>> Hi,
>
On 2019-04-14 13:05, Alex McWhirter wrote:
On 2019-04-14 12:07, Alex McWhirter wrote:
On 2019-04-13 03:15, Strahil wrote:
Hi,
What is your dirty cache settings on the gluster servers ?
Best Regards,
Strahil Nikolov
On Apr 13, 2019 00:44, Alex McWhirter
wrote:
I have 8 machines acting as
On 2019-04-14 12:07, Alex McWhirter wrote:
On 2019-04-13 03:15, Strahil wrote:
Hi,
What is your dirty cache settings on the gluster servers ?
Best Regards,
Strahil Nikolov
On Apr 13, 2019 00:44, Alex McWhirter
wrote:
I have 8 machines acting as gluster servers. They each have 12 drives
On 2019-04-13 03:15, Strahil wrote:
Hi,
What is your dirty cache settings on the gluster servers ?
Best Regards,
Strahil Nikolov
On Apr 13, 2019 00:44, Alex McWhirter
wrote:
I have 8 machines acting as gluster servers. They each have 12 drives
raid 50'd together (3 sets of 4 drives r
I have 8 machines acting as gluster servers. They each have 12 drives
RAID 50'd together (3 sets of 4 drives RAID 5'd, then RAID 0'd together as
one).
They connect to the compute hosts and to each other over LACP'd 10GbE
connections split across two Cisco Nexus switches with vPC.
Gluster has the fo
What kind of storage are you using? local?
On 2019-04-05 12:26, John Florian wrote:
> Also, I see in the notification drawer a message that says:
>
> Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not be
> synchronized. To synchronize them, please move them to maintenan
I second this; it's one of the reasons I really dislike hosted engine. I
also see a need for active / passive engine clones to exist, perhaps even
across multiple datacenters. Hosted engine tries to be similar to
VMware's vCenter appliance, but falls short in the HA department. The
best you can get
Real HA is complicated, no way around that...
As stated earlier, we also run the engine on bare metal using Pacemaker /
Corosync / DRBD to keep both nodes in perfect sync; failover happens in
a few seconds. We also do daily backups of the engine, but in the 4
years or so that we have been running oVirt,
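Purely as a hedged sketch of what such a bare-metal active/passive engine setup can look like (the resource names, DRBD device, mount point, and IP are placeholders; the real engine/postgres layout needs more care):
  # Floating IP the engine FQDN resolves to:
  pcs resource create engine_ip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
  # Filesystem on the DRBD device that holds engine state:
  pcs resource create engine_fs ocf:heartbeat:Filesystem \
      device=/dev/drbd0 directory=/var/lib/ovirt-engine fstype=xfs
  # The engine service itself:
  pcs resource create engine_svc systemd:ovirt-engine
  # Keep them together, started in order, on one node at a time:
  pcs resource group add engine_ha engine_ip engine_fs engine_svc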
I don't hit this bug, but even when you click "unattached" you still
have to assign the networks to each host individually. Most people use
network labels for this as you can assign them with one action.
On 2018-12-26 09:54, Leo David wrote:
> Thank you Eitan,
> Anybody, any ideea if is any wor
On 2018-12-20 07:53, Stefan Wolf wrote:
I've mounted it during the hosted-engine --deploy process.
I selected glusterfs
and entered server:/engine.
I didn't enter any mount options.
Yes, it is enabled for both. I don't get errors for the second one, but
maybe it doesn't check after the first fail.
On 2018-12-20 07:14, Stefan Wolf wrote:
Yes, I think this too, but as you see at the top
[root@kvm380 ~]# gluster volume info
...
performance.strict-o-direct: on
...
it was already set
I did a one-cluster setup with oVirt and this is the result
Volume Name: engine
Type: Distribute
Volume ID: a
I've always just used engine-iso-uploader on the engine host to upload
images to the ISO domain, never really noticed that it doesn't "appear"
to be in the GUI. Very rarely do I need to upload ISOs, so I guess it's
just never really been an issue. I know the disk upload GUI options are
for VM HDD
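For reference, a hedged example of that workflow (the ISO domain name and path are placeholders):
  # List ISO storage domains known to the engine:
  engine-iso-uploader list
  # Upload an image into a specific ISO domain:
  engine-iso-uploader --iso-domain=ISO_DOMAIN upload /path/to/image.iso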
You need to set strict direct I/O on the volumes:
performance.strict-o-direct on
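i.e., a hedged example with "engine" standing in for the volume name:
  gluster volume set engine performance.strict-o-direct on
  # Confirm it took effect:
  gluster volume get engine performance.strict-o-direct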
On 2018-12-02 14:07, Darin Schmidt wrote:
Not sure if Users is the best place for this, as I'm using 4.3 to test
support for my AMD 2970WX Threadripper, but while trying to set up a
Windows VM, it fails. I have a working CentOS 7 running. Here's what I
get when I try to start up the VM.
VM Windows-Dar
On 2018-11-30 09:33, Darin Schmidt wrote:
I was curious, I have an AMD Threadripper (2970WX). Do you know where
oVirt is grepping, or what else it uses, to get the info needed to determine the CPU type?
I assume lscpu is possibly where it gets it and it is just matching? I'd
like to be able to test this on a threadrippe
On 2018-11-28 06:56, Lucie Leistnerova wrote:
Hello Alex,
On 11/27/18 8:02 PM, Alex McWhirter wrote:
In the admin interface, if I create a server template and make a VM
out of it I get a Clone/Independent VM. If I use a desktop template I
get a Thin/Dependent one.
In the VM portal I only get
In the admin interface, if I create a server template and make a VM out
of it I get a Clone/Independent VM. If I use a desktop template I get a
Thin/Dependent one.
In the VM portal I only get Thin/Dependent.
How can I change this so that it's always Clone/Independent for certain
templates?
On 2018-11-27 07:47, Marc Le Grand wrote:
Hello
I followed the tutorial regarding vGPU but it's not working; I guess
it's an NVIDIA licence issue, but I need to be sure.
My node is installed using the node ISO image.
I just removed the nouveau driver and installed the NVIDIA one.
My product is:
On 2018-11-25 14:48, Alex McWhirter wrote:
I'm having an odd issue that I find hard to believe could be a bug,
and not some kind of user error, but I'm at a loss for where else to
look.
When booting a Linux ISO with QXL SPICE graphics, the boot hangs as
soon as kernel modesetting kicks in.
I'm having an odd issue that I find hard to believe could be a bug, and
not some kind of user error, but I'm at a loss for where else to look.
When booting a Linux ISO with QXL SPICE graphics, the boot hangs as soon
as kernel modesetting kicks in. Tried with the latest Debian, Fedora, and
CentOS. S
to be bound by the speed of storage, if the host
caches in RAM before sending it over the wire. But that in my opinion is
dangerous and, as far as I know, it's not activated in oVirt; please correct me
if I'm wrong.
/K
Thanks
Unless you're using a caching filesystem like ZFS, you're going to be
limited by how fast your storage back end can actually write to disk. Unless
you have a quite large storage back end, 10GbE is probably faster than your
disks can read and write.
On Sep 7, 2015 4:26 PM, Demeter Tibor wr