Re: [ovirt-users] strange iscsi issue

2015-09-08 Thread Alex McWhirter
of > storage, if the host caches in RAM before sending it over the wire. But > that in my opinion is dangerous and as far as I know, it's not activated > in oVirt, please correct me if I'm wrong. > > /K > > Thanks > > Tibor

Re: [ovirt-users] strange iscsi issue

2015-09-07 Thread Alex McWhirter
Unless you're using a caching filesystem like ZFS, you're going to be limited by how fast your storage back end can actually write to disk. Unless you have a quite large storage back end, 10GbE is probably faster than your disks can read and write. On Sep 7, 2015 4:26 PM, Demeter Tibor

[ovirt-users] Re: migrate hosted-engine vm to another cluster?

2019-01-16 Thread Alex McWhirter
I second this, it's one of the reasons I really dislike hosted engine. I also see a need for active / passive engine clones to exist, perhaps even across multiple datacenters. Hosted engine tries to be similar to VMware's vCenter appliance, but falls short in the HA department. The best you can get

[ovirt-users] Re: multiple engines (active passive)

2019-01-14 Thread Alex McWhirter
Real HA is complicated, no way around that... As stated earlier, we also run the engine bare metal using pacemaker / corosync / drbd to keep both nodes in perfect sync; failover happens in a few seconds. We also do daily backups of the engine, but in the 4 years or so that we have been running

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter
you need to set strict direct I/O on the volumes: performance.strict-o-direct on
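For reference, setting that option from any gluster node looks roughly like this (the volume name "data" is just an example, substitute your own):

  gluster volume set data performance.strict-o-direct on
  gluster volume info data | grep strict-o-direct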

[ovirt-users] Re: Upload via GUI to VMSTORE possible but not ISO Domain

2018-12-20 Thread Alex McWhirter
I've always just used engine-iso-uploader on the engine host to upload images to the ISO domain, never really noticed that it doesn't "appear" to be in the GUI. Very rarely do I need to upload ISOs, so I guess it's just never really been an issue. I know the disk upload GUI options are for VM HDD
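For anyone finding this in the archives, the usage is roughly as below, run on the engine host; the domain name "ISO_DOMAIN" is an example, and the list subcommand prints the real ones:

  engine-iso-uploader list
  engine-iso-uploader upload -i ISO_DOMAIN CentOS-7-x86_64-Minimal.iso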

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter
On 2018-12-20 07:53, Stefan Wolf wrote: I've mounted it during the hosted-engine --deploy process. I selected glusterfs and entered server:/engine. I don't enter any mount options. Yes, it is enabled for both. I don't get errors for the second one, but maybe it doesn't check after the first fail

[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter
On 2018-12-20 07:14, Stefan Wolf wrote: Yes, I think this too, but as you see at the top: [root@kvm380 ~]# gluster volume info ... performance.strict-o-direct: on ... it was already set. I did a one-cluster setup with oVirt, and this is the result: Volume Name: engine Type: Distribute Volume ID:

[ovirt-users] Re: Ovirt Engine UI bug- cannot attache networks to hosts

2018-12-26 Thread Alex McWhirter
I don't hit this bug, but even when you click "unattached" you still have to assign the networks to each host individually. Most people use network labels for this, as you can assign them with one action. On 2018-12-26 09:54, Leo David wrote: > Thank you Eitan, > Anybody, any idea if is any

[ovirt-users] Re: oVirt Node on CentOS 7.5 and AMD EPYC Support

2018-11-30 Thread Alex McWhirter
On 2018-11-30 09:33, Darin Schmidt wrote: I was curious, I have an AMD Threadripper (2970WX). Do you know where oVirt is grepping, or whatever it does, to get the info needed to use the CPU type? I assume lscpu is possibly where it gets it and it is just matching? I'd like to be able to test this on a

[ovirt-users] Re: Ovirt 4.3 Alpha AMD 2970WX Windows VM creation and NUMA

2018-12-02 Thread Alex McWhirter
On 2018-12-02 14:07, Darin Schmidt wrote: Not sure if Users is the best place for this, as I'm using 4.3 to test support for my AMD 2970WX Threadripper, but while trying to set up a Windows VM, it fails. I have a working CentOS 7 running. Here's what I get when I try to start up the VM. VM

[ovirt-users] Re: SPICE QXL Crashes Linux Guests

2018-11-26 Thread Alex McWhirter
On 2018-11-25 14:48, Alex McWhirter wrote: I'm having an odd issue that I find hard to believe could be a bug, and not some kind of user error, but I'm at a loss for where else to look. When booting a Linux ISO with QXL SPICE graphics, the boot hangs as soon as kernel modesetting kicks in. Tried

[ovirt-users] Change Default Behaviour

2018-11-27 Thread Alex McWhirter
In the admin interface, if I create a server template and make a VM out of it I get a Clone/Independent VM. If I use a desktop template I get a Thin/Dependent one. In the VM portal I only get Thin/Dependent. How can I change this so that it's always Clone/Independent for certain templates?

[ovirt-users] Re: vGPU not available in "type mdev"

2018-11-27 Thread Alex McWhirter
On 2018-11-27 07:47, Marc Le Grand wrote: Hello, I followed the tutorial regarding vGPU but it's not working. I guess it's an Nvidia license issue, but I need to be sure. My node is installed using the node ISO image. I just removed the nouveau driver and installed the Nvidia one. My product is

[ovirt-users] Re: Change Default Behaviour

2018-11-28 Thread Alex McWhirter
On 2018-11-28 06:56, Lucie Leistnerova wrote: Hello Alex, On 11/27/18 8:02 PM, Alex McWhirter wrote: In the admin interface, if I create a server template and make a VM out of it I get a Clone/Independent VM. If I use a desktop template I get a Thin/Dependent one. In the VM portal I only get

[ovirt-users] SPICE QXL Crashes Linux Guests

2018-11-25 Thread Alex McWhirter
I'm having an odd issue that I find hard to believe could be a bug, and not some kind of user error, but I'm at a loss for where else to look. When booting a Linux ISO with QXL SPICE graphics, the boot hangs as soon as kernel modesetting kicks in. Tried with latest Debian, Fedora, and CentOS.

[ovirt-users] Re: All hosts non-operational after upgrading from 4.2 to 4.3

2019-04-05 Thread Alex McWhirter
What kind of storage are you using? local? On 2019-04-05 12:26, John Florian wrote: > Also, I see in the notification drawer a message that says: > > Storage domains with IDs [ed4d83f8-41a2-41bd-a0cd-6525d9649edb] could not be > synchronized. To synchronize them, please move them to

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
vm.dirty_bytes = 45000 > It's more like shooting in the dark, but it might help. > > Best Regards, > Strahil Nikolov > > On Sunday, 14 April 2019, 19:06:07 GMT+3, Alex McWhirter > wrote: > > On 2019-04-13 03:15, Strahil wrote: >> Hi, >>
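A sketch of how a dirty-cache setting like the one quoted above would be applied persistently on each gluster server; the file name is arbitrary, and whether the quoted value suits your hardware is a separate question:

  # /etc/sysctl.d/90-gluster-dirty.conf
  vm.dirty_bytes = 45000

  # load it without a reboot
  sysctl -p /etc/sysctl.d/90-gluster-dirty.conf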

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-14 Thread Alex McWhirter
On 2019-04-14 20:27, Jim Kusznir wrote: > Hi all: > > I've had I/O performance problems pretty much since the beginning of using > oVirt. I've applied several upgrades as time went on, but strangely, none of > them have alleviated the problem. VM disk I/O is still very slow to the > point

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
On 2019-04-14 13:05, Alex McWhirter wrote: On 2019-04-14 12:07, Alex McWhirter wrote: On 2019-04-13 03:15, Strahil wrote: Hi, What are your dirty cache settings on the gluster servers? Best Regards, Strahil Nikolov. On Apr 13, 2019 00:44, Alex McWhirter wrote: I have 8 machines acting

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Alex McWhirter
them applied from within the UI with > the "Optimize for VirtStore" button? > Thank you! > > On Mon, Apr 15, 2019 at 7:39 PM Alex McWhirter wrote: > > On 2019-04-14 23:22, Leo David wrote: > Hi, > Thank you Alex, I was looking for some optimisation settings as

[ovirt-users] Re: Tuning Gluster Writes

2019-04-15 Thread Alex McWhirter
On 2019-04-15 12:58, Alex McWhirter wrote: > On 2019-04-15 12:43, Darrell Budic wrote: Interesting. Whose 10g cards and > which offload settings did you disable? Did you do that on the servers or the > VM host clients, or both? > > On Apr 15, 2019, at 11:37 AM, Alex McWhirter

[ovirt-users] Re: Poor I/O Performance (again...)

2019-04-15 Thread Alex McWhirter
oVirt or RHEV team) validate these > settings or add some other tweaks as well, so we can use them as standard? > Thank you very much again! > > On Mon, Apr 15, 2019, 05:56 Alex McWhirter wrote: > > On 2019-04-14 20:27, Jim Kusznir wrote: > > Hi all: > I've had

[ovirt-users] Re: Tuning Gluster Writes

2019-04-15 Thread Alex McWhirter
On 2019-04-15 12:43, Darrell Budic wrote: > Interesting. Whose 10g cards and which offload settings did you disable? Did > you do that on the servers or the VM host clients, or both? > > On Apr 15, 2019, at 11:37 AM, Alex McWhirter wrote: > > I went in and disabled
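For the archive, disabling NIC offloads generally looks like the sketch below; the interface name and the exact set of offloads are assumptions, since the message above doesn't list which ones were turned off:

  # show current offload settings
  ethtool -k ens2f0
  # disable common segmentation/receive offloads (illustrative set)
  ethtool -K ens2f0 tso off gso off gro off lro off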

[ovirt-users] Tuning Gluster Writes

2019-04-12 Thread Alex McWhirter
I have 8 machines acting as gluster servers. They each have 12 drives raid 50'd together (3 sets of 4 drives raid 5'd then 0'd together as one). They connect to the compute hosts and to each other over LACP'd 10GbE connections split across two Cisco Nexus switches with VPC. Gluster has the

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
On 2019-04-13 03:15, Strahil wrote: Hi, What are your dirty cache settings on the gluster servers? Best Regards, Strahil Nikolov. On Apr 13, 2019 00:44, Alex McWhirter wrote: I have 8 machines acting as gluster servers. They each have 12 drives raid 50'd together (3 sets of 4 drives raid 5

[ovirt-users] libvirt memory leak?

2019-06-01 Thread Alex McWhirter
After moving from 4.2 -> 4.3, libvirtd seems to be leaking memory; it recently crashed a host by eating 123GB of RAM. Seems to follow one specific VM around. This is the only VM I have created since 4.3, the others were all made in 4.2 then upgraded to 4.3. What logs would be applicable?

[ovirt-users] 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
After upgrading from 4.2 to 4.3, after a VM live migrates its disk images become owned by root:root. Live migration succeeds and the VM stays up, but after shutting down the VM from this point, starting it up again will cause it to fail. At this point I have to go in and change the
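The manual fix alluded to at the end is resetting ownership back to vdsm:kvm, which is uid/gid 36 on oVirt hosts; the path below is a placeholder, not the exact one from this report:

  chown -R 36:36 /rhev/data-center/mnt/<storage>/<sd-uuid>/images/<img-uuid>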

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
PM Milan Zamazal wrote: > >> Alex McWhirter writes: >> >>> In this case, I should be able to edit /etc/libvirt/qemu.conf on all >>> the nodes to disable dynamic ownership as a temporary measure until >>> this is patched for libgfapi? >>
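The temporary measure under discussion, sketched out; note this disables dynamic ownership for all domains on the host, so treat it as a stopgap:

  # /etc/libvirt/qemu.conf
  dynamic_ownership = 0

  systemctl restart libvirtd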

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
Regards, Strahil Nikolov. On Jun 13, 2019 09:46, Alex McWhirter wrote: After upgrading from 4.2 to 4.3, after a VM live migrates its disk images become owned by root:root. Live migration succeeds and the VM stays up, but after shutting down the VM from this point, starting it up again will ca

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
SPICE Version: 0.14.0 - 6.el7_6.1 GlusterFS Version: [N/A] On 2019-06-13 06:51, Simone Tiraboschi wrote: > On Thu, Jun 13, 2019 at 11:18 AM Alex McWhirter wrote: > >> after upgrading from 4.2 to 4.3, after a VM live migrates its disk >> images become owned by root:ro

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-13 Thread Alex McWhirter
Does this happen for new VMs as well? On Thu, Jun 13, 2019 at 12:15 PM Alex McWhirter wrote: > > after upgrading from 4.2 to 4.3, after a VM live migrates its disk > images become owned by root:root. Live migration succeeds and the VM > stays up, but after shutting down the VM from this poi

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
at 12:18 PM Alex McWhirter wrote: After upgrading from 4.2 to 4.3, after a VM live migrates its disk images become owned by root:root. Live migration succeeds and the VM stays up, but after shutting down the VM from this point, starting it up again will cause it to fail. At this point I

[ovirt-users] Migrating Domain Storage Gluster

2019-05-09 Thread Alex McWhirter
Basically I want to take out all of the HDDs in the main gluster pool and replace them with SSDs. My thought was to put everything in maintenance, copy the data manually over to a transient storage server, destroy the gluster volume, swap in all the new drives, and build a new gluster volume with

[ovirt-users] Gluster Snapshot Datepicker Not Working?

2019-05-10 Thread Alex McWhirter
Updated to 4.3.3.7, and the date picker for gluster snapshots appears to not be working? It won't register clicks, and manually typing in times doesn't work. Can anyone else confirm?

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter
On 2019-04-22 14:48, adrianquint...@gmail.com wrote: Hello, I have a 3-node hyperconverged setup with gluster and added 3 new nodes to the cluster for a total of 6 servers. I am now taking advantage of more compute power but can't scale out my storage volumes. Current hyperconverged setup: -

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-22 Thread Alex McWhirter
On 2019-04-22 17:33, adrianquint...@gmail.com wrote: Found the following and answered part of my own questions; however, I think this sets up a new set of Replica 3 bricks, so if I have 2 hosts fail from the first 3 hosts then I lose my hyperconverged setup?

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
Alex McWhirter wrote: oVirt is 4.2.7.5, VDSM is 4.20.43. Not sure which logs are applicable; I don't see any obvious errors in vdsm.log or engine.log. After you delete the desktop VM and create another based on the template, the new VM still boots, it just reports disk read errors and fails to boot

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
You create the brick on top of the multipath device. Look for one that is the same size as the /dev/sd* device that you want to use. On 2019-04-25 08:00, Strahil Nikolov wrote: > In which menu do you see it this way? > > Best Regards, > Strahil Nikolov > > On Wednesday, 24 April 2019,

[ovirt-users] Re: Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster

2019-04-25 Thread Alex McWhirter
multipath device). > > Thanks, > > On Thu, Apr 25, 2019 at 8:41 AM Alex McWhirter wrote: > > You create the brick on top of the multipath device. Look for one that is the > same size as the /dev/sd* device that you want to use. > > On 2019-04-25 08:00, Strahi

[ovirt-users] Re: Tuning Gluster Writes

2019-04-14 Thread Alex McWhirter
On 2019-04-14 12:07, Alex McWhirter wrote: On 2019-04-13 03:15, Strahil wrote: Hi, What are your dirty cache settings on the gluster servers? Best Regards, Strahil Nikolov. On Apr 13, 2019 00:44, Alex McWhirter wrote: I have 8 machines acting as gluster servers. They each have 12 drives

[ovirt-users] Template Disk Corruption

2019-04-24 Thread Alex McWhirter
1. Create server template from server VM (so it's a full copy of the disk) 2. From template create a VM, override server to desktop, so that it becomes a qcow2 overlay to the template's raw disk. 3. Boot VM 4. Shutdown VM 5. Delete VM. Template disk is now corrupt; any new machines made

[ovirt-users] Re: Template Disk Corruption

2019-04-24 Thread Alex McWhirter
:01, Benny Zlotnik wrote: can you provide more info (logs, versions)? On Wed, Apr 24, 2019 at 11:04 AM Alex McWhirter wrote: 1. Create server template from server VM (so it's a full copy of the disk) 2. From template create a VM, override server to desktop, so that it becomes a qcow2 overlay

[ovirt-users] Re: VDI

2019-09-23 Thread Alex McWhirter
"check-out" the Gold Image, update it and check-in, while all users are running... Fabio On Mon, Sep 23, 2019 at 8:04 PM Alex McWhirter wrote: yes, we do. All spice, with some customizations done at source level for spice / kvm packages. On 2019-09-23 13:44, Fabio Marzocca wrote:

[ovirt-users] Re: 4.3 live migration creates wrong image permissions.

2019-06-14 Thread Alex McWhirter
still be problems when using GlusterFS with libgfapi: https://bugzilla.redhat.com/1719789. What's your Vdsm version and which kind of storage do you use? *Regards,* *Shani Leviim* On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter wrote: after upgrading from 4.2 to 4.3, after a vm live migrates

[ovirt-users] Re: Libgfapi considerations

2019-12-16 Thread Alex McWhirter
I also use libgfapi in prod. 1. This is a pretty annoying issue; I wish engine-config would look to see if it is already enabled and just keep it that way. 2. Edit /etc/libvirt/qemu.conf and set dynamic ownership to 0, which will stop the permission changes. 3. I don't see this error on any of my
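On point 1, re-enabling libgfapi after an upgrade is done with engine-config, roughly as below; the --cver value depends on your cluster compatibility level, so check before copying:

  engine-config -s LibgfApiSupported=true --cver=4.3
  systemctl restart ovirt-engine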

[ovirt-users] Re: VDI

2019-10-06 Thread Alex McWhirter
virt-viewer installed on the client, and latest qxl-dod driver installed on the VM. Any thoughts on solving video performance and audio redirection? Thank you again, Leo. On Mon, Sep 23, 2019, 22:53 Alex McWhirter wrote: To achieve that, all you need to do is create a template of the deskto

[ovirt-users] gluster shard size

2020-01-24 Thread Alex McWhirter
Building a new gluster volume this weekend, trying to optimize it fully for virt. RHGS states that it supports only a 512MB shard size, so I ask: why is the default for oVirt 64MB?
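For context, the shard size is an ordinary per-volume gluster option; it only affects files created after the change, so it's normally set before putting disks on the volume (the volume name "data" is an example):

  gluster volume set data features.shard-block-size 512MB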

[ovirt-users] Windows Guest Agent Issues Since 4.3

2020-04-06 Thread Alex McWhirter
Upgraded an installation to 4.3 and updated the guest agent on all VMs; now all of my Windows VMs have an exclamation point telling me to install the latest guest agent. Some parts of the guest agent still seem to work, the IP addresses are still showing in the portal, but not FQDN, and for some

[ovirt-users] Re: Speed Issues

2020-03-27 Thread Alex McWhirter
, what other caveats? -Chris. On 24/03/2020 19:25, Alex McWhirter wrote: Red Hat also recommends a shard size of 512MB; it's actually the only shard size they support. Also check the chunk size on the LVM thin pools running the bricks, it should be at least 2MB. Note that changing the shard size

[ovirt-users] Re: Windows VirtIO drivers

2020-04-02 Thread Alex McWhirter
I've never had any of these issues... These are my usual Windows steps. 1. Boot fresh VM into the Windows ISO image installer. 2. When I get to the disk screen (blank because I use VirtIO-SCSI), change the Windows ISO to the oVirt guest tools ISO. 3. Click load drivers, browse, load the VirtIO-SCSI driver from

[ovirt-users] Re: Speed Issues

2020-03-24 Thread Alex McWhirter
Red Hat also recommends a shard size of 512MB; it's actually the only shard size they support. Also check the chunk size on the LVM thin pools running the bricks, it should be at least 2MB. Note that changing the shard size only applies to new VM disks after the change. Changing the chunk size
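Since the chunk size is fixed when a thin pool is created, checking and (re)creating look roughly like this; the VG/LV names and sizes are examples:

  # inspect the chunk size of existing thin pools
  lvs -o lv_name,chunksize vg_gluster
  # create a new thin pool with a 2MB chunk size
  lvcreate --type thin-pool -L 1T --chunksize 2m -n pool0 vg_gluster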

[ovirt-users] Re: How to make oVirt + GlusterFS bulletproof

2020-10-09 Thread Alex McWhirter
A few things to consider: what is your RAID situation per host? If you're using mdadm-based soft RAID, you need to make sure your drives support power loss data protection. This is mostly only a feature on enterprise drives. Essentially it ensures the drives reserve enough energy to flush

[ovirt-users] Re: Improve glusterfs performance

2020-10-21 Thread Alex McWhirter
In my experience, the oVirt-optimized defaults are fairly sane. I may change a few things like enabling read-ahead or increasing the shard size, but these are minor performance bumps if anything. The most important thing is the underlying storage; RAID 10 is ideal performance-wise, large

[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Alex McWhirter
On 2020-12-08 14:37, Strahil Nikolov via Users wrote: Hello All, I'm really worried about the following news: https://blog.centos.org/2020/12/future-is-centos-stream/ Did anyone tried to port oVirt to SLES/openSUSE or any Debian-based distro ? Best Regards, Strahil Nikolov

[ovirt-users] Re: new hyperconverged setup

2020-12-02 Thread Alex McWhirter
On 2020-12-02 09:56, cpo cpo wrote: Thanks. Trying to figure out if I should use dedup/comp now. If I don't, is the total usable space for my storage domain going to be 21TB if I follow your setup guide, once I am done with everything (3 VM storage volumes, one per disk)? Thanks, Donnie

[ovirt-users] Re: CentOS 8 is dead

2020-12-10 Thread Alex McWhirter
On 2020-12-10 15:02, tho...@hoberg.net wrote: I came to oVirt thinking that it was like CentOS: there might be bugs, but given the mainline usage in home and corporate labs with light workloads and nothing special, chances to hit one should be pretty minor. I like looking for new frontiers atop

[ovirt-users] Re: Nodes in CentOS 8.3 and oVirt 4.4.3.12-1.el8 but not able to update cluster version

2020-12-10 Thread Alex McWhirter
You have to put all hosts in the cluster into maintenance mode first; then you can change the compatibility version. On 2020-12-10 11:09, Gianluca Cecchi wrote: Hello, my engine is 4.4.3.12-1.el8 and my 3 oVirt nodes (based on plain CentOS due to megaraid_sas kernel module needed) have been updated,

[ovirt-users] Re: Recent news & oVirt future

2020-12-10 Thread Alex McWhirter
On 2020-12-10 15:47, Charles Kozler wrote: I guess this is probably a question for all current open source projects that red hat runs but - Does this mean oVirt will effectively become a rolling release type situation as well? How exactly is oVirt going to stay open source and stay in

[ovirt-users] How do you manage OVN?

2020-11-19 Thread Alex McWhirter
I'm not sure if I'm missing something, but it seems there is no way built into oVirt to manage OVN outside of network / subnet creation, in particular routing both between networks and to external networks. Of course you have the OVN utilities, but it seems that the provider API is the

[ovirt-users] Re: vGPU on ovirt 4.3

2021-01-25 Thread Alex McWhirter
IIRC oVirt 4.3 should have the basic hooks in place for mdev passthrough. For Nvidia this means you need the vGPU drivers and a license server; these licenses have a recurring cost. AMD's solution uses SR-IOV and requires a custom kernel module that is not well tested, YMMV. You can also

[ovirt-users] Re: oVirt + Gluster issues

2021-06-08 Thread Alex McWhirter
I've run into a similar problem when using VDO + LVM + XFS stacks, also with ZFS. If you're trying to use ZFS on 4.4, my recommendation is don't. You have to run the testing branch at minimum, and quite a few things just don't work. As for VDO, I ran into this issue when using VDO and a

[ovirt-users] 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter
I've been wrestling with this all night, digging through various bits of VDSM code trying to figure out why and how this is happening. I need to make some templates, but I simply can't. VDSM command HSMGetAllTasksStatusesVDS failed: value=low level Image copy failed: ("Command

[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter
enabled, on oVirt 4.4.4+ (possibly earlier is also affected). If this is something current qemu-img cannot handle, I don't think supporting sparse disks on sharded gluster volumes is wise. On 2021-03-08 01:06, Alex McWhirter wrote: This actually looks to be related to sharding. Doing a strace

[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter
and use qemu-img to overwrite that new unsharded file, there are no issues. On 2021-03-07 18:33, Nir Soffer wrote: On Sun, Mar 7, 2021 at 1:14 PM Alex McWhirter wrote: I've been wrestling with this all night, digging through various bits of VDSM code trying to figure why and how
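Spelling out the by-hand test referred to in this sub-thread (the paths are placeholders for files on the gluster fuse mount; -p just shows progress):

  qemu-img convert -p -f raw -O raw /mnt/gluster/src.img /mnt/gluster/dst-unsharded.img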

[ovirt-users] Re: 4.4.4 Image Copying / Template Create Fails - No Such File

2021-03-07 Thread Alex McWhirter
I see; yes, running the command by hand results in the same error. Gluster version is 8.3 (I upgraded to see if 8.4.5 would result in a different outcome). Previously it was gluster 7.9, same issue either way. On 2021-03-07 18:33, Nir Soffer wrote: On Sun, Mar 7, 2021 at 1:14 PM Alex

[ovirt-users] Re: Public IP routing question

2021-03-08 Thread Alex McWhirter
You can route it to a private address on your router if you want... We use EVPN/VXLAN (but regular old VLANs work too). Just put the public space on a VLAN, and add it as a VLAN-tagged network in oVirt. Only your public-facing VMs need addresses in the space. On 2021-03-08 05:53, David White

[ovirt-users] Re: VDI and ovirt

2021-02-23 Thread Alex McWhirter
On 2021-02-23 07:39, cpo cpo wrote: Is anyone using oVirt for a Windows 10 VDI deployment? If so, are you using a connection broker? If you are, what are you using? Thanks for your time. We use oVirt quite a lot for Windows 10 VDI; 4.4 / EL8 is quite a bit nicer, SPICE-version-wise. No

[ovirt-users] Re: Add nodes to single node gluster hyperconverged

2021-08-27 Thread Alex McWhirter
On 2021-08-27 13:24, Thomas Hoberg wrote: I'd rather doubt the GUI would help you there and what's worse, the GUI doesn't easily tell you what it tries to do. By the time you've found and understood what it tries from the logfiles, you'd have it done on your own. It's an unfortunate thing

[ovirt-users] Re: oVirt and the future

2021-08-27 Thread Alex McWhirter
On 2021-08-27 13:09, Thomas Hoberg wrote: Ubuntu support: I feel ready to bet a case of beer, that that won't happen. I'd tend to agree with this. oVirt embeds itself so deep into the RHEL architecture that moving to anything else that doesn't provide the same provisions will be a huge

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-17 Thread Alex McWhirter
On 2021-11-17 12:02, notify.s...@gmail.com wrote: Hi All, I'm very stumped on how to create VMs from templates I've made, but having them installed with their own disks. Please can someone guide me on how to do this? I have oVirt running, with local storage hypervisors. Anytime I try to use a

[ovirt-users] Re: Creating VMs from templates with their own disks

2021-11-17 Thread Alex McWhirter
On 2021-11-17 13:50, Sina Owolabi wrote: Ok, thanks. Sounds odd, but no problem. How do I make the new VM use its own disk, named after itself? On Wed, 17 Nov 2021 at 19:45, Alex McWhirter wrote: On 2021-11-17 12:02, notify.s...@gmail.com wrote: Hi All, I'm very stumped on how to create

[ovirt-users] Re: The Engine VM (/32) and this host (/32) will not be in the same IP subnet.

2021-10-27 Thread Alex McWhirter
On 2021-10-27 16:09, Sina Owolabi wrote: It's really weird. Just tried again, with the same failure, on a freshly reinstalled CentOS 8. The server has a number of VLAN interfaces on a physical interface enp2s0f1, all in the defined notation; one VLAN interface has an IP, 10.200.10.3/23. Second

[ovirt-users] Re: Cannot to update hosts, nothing provides libvirt-daemon-kvm >= 7.6.0-2 needed by vdsm-4.40.90.4-1.el8.x86_64

2021-11-03 Thread Alex McWhirter
On 2021-11-03 16:52, Patrick Lomakin wrote: I think it's a bug. I couldn't find any rpm "libvirt-daemon-kvm" package in the CentOS or oVirt repos (only libvirt-daemon-kvm 7.0.0). Try to use the --nobest flag to install updates.

[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
Additionally, should this go forward, I would be interested in maintaining a 3rd-party repo with patched packages to keep SPICE/QXL support, if anyone else would like to join. On 2021-12-07 10:08, Alex McWhirter wrote: I've sent my concerns to Red Hat; this would force us to look at other

[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
It's being removed from RHEL 9, unsure of the reasoning. This means that oVirt cannot offer SPICE/QXL on RHEL 9: there is no spice package, qemu is compiled without SPICE/QXL support, and the kernel does not support QXL video drivers. It's not that oVirt is killing off SPICE/QXL, but rather RHEL 9

[ovirt-users] Re: no QXL ?

2021-12-07 Thread Alex McWhirter
I've sent my concerns to Red Hat; this would force us to look at other software and more than likely no longer be Red Hat customers. On 2021-12-07 09:15, Neal Gompa wrote: On Tue, Dec 7, 2021 at 8:49 AM Rik Theys wrote: Hi, Will SPICE be deprecated or fully removed in oVirt 4.5? Since

[ovirt-users] Re: oVirt alternatives

2022-02-05 Thread Alex McWhirter
Oh, I have spent years looking. Proxmox is probably the closest option, but has no multi-clustering support. The clusters are more or less isolated from each other, and you would need another layer if you needed the ability to migrate between them. XCP-ng, cool. No SPICE support. No UI for

[ovirt-users]Re: About oVirt’s future

2022-11-21 Thread Alex McWhirter
I have some manpower I'm willing to throw at oVirt, but I somewhat need to know whether what the community wants and what we want are in line. 1. We'd bring back SPICE and maybe QXL. We are already maintaining forks of the oVirt and RH kernels for this. We use oVirt currently for a lot of VDI

[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
We still have a few oVirt and RHV installs kicking around, but between this and some core features we use being removed from el8/9 (gluster, spice / qxl, and probably others soon at this rate) we've heavily been shifting gears away from both Red Hat and oVirt. Not to mention the recent

[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-20 Thread Alex McWhirter
the pieces, I don't have the resources to also maintain the growing list of deprecated / cut features in the base OS. On 2023-07-14 02:27, Sandro Bonazzola wrote: On Fri, Jul 14, 2023 at 00:07, Alex McWhirter wrote: I would personally put CloudStack in the same category

[ovirt-users] Re: What is the status of the whole Ovirt Project?

2023-07-13 Thread Alex McWhirter
ation is to aid in organization modernization as a way to consolidate workloads onto a single platform while giving app dev time to migrate their work to containers and microservice-based deployments." BR, Konstantin. On 13.07.23, 09:10, "Alex McWhirter" mailto:a...@tria wrote: