[ovirt-users] Re: Importing VM from Xen Server 7.1

2021-08-27 Thread Vinícius Ferrão via Users
Hi Francesco, I was never able to achieve this migration that way. After many hours of trying I just gave up, used virt-p2v, and treated all the VMs from XenServer as physical servers. xen+ssh AFAIK does not work correctly with XAPI (the Xen API), which is what XenServer uses. Sent from my

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
perl-Term-ANSIColor yum --enablerepo baseos --enablerepo appstream install perl-Getopt-Long tcl gcc-gfortran tcsh tk make ./mlnxinstall On Thu, Aug 5, 2021 at 3:32 PM Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hi Edward, it seems that running mlnxofedinstall would do the job. Althoug
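A consolidated sketch of the install sequence quoted in this thread, assuming a RHEL 8 based host with the BaseOS/AppStream repos enabled and the Mellanox OFED bundle already unpacked (repo IDs and the installer script name can vary per release, so treat this as illustrative only):

    yum --enablerepo baseos --enablerepo appstream install perl-Term-ANSIColor perl-Getopt-Long tcl tk tcsh gcc-gfortran make
    ./mlnxofedinstall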

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
on the script? I couldn't find the difference between enabling it or not. Thank you. On 5 Aug 2021, at 15:20, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your idea. I'll do this test right now and repor

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão
, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello, Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other? The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I would like to know if there's something that we ca

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
>> wrote: As far as I know rdma is deprecated on glusterfs, but it most probably works. Best Regards, Strahil Nikolov On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello, Is there a way to keep Mellanox OFED and oVirt/RHV playing nice wit

[ovirt-users] Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-04 Thread Vinícius Ferrão via Users
Hello, Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other? The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I would like to know if there's something that we can do to make both play nice on the same machine: [root@rhvepyc2 ~]# dnf update

[ovirt-users] Re: Host not becoming active due to VDSM failure

2021-08-03 Thread Vinícius Ferrão via Users
As a followup to the mailing list. Updating the machine solved this issue. But the bugzilla still applies since it was blocking the upgrade. Thank you all. On 2 Aug 2021, at 13:22, Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Ales, Nir. Sorry for the delayed ans

[ovirt-users] Re: Host not becoming active due to VDSM failure

2021-08-02 Thread Vinícius Ferrão via Users
hat.com> wrote: On Fri, Jul 30, 2021 at 7:41 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: ... > restore-net::ERROR::2021-07-30 > 12:34:56,167::restore_net_config::462::root::(restore) restoration failed. > Traceback (most recent call last): > File "/

[ovirt-users] Host not becoming active due to VDSM failure

2021-07-30 Thread Vinícius Ferrão via Users
Hello, I have a host that's failing to bring up VDSM, the logs don't say anything specific, but there's a Python error about DHCP on it. Is there anyone with a similar issue? [root@rhvpower ~]# systemctl status vdsmd ● vdsmd.service - Virtual Desktop Server Manager Loaded: loaded
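A generic diagnostic sketch for a host where vdsmd will not start (not taken from the thread; these are simply the standard VDSM service names and log locations):

    systemctl status vdsmd supervdsmd
    journalctl -u vdsmd -u supervdsmd --since "1 hour ago"
    tail -n 200 /var/log/vdsm/vdsm.log /var/log/vdsm/supervdsm.log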

[ovirt-users] Re: LACP across multiple switches

2021-07-27 Thread Vinícius Ferrão via Users
Yes. I have it running this way. You must configure it as 802.3ad normally on oVirt, but keep in mind that you must use bond and not teaming. On the switches just configure MLAG, VLT, vPC or whatever multichassis aggregation is supported by your switch vendor. For the ovirtmgmt there’s some caveats
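A minimal host-side sketch of that bond, assuming hypothetical interface names eno1/eno2 and that the bond is created with NetworkManager (oVirt's Setup Host Networks dialog produces an equivalent result); the switch pair must present the two ports as a single LACP port-channel via MLAG/VLT/vPC:

    nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli con add type bond-slave ifname eno1 master bond0
    nmcli con add type bond-slave ifname eno2 master bond0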

[ovirt-users] Re: Slow VM replication

2021-04-30 Thread Vinícius Ferrão via Users
As far as I know FreeNAS supports VMware-like snapshots but not the language that oVirt speaks. Another point to observe is that FreeNAS with RAID-Z3 is not recommended for VM storage, because it is just slow for this purpose. Usually NFS issues sync requests which will be slow due to its

[ovirt-users] Re: Migrate windows 2003 server 64bits from libvirt to ovirt

2021-02-22 Thread Vinícius Ferrão via Users
Hi Fernando. The blue screen message is in Portuguese, and the majority of the list speaks English, so it will be hard to get some help on this. Regarding the message, for non-Portuguese speakers, it says that the BIOS and/or the firmware isn’t compatible with ACPI. Since the OS is legacy, this

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2021-01-14 Thread Vinícius Ferrão via Users
Wed, Dec 2, 2020 at 10:42 AM Vinícius Ferrão via Users <users@ovirt.org> wrote: Can this be related to the case? https://bugzilla.redhat.com/show_bug.cgi?id=810082 On 1 Dec 2020, at 10:25, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: ECC RAM everywhere: hosts

[ovirt-users] Re: Shrink iSCSI Domain

2020-12-29 Thread Vinícius Ferrão via Users
> > > On Monday, 28 December 2020 at 18:27:38 GMT+2, Vinícius Ferrão via > Users wrote: > > > > > > Hi Shani, thank you! > > > > It’s only one LUN :( > > > > > So it may be a best practice to split an SD in multipl

[ovirt-users] Re: Shrink iSCSI Domain

2020-12-28 Thread Vinícius Ferrão via Users
[1] https://www.ovirt.org/develop/release-management/features/storage/reduce-luns-from-sd.html Regards, Shani Leviim On Sun, Dec 27, 2020 at 8:16 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello, Is there any way to reduce the size of an iSCSI Storage Domain? I can’t seem to

[ovirt-users] Shrink iSCSI Domain

2020-12-27 Thread Vinícius Ferrão via Users
Hello, Is there any way to reduce the size of an iSCSI Storage Domain? I can’t seem to figure this out myself. It’s probably unsupported, and the path would be to create a new iSCSI Storage Domain with the reduced size, move the virtual disks there, and then delete the old one. But I would like

[ovirt-users] Re: CentOS 8 is dead

2020-12-25 Thread Vinícius Ferrão via Users
Oracle took that college meme, "just change the variable names", too seriously. > On 25 Dec 2020, at 16:35, James Loker-Steele via Users > wrote: > > Yes. > We use OEL and have set up Oracle's branded oVirt as well as test oVirt on > Oracle and it works a treat. > > > Sent from my iPhone >

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-12-12 Thread Vinícius Ferrão via Users
"AUTO" negotiated option. 1) I put the storage domain in maintainance mode. 2) Changed it to NFS v3 and remove it from the maintanance mode. and Boom everything came back to normal. You can check if that workaround will work for you. On Wed, Dec 2, 2020 at 10:42 AM Vinícius Ferrão via Use

[ovirt-users] Re: CentOS 8 is dead

2020-12-08 Thread Vinícius Ferrão via Users
CentOS Stream is unstable at best. I’ve used it recently and it was just a mess. There’s no binary compatibility with the current point release and there’s no version pinning. So it will be really difficult to keep track of things. I’m really curious how oVirt will handle this. From: Wesley

[ovirt-users] Re: difference between CPU server and client family

2020-12-08 Thread Vinícius Ferrão via Users
AFAIK Client is for the i3/i5/i7/i9 families and the other one is for Xeon platforms. But you have a pretty unusual Xeon, so it may be missing some flags that would properly classify the CPU. You can run this on the host to check what’s detected: [root]# vdsm-client Host getCapabilities Sent
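For reference, the same check narrowed to the relevant fields; this is the filter that also appears in the "Missing model_FLAGS" thread further down:

    vdsm-client Host getCapabilities | egrep "cpuFlags|cpuModel"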

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-12-01 Thread Vinícius Ferrão via Users
Can this be related to the case? https://bugzilla.redhat.com/show_bug.cgi?id=810082 On 1 Dec 2020, at 10:25, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: ECC RAM everywhere: hosts and storage. I even ran Memtest86 on both hypervisor hosts just to be sure. No errors. I haven

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-12-01 Thread Vinícius Ferrão via Users
On Tuesday, 1 December 2020 at 06:17:10 GMT+2, Vinícius Ferrão via Users > wrote: > > > > > > > Hi again, > > > > I had to shut down everything because of a power outage in the office. When > trying to get the infra up again, even the Engine had corrupte

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-30 Thread Vinícius Ferrão via Users
Damn... You are using EFI boot. Does this happen only to EFI machines? Did you notice if only EL 8 is affected? Best Regards, Strahil Nikolov On Sunday, 29 November 2020 at 19:36:09 GMT+2, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Yes! I have a liv

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Vinícius Ferrão via Users
Nikolov > > > > > > > On Sunday, 29 November 2020 at 19:36:09 GMT+2, Vinícius Ferrão > wrote: > > > > > > Yes! > > I have a live VM right now that will be dead on a reboot: > > [root@kontainerscomk ~]# cat /etc/*release > N

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Vinícius Ferrão via Users
e=winnt 0 2 /dev/mapper/rhel-swap none swap defaults 0 0 Thanks, -Original Message- From: Strahil Nikolov Sent: Sunday, November 29, 2020 2:33 PM To: Vinícius Ferrão Cc: users Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-29 Thread Vinícius Ferrão via Users
> Strahil Nikolov > > > > > > > On Saturday, 28 November 2020 at 19:25:48 GMT+2, Vinícius Ferrão > wrote: > > > > > > Hi Strahil, > > I moved a running VM to another host, rebooted, and no corruption was found. If > there's any corrup

[ovirt-users] Re: Constantly XFS in memory corruption inside VMs

2020-11-28 Thread Vinícius Ferrão via Users
anything when configuring this cluster. Thanks. -Original Message- From: Strahil Nikolov Sent: Saturday, November 28, 2020 1:54 PM To: users ; Vinícius Ferrão Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside VMs Can you try with a test vm, if this happens after

[ovirt-users] Constantly XFS in memory corruption inside VMs

2020-11-28 Thread Vinícius Ferrão via Users
Hello, I'm trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs. For random reasons VMs get corrupted, sometimes halting or just being silently corrupted, and after a reboot the system is

[ovirt-users] Re: EPYC CPU not being detected correctly on cluster

2020-11-25 Thread Vinícius Ferrão via Users
Jelinkova Sent: Monday, November 23, 2020 6:25 AM To: Vinícius Ferrão Cc: users Subject: Re: [ovirt-users] EPYC CPU not being detected correctly on cluster Hi Vinícius, Thank you for the libvirt output - libvirt marked the EPYC CPU as not usable. Let's query qemu why that is. You do not need

[ovirt-users] Re: EPYC CPU not being detected correctly on cluster

2020-11-20 Thread Vinícius Ferrão via Users
, 2020 5:30 AM To: Vinícius Ferrão Cc: users Subject: Re: [ovirt-users] EPYC CPU not being detected correctly on cluster Hi, oVirt CPU detection depends on libvirt (and that depends on qemu) CPU models. Could you please run the following command to see what libvirt reports? virsh domcapabilities

[ovirt-users] EPYC CPU not being detected correctly on cluster

2020-11-19 Thread Vinícius Ferrão via Users
Hi, I've a strange issue with two hosts (not using the hypervisor image) with EPYC CPUs; on the engine I got this message: The host CPU does not match the Cluster CPU Type and is running in a degraded mode. It is missing the following CPU flags: model_EPYC. Please update the host CPU

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-22 Thread Vinícius Ferrão via Users
> Best Regards, > Strahil Nikolov > > > > > > > On Tuesday, 22 September 2020 at 10:08:44 GMT+3, Vinícius Ferrão > wrote: > > > > > > Hi Strahil, yes I can’t find anything recently either. You dug way further > than me, I found some

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-22 Thread Vinícius Ferrão via Users
Hi Gianluca. On 22 Sep 2020, at 04:24, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote: On Tue, Sep 22, 2020 at 9:12 AM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Strahil, yes I can’t find anything recently either. You dug way further than me,

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-22 Thread Vinícius Ferrão via Users
nested virtualization enabled. Best Regards, Strahil Nikolov On Monday, 21 September 2020 at 23:56:26 GMT+3, Vinícius Ferrão wrote: Strahil, thank you man. We finally got some output: 2020-09-15T12:34:49.362238Z qemu-kvm: warning: CPU(s) not present in any NUMA nodes: CPU 10 [soc

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-21 Thread Vinícius Ferrão via Users
> For example: > /var/log/libvirt/qemu/.log > > Anything changed recently (maybe the oVirt version was increased)? > > Best Regards, > Strahil Nikolov > > > > > > > On Monday, 21 September 2020 at 23:28:13 GMT+3, Vinícius Ferrão > wrote: >

[ovirt-users] Re: How to discover why a VM is getting suspended without recovery possibility?

2020-09-20 Thread Vinícius Ferrão via Users
> On 16 Sep 2020, at 17:11, Vinícius Ferrão wrote: > > Hello, > > I have an Exchange Server VM that’s going down to suspend without possibility of > recovery. I need to click on shutdown and then power on. I can’t find > anything useful on the logs, except in the “dmesg” of the hos

[ovirt-users] How to discover why a VM is getting suspended without recovery possibility?

2020-09-16 Thread Vinícius Ferrão via Users
Hello, I have an Exchange Server VM that’s going down to suspend without possibility of recovery. I need to click on shutdown and then power on. I can’t find anything useful on the logs, except in the “dmesg” of the host: [47807.747606] *** Guest State *** [47807.747633] CR0:

[ovirt-users] Re: Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Vinícius Ferrão via Users
On Fri, 4 Sep 2020, 16:02 Arman Khalatyan <arm2...@gmail.com> wrote: hi, with the 2xT4 we haven't seen any trouble. We have no NVLink there. Did you try to disable the NVLink? Vinícius Ferrão via Users <users@ovirt.org> wrote on Fri., 4 Sept. 2020, 08:39: Hello, here

[ovirt-users] Multiple GPU Passthrough with NVLink (Invalid I/O region)

2020-09-04 Thread Vinícius Ferrão via Users
Hello, here we go again. I’m trying to pass through 4x NVIDIA Tesla V100 GPUs (with NVLink) to a single VM, but things aren’t that good. Only one GPU shows up on the VM. lspci is able to show the GPUs, but three of them are unusable: 08:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100

[ovirt-users] Mellanox OFED with oVirt

2020-09-01 Thread Vinícius Ferrão via Users
Hello, has anyone had success using Mellanox OFED with oVirt? Already learned some things: 1. I can’t use oVirt Node. 2. Mellanox OFED cannot be installed with mlnx-ofed-all since it breaks dnf. We need to rely on the upstream RDMA implementation. 3. The way to go is running: dnf install

[ovirt-users] Re: Missing model_FLAGS on specific host

2020-08-28 Thread Vinícius Ferrão via Users
ll-noTSX,model_Opteron_G2,model_Westmere,model_qemu32,model_486,model_pentium3,model_Opteron_G1,model_Westmere-IBRS,model_Haswell-noTSX-IBRS,model_Nehalem", "cpuModel": "Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz", Not sure what really happened, but those actions solved the

[ovirt-users] Missing model_FLAGS on specific host

2020-08-28 Thread Vinícius Ferrão via Users
Hi, I have a strange issue on one of my hosts: it’s missing a lot of CPU flags that oVirt seems to require: [root@c4140 ~]# vdsm-client Host getCapabilities | egrep "cpuFlags|cpuModel" "cpuFlags":

[ovirt-users] Re: POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-27 Thread Vinícius Ferrão via Users
On 27 Aug 2020, at 16:03, Arik Hadas <aha...@redhat.com> wrote: On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skriva...@redhat.com> wrote: On 26

[ovirt-users] Re: POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-27 Thread Vinícius Ferrão via Users
Hi Michal, On 27 Aug 2020, at 05:08, Michal Skrivanek <michal.skriva...@redhat.com> wrote: On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users@ovirt.org> wrote: Okay here we go Arik. With your insight I’ve done the following: # rpm -Va This showed w

[ovirt-users] Re: POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-26 Thread Vinícius Ferrão via Users
s to be a bug. Any idea on how to force it back to ppc64? I can’t manually force the import on the Hosted Engine since there are no buttons to do this… Ideas? On 26 Aug 2020, at 15:04, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: What a strange thing is happening here: [root@

[ovirt-users] Re: POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-26 Thread Vinícius Ferrão via Users
-4.40.22-1.el8ev.noarch I’ve never seen something like this. I’ve already reinstalled the host from the ground up and the same thing happens. On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello Arik, This is probably the issue. Output totally empty:

[ovirt-users] Re: POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-26 Thread Vinícius Ferrão via Users
-4.40.22-1.el8ev.ppc64le vdsm-http-4.40.22-1.el8ev.noarch vdsm-client-4.40.22-1.el8ev.noarch vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch Any ideas to try? Thanks. On 26 Aug 2020, at 05:09, Arik Hadas <aha...@redhat.com> wrote: On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via

[ovirt-users] POWER9 (ppc64le) Support on oVirt 4.4.1

2020-08-23 Thread Vinícius Ferrão via Users
Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le) without any issues. Since I’ve moved to 4.4.1 I can’t add the AC922 machine to the engine anymore, it complains with the following error: The host CPU does not match the Cluster CPU type and is running in degraded mode. It is

[ovirt-users] Hosted Engine stuck in Firmware

2020-08-22 Thread Vinícius Ferrão via Users
Hello, I have a strange issue with oVirt 4.4.1. The hosted engine is stuck in the UEFI firmware and never boots. I think this happened when I changed the default VM mode for the cluster inside the datacenter. Is there a way to fix this without redeploying the engine?

[ovirt-users] Re: Support for Shared SAS storage

2020-08-07 Thread Vinícius Ferrão via Users
ur SAS connected storage)? VMware vSphere supports DAS. Red Hat should do something. On Saturday, 8 August 2020 at 4:06:34 GMT+8, Vinícius Ferrão via Users <users@ovirt.org> wrote: No, there’s no support for direct attached shared SAS storage on oVirt/RHV. Fibre Channel is a different thing th

[ovirt-users] Re: Support for Shared SAS storage

2020-08-07 Thread Vinícius Ferrão via Users
No, there’s no support for direct attached shared SAS storage on oVirt/RHV. Fibre Channel is a different thing, which oVirt/RHV supports. > On 7 Aug 2020, at 08:52, hkexdong--- via Users wrote: > > Hello Vinícius, > Are you able to connect the SAS external storage? > Now I've the problem during

[ovirt-users] Re: iSCSI multipath with separate subnets... still not possible in 4.4.x?

2020-07-18 Thread Vinícius Ferrão via Users
I second that. I’ve tirelessly talked about this and just gave up; it’s a basic feature that keeps oVirt lagging behind. > On 18 Jul 2020, at 04:47, Uwe Laverenz wrote: > > Hi Mark, > > On 14.07.20 at 02:14, Mark R wrote: > >> I'm looking through quite a few bug reports and mailing list

[ovirt-users] Re: New fenceType in oVirt code for IBM OpenBMC

2020-07-07 Thread Vinícius Ferrão via Users
@Martin if needed I can raise an RFE for this. Just point me to where to do it, and I will. Thank you. On 1 Jul 2020, at 03:33, Vinícius Ferrão via Users <users@ovirt.org> wrote: Hi Martin, On 1 Jul 2020, at 03:26, Martin Perina <mper...@redhat.com> wrote: O

[ovirt-users] Re: New fenceType in oVirt code for IBM OpenBMC

2020-07-01 Thread Vinícius Ferrão via Users
Hi Martin, On 1 Jul 2020, at 03:26, Martin Perina <mper...@redhat.com> wrote: On Wed, Jul 1, 2020 at 1:57 AM Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello, After some days scratching my head I found that oVirt is probably missing fenceTypes for IBM’s im

[ovirt-users] New fenceType in oVirt code for IBM OpenBMC

2020-06-30 Thread Vinícius Ferrão via Users
Hello, After some days scratching my head I found that oVirt is probably missing fenceTypes for IBM’s implementation of OpenBMC in the Power Management section. The host machine is an OpenPOWER AC922 (ppc64le). The BMC basically is an “ipmilan” device but the ciphers must be defined as 3 or
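A hedged example of forcing cipher suite 3 from the command line (host, user, and password are placeholders); the equivalent options for oVirt's ipmilan fence agent would be lanplus=1,cipher=3 in the Power Management options field, assuming fence_ipmilan is used underneath:

    ipmitool -I lanplus -H bmc.example.local -U admin -P secret -C 3 chassis power status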

[ovirt-users] Re: Clean old mount points in hosts VDSM

2020-06-25 Thread Vinícius Ferrão via Users
Strahil, thank you. Reinstalling the host solved the issue. > On 25 Jun 2020, at 15:48, Vinícius Ferrão via Users wrote: > > I think yes. But I’m not sure. > > I can do it again, there’s an update so I’ll do both and report back. > > Thank you Strahil. > >

[ovirt-users] Re: Clean old mount points in hosts VDSM

2020-06-25 Thread Vinícius Ferrão via Users
June 2020, 3:23:15 GMT+03:00, "Vinícius Ferrão via Users" > wrote: >> Hello, >> >> For reasons unknown one of my hosts is trying to mount an old storage >> point that’s been removed some time ago. >> >> /var/log/vdsm/vdsm.log:2020-06-24 1

[ovirt-users] Clean old mount points in hosts VDSM

2020-06-24 Thread Vinícius Ferrão via Users
Hello, For reasons unknown one of my hosts is trying to mount an old storage point that’s been removed some time ago. /var/log/vdsm/vdsm.log:2020-06-24 19:57:35,958-0300 INFO (tmap-65016/0) [IOProcessClient] (/192.168.10.6:_mnt_pool0_ovirt_he) Starting client (__init__:308)

[ovirt-users] Re: teaming vs bonding

2020-06-10 Thread Vinícius Ferrão via Users
Only bonding; teaming is not supported by the hypervisor. This was valid up to 4.3; not sure if something changed on 4.4, since I didn’t check it. > On 10 Jun 2020, at 15:30, Diggy Mc wrote: > > Does 4.4.x support adapter teaming? If yes, which is preferred, teaming or > bonding?

[ovirt-users] Re: What happens when shared storage is down?

2020-06-09 Thread Vinícius Ferrão via Users
> On 7 Jun 2020, at 08:34, Strahil Nikolov wrote: > > > > On 7 June 2020 at 1:58:27 GMT+03:00, "Vinícius Ferrão via Users" > wrote: >> Hello, >> >> This is a pretty vague and difficult question to answer. But what >> happens if the s

[ovirt-users] Re: Cannot start ppc64le VM's

2020-06-09 Thread Vinícius Ferrão via Users
On 8 Jun 2020, at 07:43, Michal Skrivanek <michal.skriva...@redhat.com> wrote: On 5 Jun 2020, at 20:23, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hi Michal On 5 Jun 2020, at 04:39, Michal Skrivanek <michal.skriva...@redhat.com> wrote: On

[ovirt-users] Re: Power Management on IBM AC922 Power9 (ppc64le)

2020-06-08 Thread Vinícius Ferrão via Users
Yes… actually IBM uses pretty standard stuff. IPMI is enabled by default and, as I said, I can use ipmitool on the CLI and it works normally. I do have some updates: I upgraded the OpenBMC firmware and now I can use ipmitool like anything else with -U and -P; so I was hoping that oVirt would

[ovirt-users] What happens when shared storage is down?

2020-06-06 Thread Vinícius Ferrão via Users
Hello, This is a pretty vague and difficult question to answer. But what happens if the shared storage holding the VMs is down or unavailable for a period of time? I’m aware that a longer timeout may put the VMs in a paused state, but how is this handled? Is it a time limit? A request limit? Who

[ovirt-users] Re: Cannot start ppc64le VM's

2020-06-05 Thread Vinícius Ferrão via Users
Hi Michal, On 5 Jun 2020, at 04:39, Michal Skrivanek <michal.skriva...@redhat.com> wrote: On 5 Jun 2020, at 08:19, Vinícius Ferrão via Users <users@ovirt.org> wrote: Hello, I’m trying to run ppc64le VMs on POWER9 but qemu-kvm fails complaining about

[ovirt-users] Cannot start ppc64le VM's

2020-06-05 Thread Vinícius Ferrão via Users
Hello, I’m trying to run ppc64le VMs on POWER9 but qemu-kvm fails complaining about NUMA issues: VM ppc64le.local.versatushpc.com.br is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: 2020-06-05T06:16:10.716052Z

[ovirt-users] Power Management on IBM AC922 Power9 (ppc64le)

2020-06-04 Thread Vinícius Ferrão via Users
Hello, I would like to know how to enable Power Management on AC922 hardware from IBM. It’s the ppc64le architecture and runs OpenBMC as the manager. I only get "Test failed: Internal JSON-RPC error" when adding the info with ipmilan on the engine. From the command line I can use ipmitool but without

[ovirt-users] Re: POWER9 Support: VDSM requiring LVM2 package that's missing

2020-05-14 Thread Vinícius Ferrão via Users
Hi Amit, I think I found the answer: it’s not available yet. https://bugzilla.redhat.com/show_bug.cgi?id=1829348 It's this bug, right? Thanks, On 14 May 2020, at 20:14, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hi Amit, thanks for confirming. Do you know in which repo

[ovirt-users] Re: POWER9 Support: VDSM requiring LVM2 package that's missing

2020-05-14 Thread Vinícius Ferrão via Users
-rhv-4-mgmt-agent-for-power-9-rpms/ppc64le Red Hat Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9 814 Thank you! On 14 May 2020, at 20:09, Amit Bawer <aba...@redhat.com> wrote: On Fri, May 15, 2020 at 12:19 AM Vinícius Ferrão via Users <users@ovi

[ovirt-users] POWER9 Support: VDSM requiring LVM2 package that's missing

2020-05-14 Thread Vinícius Ferrão via Users
Hello, I would like to know if this is a bug or not, if yes I will submit to Red Hat. I’m trying to add a ppc64le (POWER9) machine to the hosts pool, but there’s missing dependencies on VDSM: --> Processing Dependency: lvm2 >= 7:2.02.186-7.el7_8.1 for package: vdsm-4.30.44-1.el7ev.ppc64le -->

[ovirt-users] Re: Host fails to enter in maintenance due to migration failure

2020-04-13 Thread Vinícius Ferrão
0.9080 -Original Message- From: Vinícius Ferrão <fer...@versatushpc.com.br> Sent: Monday, April 13, 2020 12:04 PM To: users <users@ovirt.org> Subject: [ovirt-users] Host fails to enter in maintenance due to migration failure Hello, I’ve a host that’s preparing to mainten

[ovirt-users] Host fails to enter in maintenance due to migration failure

2020-04-13 Thread Vinícius Ferrão
Hello, I’ve a host that’s been preparing for maintenance for almost 20 hours now. There’s a huge VM on it, with 32 gigs of RAM, and this VM is failing migration. So, is there a way to cancel the prepare for maintenance, so the host can stop retrying this migration that ends up failing? I can just shutdown

[ovirt-users] Re: Ovirt and Dell Compellent in ISCSI

2020-04-09 Thread Vinícius Ferrão
It’s the same problem all over again. iSCSI in oVirt/RHV is broken. For years. Reported this a while ago: https://bugzilla.redhat.com/show_bug.cgi?id=1474904 iSCSI Multipath in the engine does not mean a thing. It’s broken. I don’t know why the oVirt team does not acknowledge this. I’m being an

[ovirt-users] Re: ISO and Export domains deprecated

2020-04-07 Thread Vinícius Ferrão
You can have another data domain named ISO. Just organize it there. Sent from my iPhone On 6 Apr 2020, at 19:37, "eev...@digitaldatatechs.com" wrote: My understanding is the export (data) domains will hold the ISOs and vfd files now. Personally, I like having an ISO domain to keep those

[ovirt-users] Re: Windows deployment

2020-02-27 Thread Vinícius Ferrão
Eric, On the WDS server you must add the drivers to the image. But pay attention here: the drivers from the oVirt release aren’t signed by a trusted authority for Windows. Only the drivers from RHV (the downstream, paid software from RH, based on oVirt) are. WDS needs properly signed

[ovirt-users] Re: populating the ISO domain

2020-02-19 Thread Vinícius Ferrão
Hi. ISO domains were deprecated. Now you should use a Data Domain instead and fill it with the ISOs. If you're still using an ISO Domain, like me, just scp the ISOs directly to the storage. You don’t need to use the ISO uploader script; I never made it work anyway LOL. Sent from my iPhone > On 19
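A sketch of that manual copy, assuming an NFS-backed ISO domain; host and export path are placeholders, <domain_uuid> is the storage domain's UUID, and the all-ones image UUID is the fixed directory oVirt uses for ISO files:

    scp CentOS-8.iso root@storage.example.local:/exports/iso/<domain_uuid>/images/11111111-1111-1111-1111-111111111111/
    ssh root@storage.example.local chown 36:36 /exports/iso/<domain_uuid>/images/11111111-1111-1111-1111-111111111111/CentOS-8.iso

The chown to 36:36 (vdsm:kvm) keeps the file readable by the hosts.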

[ovirt-users] Re: Reimport disks

2020-02-13 Thread Vinícius Ferrão
Import domain will work. The VM metadata is available in the OVF_STORE container, inside the domain, so even the names and settings come back. Then you gradually start moving the VMs to the Gluster storage. Sent from my iPhone > On 13 Feb 2020, at 11:42, Robert Webb wrote: > > Off the top

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-05 Thread Vinícius Ferrão
node is in mode 4 and no issues at all. > > Eric Evans > Digital Data Services LLC. > 304.660.9080 > > > -Original Message- > From: Vinícius Ferrão > Sent: Wednesday, February 05, 2020 4:51 PM > To: eev...@digitaldatatechs.com > Cc: users@ovirt.org > Subject

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-05 Thread Vinícius Ferrão
> > -----Original Message- > From: Vinícius Ferrão > Sent: Wednesday, February 05, 2020 3:55 PM > To: eev...@digitaldatatechs.com > Cc: users@ovirt.org > Subject: [ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at > datacenter level" > >

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-05 Thread Vinícius Ferrão
Are the switches configured for LACP active mode? > On 5 Feb 2020, at 17:40, eev...@digitaldatatechs.com wrote: > > I have a hypervisor with nic bonding in place and it has some communication > issues. It constantly goes up and down and into non-operational mode but > comes back up. Is there

[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-04 Thread Vinícius Ferrão
Are you trying to deploy the hosted engine to a Storage Domain which is in a separate network with a VLAN? If this is the issue, you must tell VDSM about the network so it finds the path. This must be informed during the playbook phase where it asks for the shared storage settings. For example:
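A hypothetical illustration of the storage step of hosted-engine --deploy (prompt wording paraphrased from memory, hostname and export path are placeholders, not values from this thread); the storage host simply has to be reachable over the VLAN-tagged network:

    Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: nfs
    Please specify the full shared storage connection path to use (example: host:/path): storage.vlan20.example.local:/exports/hosted_engine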

[ovirt-users] Re: Spacewalk integration

2020-02-03 Thread Vinícius Ferrão
Eric, I can’t speak for Red Hat and the guys from oVirt. But Spacewalk, as far as I know, was deprecated in favor of Foreman with Katello, which is the new Red Hat Satellite. And oVirt supports Foreman. So I don’t think oVirt will ever support Spacewalk… > On 3 Feb 2020, at 11:09,

[ovirt-users] Re: Reimport VMs after lost Engine with broken Backup

2020-01-31 Thread Vinícius Ferrão
? Thanks, Vinícius. On 29 Jan 2020, at 15:20, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hello, I’m in a scenario with a lost hosted engine. For reasons unknown the backup is broken and I’ve tried everything: redeploy with the backup file, deploy a new one and then restore the

[ovirt-users] Re: Reimport VMs after lost Engine with broken Backup

2020-01-31 Thread Vinícius Ferrão
shut them down (I can SSH to all of them and then issue a power off). But I can’t detach the storage from the broken engine. * Is it safe to just attach the storage domain on the brand new engine? If it fails, is there a way to recover from this? Thanks. On 31 Jan 2020, at 14:22, Vinícius Ferrão

[ovirt-users] Reimport VMs after lost Engine with broken Backup

2020-01-29 Thread Vinícius Ferrão
Hello, I’m in a scenario with a lost hosted engine. For reasons unknown the backup is broken and I’ve tried everything: redeploy with the backup file, deploy a new one and then restore the backup. Changed the HE storage domain in both cases just to be sure. Reinstalled one of the hosts just to be

[ovirt-users] Re: Cannot put host in maintenance mode

2020-01-28 Thread Vinícius Ferrão
Thank you all! Removing the entries directly in Postgres solved the issue. The procedure was described in the bugzilla link. On 28 Jan 2020, at 11:14, Vinícius Ferrão <fer...@versatushpc.com.br> wrote: On 28 Jan 2020, at 08:45, Lucie Leistnerova <lleis...@redhat.com>

[ovirt-users] Re: Cannot put host in maintenance mode

2020-01-28 Thread Vinícius Ferrão
On 28 Jan 2020, at 06:18, Amit Bawer <aba...@redhat.com> wrote: On Tue, Jan 28, 2020 at 6:08 AM Vinícius Ferrão <fer...@versatushpc.com.br> wrote: Hello, I have an issue on one of my oVirt installs and I wasn’t able to solve it. When trying to put a node in

[ovirt-users] Re: Cannot put host in maintenance mode

2020-01-28 Thread Vinícius Ferrão
On 28 Jan 2020, at 08:45, Lucie Leistnerova <lleis...@redhat.com> wrote: Hi Vinicius, On 1/27/20 11:34 PM, Vinícius Ferrão wrote: Hello, I have an issue on one of my oVirt installs and I wasn’t able to solve it. When trying to put a node in maintenance it complains about

[ovirt-users] Cannot put host in maintenance mode

2020-01-27 Thread Vinícius Ferrão
Hello, I have an issue on one of my oVirt installs and I wasn’t able to solve it. When trying to put a node in maintenance it complains about image transfers: Error while executing action: Cannot switch Host ovirt2 to Maintenance mode. Image transfer is in progress for the following (3)

[ovirt-users] Support for Shared SAS storage

2020-01-06 Thread Vinícius Ferrão
Hello, I have two compute nodes with SAS direct-attached storage sharing the same disks. Looking at the supported types I can’t see this in the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html There is local storage in this documentation, but my case is two machines,

[ovirt-users] Zabbix monitoring within the node

2019-12-04 Thread Vinícius Ferrão
Hello, There is some documentation on the web about using Zabbix with oVirt to monitor the hypervisor itself, like here: https://github.com/hudecof/libzbxovirt and here: https://github.com/jensdepuydt/zabbix-ovirt But what are you people doing about this? Any recommendations on Zabbix

[ovirt-users] Re: Either allow 2 CD-ROM's or selectable *.vfd Floppy Images from Storage via Run Once other than from deprcated ISO-Storage

2019-08-13 Thread Vinícius Ferrão
Ralf, you can change the CD in the Load Drivers section. Change to the oVirt Tools disc, load the drivers and then change back to Windows. It’s a bad workflow, I know, but it’s how I’m doing it here. I agree that the Windows Tools are not really friendly with oVirt. Sent from my iPhone On 12 Aug

[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-19 Thread Vinícius Ferrão
at 17:52 +1000, Vadim Rozenfeld wrote: >>> On Thu, 2019-06-13 at 15:24 -0300, Vinícius Ferrão wrote: >>> Lev, thanks for the reply. >>> >>> So basically Windows on Secure Boot UEFI is simply “broken” within oVirt? >>> >>> Will Red Hat reconside

[ovirt-users] Re: Does anyone have a positive experience with physical host to oVirt conversion?

2019-06-19 Thread Vinícius Ferrão
I just use the virt-p2v ISO to boot the machine to be converted and then fire the conversion. You do need a VM running virt-v2v inside oVirt to talk with the virt-p2v ISO and do the conversion. > On 19 Jun 2019, at 10:38, Andreas Elvers > wrote: > > For converting my Debian boxes

[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vinícius Ferrão
> On Thu, 2019-06-13 at 11:36 +0300, Yedidyah Bar David wrote: >> On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão <fer...@versatushpc.com.br> wrote: >>> RHV drivers work. >>> oVirt drivers do not. >>> >>> Checked this now. >>>

[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-14 Thread Vinícius Ferrão
not certified. > > Thanks in advance, > > On Thu, Jun 13, 2019 at 1:12 PM Yedidyah Bar David <d...@redhat.com> wrote: > On Mon, Jun 10, 2019 at 10:54 PM Vinícius Ferrão <fer...@versatushpc.com.br> wrote: > RHV drivers work. > oVirt drivers do

[ovirt-users] Re: Windows Server 2019: Driver Signature Enforcement

2019-06-10 Thread Vinícius Ferrão
RHV drivers work. oVirt drivers do not. Checked this now. I’m not sure if this is intended or not, but the oVirt drivers aren’t signed for Windows. > On 29 May 2019, at 21:41, mich...@wanderingmad.com wrote: > > I'm running Server 2012R2, 2016, and 2019 with no issue using the Red Hat > signed

[ovirt-users] Windows Server 2019: Driver Signature Enforcement

2019-05-29 Thread Vinícius Ferrão
Hello, I’m running oVirt 4.3.0 and installing Windows Server 2019 with UEFI Secure Boot is impossible with the bundled VirtIO drivers. Windows complains about invalid signatures on vioscsi.sys. The only way to boot the system is halting the process with F8 and then selecting: Disable Driver

[ovirt-users] "Upgrade" from oVirt to RHV

2019-05-22 Thread Vinícius Ferrão
Hello, I would like to know if there’s a supported path to move from oVirt to RHV. oVirt is running on version 4.3.0.4-1.el7. RHV would be version 4.3.3.7-0.1.el7. I was thinking of reinstalling a host with RHV 4.3, adding it to the oVirt HE, moving all the VMs to the RHV host, and then do

[ovirt-users] RHEL 8 Template Seal failed

2019-05-16 Thread Vinícius Ferrão
Hello, I’m trying to seal a RHEL8 template but the operation is failing. Here’s the relevant information from engine.log: 2019-05-17 01:30:31,153-03 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-58)

[ovirt-users] Re: Cannot assign Network to Mellanox Interface

2019-03-02 Thread Vinícius Ferrão
p; RDMA. Then the > other ports are using for migratons, iSCSI sharing, and a backup admin > channel. > >> On Fri, Mar 1, 2019 at 1:36 PM Vinícius Ferrão >> wrote: >> Darkytoo have you received my message. >> >> I’m really curious about your setup u
