[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread Yedidyah Bar David
On Thu, Jan 6, 2022 at 11:47 AM wrote: > > Hi Didi, > > Apologies as this is my first post. I am referring to the issue mentioned in the Red Hat solution linked in this thread. > https://access.redhat.com/solutions/4462431 > I am trying to deploy the hosted engine VM. I tried via cockpit

[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi, Apologies as this is my first post. I am referring to the issue mentioned in the Red Hat solution linked in this thread. https://access.redhat.com/solutions/4462431 I am trying to deploy the hosted engine VM. I tried via the cockpit GUI and through the CLI. In both cases deployment fails

[ovirt-users] Re: sanlock issues after 4.3 to 4.4 migration

2022-01-06 Thread Strahil Nikolov via Users
It seems that after the last attempt I managed to move forward: systemctl start ovirt-ha-agent ovirt-ha-broker then stopped the ovirt-ha-agent and ran "hosted-engine --reinitialize-lockspace". Now the situation has changed a little bit: # sanlock client status daemon
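The recovery steps Strahil describes can be sketched as below. This is a dry-run illustration, not the thread's verbatim transcript: the `run()` wrapper only prints each command so the script is safe to execute anywhere; on a real hosted-engine host you would replace it with `run() { "$@"; }`.

```shell
# Dry-run sketch of the sanlock lockspace recovery sequence (assumes an
# oVirt hosted-engine host). run() only echoes, so nothing is touched.
run() { echo "+ $*"; }   # on a real host: run() { "$@"; }

run systemctl start ovirt-ha-broker ovirt-ha-agent   # bring the HA daemons up
run systemctl stop ovirt-ha-agent                    # agent must be stopped before reinit
run hosted-engine --reinitialize-lockspace           # rebuild the sanlock lockspace
run systemctl start ovirt-ha-agent                   # restart the agent
run sanlock client status                            # verify lockspaces and resources
```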

[ovirt-users] Re: Instability after update

2022-01-06 Thread Ritesh Chikatwar
Try downgrading on all the hosts and give it a try. On Thu, Jan 6, 2022 at 10:05 PM Andrea Chierici < andrea.chier...@cnaf.infn.it> wrote: > Ritesh, > I downgraded one host to 6.0.0 as you said: > > # rpm -qa|grep qemu > qemu-kvm-block-curl-6.0.0-33.el8s.x86_64 > qemu-kvm-common-6.0.0-33.el8s.x86_64 >

[ovirt-users] Re: Instability after update

2022-01-06 Thread Andrea Chierici
On 05/01/2022 19:08, Ritesh Chikatwar wrote: Hello, what's the qemu version? If it's greater than 6.0.0, can you please try downgrading qemu to 6.0.0 and see if it helps? Dear, here is the situation: # rpm -qa|grep qemu
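The downgrade the thread converges on can be sketched as follows. The exact NVRs (`6.0.0-33.el8s`) come from the package listings quoted elsewhere in the thread and are an assumption for your repos; the `run()` wrapper only prints the commands so this is safe to read and execute anywhere.

```shell
# Dry-run sketch of downgrading qemu-kvm to 6.0.0 on an oVirt 4.4.9 /
# CentOS 8 Stream host. run() only echoes; on a real host use run() { "$@"; }
run() { echo "+ $*"; }

run dnf downgrade 'qemu-kvm*-6.0.0*'   # roll qemu-kvm packages back to 6.0.0
run rpm -qa 'qemu*'                    # confirm the installed versions
run systemctl restart vdsmd            # pick up the downgraded binaries
```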

[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
Here are the configured options for the gluster volume: Options Reconfigured: cluster.lookup-optimize: off server.keepalive-count: 5 server.keepalive-interval: 2 server.keepalive-time: 10 server.tcp-user-timeout: 20 network.ping-timeout: 30 server.event-threads: 4 client.event-threads: 4

[ovirt-users] Re: Instability after update

2022-01-06 Thread Andrea Chierici
Ritesh, I downgraded one host to 6.0.0 as you said: # rpm -qa|grep qemu qemu-kvm-block-curl-6.0.0-33.el8s.x86_64 qemu-kvm-common-6.0.0-33.el8s.x86_64 ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch qemu-img-6.0.0-33.el8s.x86_64

[ovirt-users] How to find out I/O usage from servers

2022-01-06 Thread ovirt . org
Hi everyone, I would like to pull information about I/O usage by individual servers via the API, or directly from the postgres database. But I have the data warehouse turned off for performance reasons. Is this information collected somewhere so that I can collect it from somewhere in my external
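One hedged option, assuming the oVirt REST API is reachable even with the DWH disabled: the engine exposes `statistics` sub-collections (e.g. per-disk read/write rates under `/ovirt-engine/api/disks/<id>/statistics`). The hostname, credentials, and disk UUID below are placeholders you must supply; the `run()` wrapper only prints the request, so the sketch runs anywhere.

```shell
# Dry-run sketch of querying per-disk I/O statistics from the oVirt REST API.
# ENGINE_FQDN, the credentials, and DISK_ID are assumptions/placeholders.
run() { echo "+ $*"; }   # on a real client: run() { "$@"; }

ENGINE_FQDN="engine.example.com"   # placeholder: your engine hostname
DISK_ID="<disk-uuid>"              # placeholder: a UUID from /ovirt-engine/api/disks

run curl -s -k -u "admin@internal:password" \
    -H 'Accept: application/json' \
    "https://${ENGINE_FQDN}/ovirt-engine/api/disks/${DISK_ID}/statistics"
```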

[ovirt-users] Re: Lots of storage.MailBox.SpmMailMonitor

2022-01-06 Thread Petr Kyselák
Hello Nir, I recently upgraded the oVirt engine from 4.3.10 to 4.4.9 (hosts will follow ASAP). I found the same strange messages in vdsm.log: 2022-01-06 10:35:41,333+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 65 checksum failed, not clearing mailbox, clearing new mail
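A quick way to gauge how often the checksum error fires is to count matches in the log. The sample line from the post is reproduced inline so the sketch runs anywhere; on an actual host you would point grep at `/var/log/vdsm/vdsm.log` instead.

```shell
# Count SpmMailMonitor checksum failures; uses an inline sample log so
# the script is self-contained (real target: /var/log/vdsm/vdsm.log).
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
2022-01-06 10:35:41,333+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 65 checksum failed, not clearing mailbox, clearing new mail
EOF
grep -c 'SpmMailMonitor.*checksum failed' "$LOG"
rm -f "$LOG"
```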

[ovirt-users] Re: sanlock issues after 4.3 to 4.4 migration

2022-01-06 Thread Strahil Nikolov via Users
The engine was not starting until I downgraded to the 6.0.0 qemu rpms from Advanced Virtualization. Best Regards, Strahil Nikolov On Thursday, January 6, 2022, 11:51:27 GMT+2, Strahil Nikolov via Users wrote: It seems that after the last attempt I managed to move forward:

[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
The latest on this: I downgraded qemu-kvm to the lowest version in the CentOS 8 Stream/oVirt repo: qemu-kvm-common-6.0.0-26.el8s.x86_64 qemu-kvm-block-ssh-6.0.0-26.el8s.x86_64 qemu-kvm-block-gluster-6.0.0-26.el8s.x86_64 qemu-kvm-6.0.0-26.el8s.x86_64 qemu-kvm-ui-opengl-6.0.0-26.el8s.x86_64

[ovirt-users] Re: Linux VMs cannot boot from software raid0

2022-01-06 Thread Strahil Nikolov via Users
To be honest, in grub rescue I can see only hd0, which led me to this issue (and qemu-6.2+ has a fix for it): https://bugzilla.proxmox.com/show_bug.cgi?id=3010 Can someone also test creating a Linux VM with /boot being a raid0 software MD device? Best Regards, Strahil Nikolov On Fri,
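The reproduction Strahil asks for could be sketched as below: inside a fresh Linux guest, assemble `/boot` on a RAID0 md device, then install grub and reboot. The device names are assumptions, and the `run()` wrapper only echoes the commands, so this is a safe-to-read outline rather than a destructive script.

```shell
# Dry-run sketch: build /boot on software RAID0 inside a test guest.
# /dev/vdb1 and /dev/vdc1 are assumed spare partitions; run() only echoes.
run() { echo "+ $*"; }   # inside a real guest: run() { "$@"; }

run mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb1 /dev/vdc1
run mkfs.xfs /dev/md0
run mount /dev/md0 /boot   # then reinstall grub and reboot to test booting
```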

[ovirt-users] Re: did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266

2022-01-06 Thread sohail_akhter3
Hi Didi, downgrading qemu-kvm fixed the issue. What is the reason it does not work with version 6.1.0? Currently this is the version installed on my host: #yum info qemu-kvm Last metadata expiration check: 2:03:58 ago on Thu 06 Jan 2022 03:18:40 PM UTC. Installed Packages Name : qemu-kvm

[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Strahil Nikolov via Users
Can you write to the storage domain like this: sudo -u vdsm dd if=/dev/zero of=/rhev//full/path/ oflag=direct bs=512 count=10 Best Regards, Strahil Nikolov On Fri, Jan 7, 2022 at 0:19, Andy via Users wrote:

[ovirt-users] Linux VMs cannot boot from software raid0

2022-01-06 Thread Strahil Nikolov via Users
Hi All, I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from software raid0 (I have multiple gluster volumes) is not possible with cluster compatibility 4.6. I've tested creating a fresh VM and it suffers from the same problem. Changing various options (virtio-scsi to virtio,

[ovirt-users] Re: After upgrade to vdsm-4.40.90.4-1.el8 - Internal JSON-RPC error - how to fix?

2022-01-06 Thread Adam Xu
On 2022/1/6 15:53, Liran Rotenberg wrote: On Thu, Jan 6, 2022 at 9:20 AM Adam Xu wrote: I also got the error when I try to import an OVA from VMware to my oVirt cluster using a SAN storage domain. I resolved this by importing the OVA to a standalone host which is using its

[ovirt-users] Re: Ovirt 4.4.9 install fails (guestfish)?

2022-01-06 Thread Andy via Users
Sir, I can see the data domain written to "/rhev/data-center/mnt/glusterSD/vstore00:_engine" on the host I am trying to deploy from. When I attempt to write to the directory with: sudo -u vdsm dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/vstore00:_engine/test.txt oflag=direct bs=512
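The write test quoted above can be rehearsed locally as below. On the real host the target is the glusterSD engine mount and `oflag=direct` matters (it bypasses the page cache and exercises the actual storage path); here we write 10 512-byte blocks to a temp file, without O_DIRECT, so the sketch runs anywhere.

```shell
# Local rehearsal of the dd write test: 10 blocks of 512 bytes = 5120 bytes.
# On a real host: sudo -u vdsm dd if=/dev/zero of=<domain mount>/test.txt \
#   oflag=direct bs=512 count=10
TARGET="$(mktemp)"
dd if=/dev/zero of="$TARGET" bs=512 count=10 2>/dev/null
stat -c '%s' "$TARGET"   # expected size: 5120 bytes
rm -f "$TARGET"
```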