Hi Didi,
Apologies, as this is my first post. I am referring to the issue mentioned
in the Red Hat solution linked in this thread.
https://access.redhat.com/solutions/4462431
I am trying to deploy the hosted engine VM. I tried via the cockpit GUI and
through the CLI. In both cases the deployment fails wi
It seems that after the last attempt I managed to move forward:
systemctl start ovirt-ha-agent ovirt-ha-broker
then stopped the ovirt-ha-agent and ran "hosted-engine --reinitialize-lockspace".
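For reference, the full sequence was roughly the following (standard oVirt
service and command names; the --vm-status call at the end is only my own
sanity check, adjust to your setup):

# systemctl start ovirt-ha-broker ovirt-ha-agent
# systemctl stop ovirt-ha-agent          (the agent must be down before touching the lockspace)
# hosted-engine --reinitialize-lockspace
# systemctl start ovirt-ha-agent
# hosted-engine --vm-status              (verify the HA state afterwards)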
Now the situation changed a little bit:
# sanlock client status
daemon 5f37f400-b865-11dc-a4f5-2c4d5450
On Thu, Jan 6, 2022 at 11:47 AM wrote:
>
> Hi Didi,
>
> Apologies, as this is my first post. I am referring to the issue
> mentioned in the Red Hat solution linked in this thread.
> https://access.redhat.com/solutions/4462431
> I am trying to deploy the hosted engine VM. I tried via the cockpit gu
Hello Nir,
I recently upgraded the oVirt engine to 4.4.9 from 4.3.10 (hosts will follow ASAP).
I found the same strange messages in vdsm.log:
2022-01-06 10:35:41,333+0100 ERROR (mailbox-spm)
[storage.MailBox.SpmMailMonitor] mailbox 65 checksum failed, not clearing
mailbox, clearing new mail (data
The engine was not starting until I downgraded to the 6.0.0 qemu rpms from the
Advanced Virtualization
Best Regards,
Strahil Nikolov
On Thursday, 6 January 2022 at 11:51:27 GMT+2, Strahil Nikolov via Users wrote:
It seems that after the last attempt I managed to move forward:
system
On 05/01/2022 19:08, Ritesh Chikatwar wrote:
Hello
What's the qemu version? If it's greater than 6.0.0, can you please try
downgrading the qemu version to 6.0.0 and see if it helps?
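A rough sketch of the check and downgrade (exact package names and versions
depend on which repos/module streams are enabled, so treat this as an example
only; listing each sub-package explicitly also works if the glob misbehaves):

# rpm -q qemu-kvm                        (current version)
# dnf --showduplicates list qemu-kvm     (which 6.0.0 builds are available)
# dnf downgrade qemu-kvm\*-6.0.0\*       (pull the qemu-kvm packages back to 6.0.0)

After the downgrade, restarting the host (or at least libvirtd/vdsmd) is
probably needed so everything picks up the older binaries.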
Dear,
here is the situation:
# rpm -qa|grep qemu
qemu-kvm-block-iscsi-6.1.0-5.module_el8.6.0+1040+0ae94936.x86_6
Ritesh,
I downgraded one host to 6.0.0 as you said:
# rpm -qa|grep qemu
qemu-kvm-block-curl-6.0.0-33.el8s.x86_64
qemu-kvm-common-6.0.0-33.el8s.x86_64
ipxe-roms-qemu-20181214-8.git133f4c47.el8.noarch
qemu-img-6.0.0-33.el8s.x86_64
libvirt-daemon-driver-qemu-7.10.0-1.module_el8.6.0+1046+bd8eec5e.x86
Hi everyone, I would like to pull information about I/O usage by individual
servers via the API, or directly from the postgres database. But I have the
data warehouse turned off for performance reasons. Is this information
collected somewhere so that I can pull it into my external databa
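One possible direction (just a sketch, not a full answer): the engine REST API
exposes statistics sub-collections without needing the DWH, e.g. per-disk
counters. Hostname, password and IDs below are placeholders:

# curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/disks/DISK_ID/statistics

You could poll that (or the equivalent /vms/VM_ID/statistics collection) on a
schedule and write the values into your own external database. Which exact
counters are exposed depends on the engine version.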
Here are the configured options for the gluster volume:
Options Reconfigured:
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.c
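For completeness, these are read/applied with the usual gluster CLI (the
volume name below is a placeholder):

# gluster volume info VOLNAME              (shows the "Options Reconfigured" block above)
# gluster volume get VOLNAME all           (full option list, including defaults)
# gluster volume set VOLNAME cluster.lookup-optimize off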
The latest on this: I downgraded qemu-kvm to the lowest version in the
CentOS 8 Stream/oVirt repo:
qemu-kvm-common-6.0.0-26.el8s.x86_64
qemu-kvm-block-ssh-6.0.0-26.el8s.x86_64
qemu-kvm-block-gluster-6.0.0-26.el8s.x86_64
qemu-kvm-6.0.0-26.el8s.x86_64
qemu-kvm-ui-opengl-6.0.0-26.el8s.x86_64
qemu-kvm
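One extra note (an assumption on my side, not something I have verified
long-term): to stop a later "dnf update" from pulling qemu-kvm forward again,
the versionlock plugin can pin the downgraded packages:

# dnf install python3-dnf-plugin-versionlock
# dnf versionlock add qemu-kvm-6.0.0-26.el8s qemu-kvm-common-6.0.0-26.el8s
# dnf versionlock list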
Hi Didi
downgrading qemu-kvm fixed the issue. What is the reason it is not working
with version 6.1.0? That is currently the version installed on my host:
# yum info qemu-kvm
Last metadata expiration check: 2:03:58 ago on Thu 06 Jan 2022 03:18:40 PM UTC.
Installed Packages
Name : qemu-kvm
Hi All,
I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
software raid0 (I have multiple gluster volumes) is not possible with Cluster
compatibility 4.6.
I've tested creating a fresh VM and it also suffers from the problem. Changing
various options (virtio-scsi to virtio, chip
Can you write to the storage domain like this:
sudo -u vdsm dd if=/dev/zero of=/rhev//full/path/ oflag=direct
bs=512 count=10
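For example, against a glusterfs-backed engine domain the full form would look
roughly like this (the mount path is only an illustration, use your own
domain's path, and remove the test file afterwards):

# sudo -u vdsm dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/test.txt oflag=direct bs=512 count=10
# echo $?          (0 means the direct-I/O write worked)
# rm /rhev/data-center/mnt/glusterSD/<server>:_<volume>/test.txt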
Best Regards,
Strahil Nikolov
On Fri, Jan 7, 2022 at 0:19, Andy via Users wrote:
To be honest, in grub rescue I can see only hd0, which led me to the issue
(and qemu 6.2+ has a fix for it):
https://bugzilla.proxmox.com/show_bug.cgi?id=3010
Can someone also test creating a Linux VM with /boot being a raid0 software MD
device?
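A minimal way to set such a test up inside a throwaway VM (the disk names
/dev/vdb and /dev/vdc are assumptions, adjust to your layout):

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
# mkfs.ext4 /dev/md0
# cat /proc/mdstat         (confirm the array is active and raid0)

Then put /boot on /dev/md0 (or do it from the installer) and see whether grub
can still find it after the qemu upgrade.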
Best Regards,
Strahil Nikolov
On Fri, Ja
On 2022/1/6 15:53, Liran Rotenberg wrote:
On Thu, Jan 6, 2022 at 9:20 AM Adam Xu wrote:
I also got the error when I tried to import an OVA from VMware to my
oVirt cluster using a SAN storage domain.
I resolved this by importing this OVA to a standalone host which
is using its loca
Sir,
I can see the data domain written to
"/rhev/data-center/mnt/glusterSD/vstore00:_engine" on the host I am trying to
deploy from. When I attempt to write to the directory with:
sudo -u vdsm dd if=/dev/zero
of=/rhev/data-center/mnt/glusterSD/vstore00:_engine/test.txt oflag=direct
bs=512 cou
Try downgrading on all hosts and give it a try.
On Thu, Jan 6, 2022 at 10:05 PM Andrea Chierici <andrea.chier...@cnaf.infn.it>
wrote:
> Ritesh,
> I downgraded one host to 6.0.0 as you said:
>
> # rpm -qa|grep qemu
> qemu-kvm-block-curl-6.0.0-33.el8s.x86_64
> qemu-kvm-common-6.0.0-33.el8s.x86_64
> ipxe-