No errors
# sudo -u vdsm dd \
    if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/0e8de531-ac5e-4089-b390-cfc0adc3e79a/images/a847beca-7ed0-4ff1-8767-fc398379d85b/61d85180-65a4-452d-8773-db778f56e242 \
    of=/dev/null bs=4M status=progress
107336433664 bytes (107 GB) copied, 245.349334 s, 437 MB/s
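(A possible extension of this test, not something from the thread: pipe the read
through a checksum instead of discarding it, so two consecutive runs can be
compared to catch silently changing data and not just read errors. <volume_path>
stands for the full image path used above.)

# sudo -u vdsm dd if=<volume_path> bs=4M | md5sum
# sudo -u vdsm dd if=<volume_path> bs=4M | md5sum   # run again; differing sums mean inconsistent reads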
Hi Strahil,
The majority of the VMs are UEFI. But I do have some Legacy BIOS VMs and they
are corrupting too. I have a mix of RHEL/CentOS 7 and 8.
All of them are corrupting. XFS on everything with default values from
installation.
There’s one VM with Ubuntu 18.04 LTS and ext4 that is corrupting too.
Usually distributed volumes are what is used on a single-node setup, so that
shouldn't be the problem.
As you know which VMs are affected, you can easily find their disks.
Then try to read the VM's disk:
sudo -u vdsm dd \
    if=/rhev/data-center/mnt/glusterSD/gfs1.server.pt:_gfs1data/<storage_domain_uuid>/images/<image_uuid>/<volume_uuid> \
    of=/dev/null bs=4M status=progress
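(If only the image UUID is known, e.g. from the disk's entry in the oVirt
Administration Portal, a hedged way to locate the volume under the gluster
mount is a find over the images directories; <image_uuid> is a placeholder:)

# find /rhev/data-center/mnt/glusterSD -path '*/images/<image_uuid>/*' -type f 2>/dev/null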
Damn...
You are using EFI boot. Does this happen only to EFI machines?
Did you notice if only EL 8 is affected?
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:36:09 GMT+2, Vinícius Ferrão wrote:
Yes!
I have a live VM right now that will be dead on a reboot:
No heals pending
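(For reference, on replicated or dispersed volumes pending heals can be listed
as sketched here; a pure Distribute volume, as shown below, has no self-heal at
all. <volname> is a placeholder:)

# gluster volume heal <volname> info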
There are some VMs whose disks I can move, but for some other VMs I cannot move
the disk.
It's a simple gluster volume:
]# gluster volume info
Volume Name: gfs1data
Type: Distribute
Volume ID: 7e6826b9-1220-49d4-a4bf-e7f50f38c42c
Status: Started
Snapshot Count: 0
Number of Bricks: 1
[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users wrote:
Hi Strahil.
I’m not using barrier options on mount. It’s the default settings from CentOS
install.
As you use proto=TCP, it should not cause the behaviour you are observing.
I was wondering if the VM is rebooted for some reason (maybe HA) during
intensive I/O.
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users wrote:
Hi Strahil.
Are you sure you don't have any heals pending?
I should admit I have never seen this type of error.
Is it happening for all VMs or only specific ones?
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 15:37:04 GMT+2, supo...@logicworks.pt wrote:
Sorry, I found this error in the gluster logs.
Hi Stefan,
you can control the VM's cache (if Linux) via:
- vm.vfs_cache_pressure
- vm.dirty_background_ratio/vm.dirty_background_bytes
- vm.dirty_ratio/vm.dirty_bytes
- vm.dirty_writeback_centisecs
- vm.dirty_expire_centisecs
I would just increase the vfs_cache_pressure to 120 and check if it fixes the
issue.
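(A minimal sketch of applying and persisting that inside the guest, assuming an
EL7/EL8 guest; the value is the one suggested above, not a general
recommendation, and the file name is illustrative:)

# sysctl vm.vfs_cache_pressure=120                       # apply immediately
# echo 'vm.vfs_cache_pressure = 120' > /etc/sysctl.d/99-vm-cache.conf
# sysctl -p /etc/sysctl.d/99-vm-cache.conf               # reload to verify it persists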
Hi Strahil.
I’m not using barrier options on mount. It’s the default settings from CentOS
install.
I have some additional findings: there’s a big number of discarded packets on
the switch on the hypervisor interfaces.
Discards are OK as far as I know; I hope TCP handles this and does the proper
retransmissions.
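(A hedged way to cross-check those drops from the hypervisor side; <iface> is a
placeholder for the storage network interface:)

# ip -s link show dev <iface>        # check the RX/TX "dropped" counters
# ethtool -S <iface> | grep -i drop  # NIC-level counters, if the driver exposes them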
Hi Stefan,
Please check out https://bugzilla.redhat.com/1884050 if it fits your
problem.
The resolution was having the right guest agent installed in the
guest (qemu-guest-agent or ovirt-guest-agent).
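(On an EL8 guest that would look something like this sketch; the package and
service names are the standard ones, not taken from the thread:)

# dnf install qemu-guest-agent
# systemctl enable --now qemu-guest-agent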
On Sun, Nov 29, 2020 at 10:57 AM Stefan Seifried wrote:
> Hi,
>
> I'm quite new to oVirt, so my apologies if I'm asking something dead obvious:
Sorry, I found this error in the gluster logs:
[MSGID: 113040] [posix-helpers.c:1929:__posix_fd_ctx_get] 0-gfs1data-posix:
Failed to get anonymous fd for real_path:
/home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f. [No such
file or directory]
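(One hedged way to check on the brick whether that gfid still has a backing
file, reusing the path from the log message above and assuming shell access to
the brick host; a failing stat confirms the hardlink is gone:)

# stat /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f
# find /home/brick1 -samefile \
    /home/brick1/.glusterfs/bc/57/bc57653e-b08c-417b-83f3-bf234a97e30f 2>/dev/null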
I don't find any errors in the gluster logs; I just find this in the vdsm
log:
2020-11-29 12:57:45,528+ INFO (tasks/1) [storage.SANLock] Successfully
released Lease(name='61d85180-65a4-452d-8773-db778f56e242',
path=u'/rhev/data-center/mnt/node2.server.pt:_home_node2data/ab4855be-0edd-
Hi,
I'm quite new to oVirt, so my apologies if I'm asking something dead obvious:
I noticed that there is an item in the 'General Tab' of each VM, which says
'Guest OS Memory Free/Cached/Buffered' and on all my VM's it says 'Not
Configured'. Right now I'm trying to figure out how to enable this.