--- Begin Message ---
Hi Mark,
This is a Synology server; I don't think I can control that from the web UI...
But I'll take a look.
Thanks for the suggestion!
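(A rough sketch of what that check could look like over SSH on the NAS, assuming the Synology exposes a stock Linux kernel nfsd; the DSM-specific way to make a change permanent is not covered here:

    # current number of nfsd threads on the NAS
    cat /proc/fs/nfsd/threads
    # temporarily raise the count for the running nfsd
    rpc.nfsd 16
)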
On 19/4/22 at 16:40, Mark Schouten wrote:
Hi,
Do you have enough server threads on the NFS server? I’ve seen issues
with NFS where all of the server threads (default 8 on Debian, IIRC) are
busy, which prevents new clients from connecting.
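For illustration, on a plain Debian nfs-kernel-server (not the Synology in this thread) the thread count would typically be raised like this:

    # /etc/default/nfs-kernel-server
    RPCNFSDCOUNT=16

    # apply the new count
    systemctl restart nfs-kernel-server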
—
Mark Schouten, CTO
Tuxis B.V.
[email protected]
On 19 Apr 2022, at 16:34, Eneko Lacunza via pve-user
<[email protected]> wrote:
From: Eneko Lacunza <[email protected]>
Subject: Backup/timeout issues PVE 6.4
Date: 19 April 2022 at 16:34:11 CEST
To: "[email protected]" <[email protected]>
Hi all,
We're having backup/timeout issues with traditional (non-PBS) backups
on 6.4.
We have 3 nodes backing up to an NFS server with HDDs. For the same
backup task (with multiple VMs spread across those 3 nodes), one node may
finish all its backups, while another may fail to complete all VM
backups, or may not even start them because the storage is reported as
"not online".
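As a generic check from an affected node (placeholder storage ID and NAS address, not taken from our task logs), the storage view looks like:

    # what PVE itself thinks of the backup storage
    pvesm status --storage <storage-id>
    # whether the NFS exports are visible at all from this node
    showmount -e <nas-ip>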
Version (the same for all 3 nodes):
proxmox-ve: 6.4-1 (running kernel: 5.4.162-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-12
pve-kernel-helper: 6.4-12
pve-kernel-5.4.162-1-pve: 5.4.162-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
ceph: 15.2.15-pve1~bpo10
ceph-fuse: 15.2.15-pve1~bpo10
corosync: 3.1.5-pve2~bpo10+1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.22-pve2~bpo10+1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.13-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.3-2
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.7-pve1
Is this a known bug? We had some issues in another cluster that we fixed
by manually applying this patch on v6:
https://bugzilla.proxmox.com/show_bug.cgi?id=3693
Has that fix been backported to v6?
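In case it helps narrow things down, a low-level reachability test that can be run by hand from a node while a backup is failing (an illustrative probe, not necessarily the exact check PVE performs) is:

    # ask the NAS portmapper whether NFSv3 answers over TCP
    rpcinfo -T tcp <nas-ip> nfs 3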
Thanks
Eneko Lacunza
Technical Director
Binovo IT Human Project
943 569 206
[email protected]
binovo.es
Astigarragako Bidea, 2 - 2 izda. Oficina 10-11, 20180 Oiartzun
youtube: https://www.youtube.com/user/CANALBINOVO/
linkedin: https://www.linkedin.com/company/37269706/
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
Eneko Lacunza
Technical Director
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
--- End Message ---
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user