--- Begin Message ---
Hi Frank,
On 8/9/24 at 14:17, Frank Thommen wrote:
5 hdd 3.81450 1.0 3.8 TiB 3.3 TiB 3.1 TiB 37 MiB 8.5 GiB 568 GiB 85.45 1.16 194 up
16 hdd 1.90039 1.0 1.9 TiB 1.6 TiB 1.6 TiB 19 MiB 5.4 GiB 261 GiB 86.59 1.18 93 up
--- Begin Message ---
Hi Tonci,
Have a look at the mbr2gpt.exe Windows tool; this is not a Proxmox issue
but a Windows one. If you change to EFI you need to prepare the Windows boot
disk for EFI boot.
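As a rough sketch (not part of the original message; the disk number, VM ID and storage name are assumptions), the conversion inside the Windows guest and the matching VM change could look like:
mbr2gpt /validate /disk:0 /allowFullOS
mbr2gpt /convert /disk:0 /allowFullOS
# qm set <vmid> --bios ovmf --efidisk0 <storage>:1
Take a backup first; after converting, switch the VM from SeaBIOS to OVMF and add an EFI disk as shown above.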
Cheers
On 1/6/24 at 14:26, Tonči Stipičević wrote:
Hello to all,
I did install win2022
--- Begin Message ---
Hi Dominik,
Do you have any expected timeline/version for this to be merged?
Thanks
On 25/5/23 at 9:43, Dominik Csapak wrote:
On 5/25/23 09:32, DERUMIER, Alexandre wrote:
Hi Dominik,
any news about your patches "add cluster-wide hardware device mapping"
i'm cur
--- Begin Message ---
Hi all,
Has anyone tried using an IBM TS4300 tape library with PBS?
I can't see any reference to a specific compatible tape library, only
that the pmtx tool exists and that PBS supports LTO-5 or newer (and LTO-4 as
best effort)...
Thanks
Eneko Lacunza
Director Técnico | Zu
--- Begin Message ---
Thanks for the heads-up, I'll update and keep guest-fsfreeze on to see
if that works.
On 21/2/24 at 11:20, DERUMIER, Alexandre wrote:
Are you running the latest qemu version? (8.1.5-2) (with the VM restarted or live
migrated so it runs the latest version).
Because there was a bug wit
--- Begin Message ---
Hi all,
Tonight one VM "almost locked up" (the login prompt was shown, but it froze when
trying to log in).
It seems it lost IO access when backups started at 22:00:
feb 20 21:17:01 odoo CRON[]: (root) CMD (cd / && run-parts --report
/etc/cron.hourly)
feb 20 21:17:01 odoo CRON[888
--- Begin Message ---
Hi all,
I just noticed that in PBS 3.0-4 and 3.1-2 a user with the Admin role can
manage the root@pam user's 2FA from the GUI.
This is not allowed in PVE 8.1.4 or 7.4-3 for a user in a group that
has the Administrator role. I had to log in via SSH and reset the root@pam
user's 2FA with pveum.
--- Begin Message ---
Hi Sebastian,
On 29/9/23 at 9:53, sebast...@debianfan.de wrote:
if I want to get a full backup - can I copy the 100, 101... folders completely to
the backup server to get the data for an emergency restore later?
Or is it required to use the internal backup function to
--- Begin Message ---
Hi,
The message arrived without the screenshot :)
I'm looking into this issue with SuperMicro support, as other distros
(Debian, Ubuntu) are affected too.
Will report back.
Thanks
On 12/9/23 at 10:50, Eneko Lacunza via pve-user wrote:
Hi all,
I'm trying
--- Begin Message ---
Hi all,
I'm trying to install PVE 8 from ISO boot media 8.0-2 on a Milan server:
Milan 72F3
256GB RAM
Server is Supermicro.
ISO copied to a USB drive, boots OK but hangs after grub menu, before
loading installer:
Any idea? Tried graphical and console installation.
S
--- Begin Message ---
You need a subscription to use the enterprise repository.
Otherwise, use the no-subscription repository instead:
https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
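For illustration only (assuming PVE 8 on Debian bookworm; use "bullseye" for PVE 7), enabling it from the shell would look something like:
# echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
# (comment out or remove the entry in /etc/apt/sources.list.d/pve-enterprise.list)
# apt update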
On 2/8/23 at 13:07, Joseph John wrote:
Dear All,
Good evening
Just finished instal
--- Begin Message ---
Hi all,
We have been experiencing Windows VM hangs during the last weeks, on a
previously stable cluster/VMs.
So far we have seen a Windows 7 (yes I know!) and a Windows 2016 Std
guest crash with 100% multi-CPU use. A hard stop and start leaves the guest
working again.
- Wi
--- Begin Message ---
Hi Philippe,
I don't think softRAID is supported in Proxmox. I think Intel RST/VROC
won't work for installing Proxmox.
You can install Debian with MDRAID, then Proxmox on top of it. MDRAID is
not supported by Proxmox, but it works. We have 3 servers with MDRAID
and Prox
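For reference, a condensed sketch of the "Debian first, Proxmox on top" route (repository line and package names as in the Proxmox wiki; the Debian release shown, bookworm, is an assumption):
# echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-install.list
# wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
# apt update && apt full-upgrade
# apt install proxmox-ve postfix open-iscsi
Install Debian with MDRAID first, then run the above on that system.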
--- Begin Message ---
Hi Stefan,
On 3/6/23 at 13:47, Stefan Radman via pve-user wrote:
I want to create a Proxmox VE HCI cluster on 3 old but identical DL380 Gen9
hosts (128GB, Dual CPU, 4x1GbE, 2x10GbE, 6x1.2T SFF 10K 12Gb SAS HDD on a P440ar
controller).
Corosync will run over 2 x 1GbE
--- Begin Message ---
Hi,
On 25/5/23 at 10:03, Eneko Lacunza wrote:
As Ubuntu 22.04 is in it and the Proxmox kernel is derived from
it, the technical effort may not be so large.
Yes, their current Linux KVM package (15.2) should work with our
5.15 kernel,
it's what I use here locally
--- Begin Message ---
Hi,
On 25/5/23 at 9:53, Dominik Csapak wrote:
On 25/5/23 at 9:24, Dominik Csapak wrote:
2.12.0 (qemu-kvm-2.12.0-64.el8.2.27782638)
* Microsoft Windows Server with Hyper-V 2019 Datacenter edition
* Red Hat Enterprise Linux Kernel-based Virtual Machine
--- Begin Message ---
Hi Dominik,
On 25/5/23 at 9:24, Dominik Csapak wrote:
2.12.0 (qemu-kvm-2.12.0-64.el8.2.27782638)
* Microsoft Windows Server with Hyper-V 2019 Datacenter edition
* Red Hat Enterprise Linux Kernel-based Virtual Machine (KVM) 9.0
and 9.1
* Red Hat Virtual
--- Begin Message ---
Hi,
On 24/5/23 at 15:47, Dominik Csapak wrote:
We're looking to move a PoC in a customer to full-scale production.
Proxmox/Ceph cluster will be for VDI, and some VMs will use vGPU.
I'd like to know if vGPU status is being exposed right now (as of
7.4) for each n
--- Begin Message ---
Hi,
We're looking to move a PoC in a customer to full-scale production.
Proxmox/Ceph cluster will be for VDI, and some VMs will use vGPU.
I'd like to know if vGPU status is being exposed right now (as of 7.4)
for each node through API, as it is done for RAM/CPU, and if no
--- Begin Message ---
Hi Joseph,
You must resolve that "1 full osd(s)" issue. If the other OSDs aren't as full,
you can try to reweight the full OSD so that some data is moved off it.
Another option would be to add additional OSDs.
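A minimal sketch of the two options (the OSD id and weight value are placeholders, not from this thread):
# ceph osd df                      # identify the full OSD and its current reweight
# ceph osd reweight <osd-id> 0.9   # lower the reweight so data moves to other OSDs
or simply add more OSDs (e.g. via the PVE GUI or "pveceph osd create /dev/sdX") and let the cluster rebalance.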
Cheers
On 4/5/23 at 8:39, Joseph John wrote:
Dear All,
G
--- Begin Message ---
Hi Marco,
If you disable the replication run, does it suffer from high load too?
I think there must be something going on inside the VM...
On 26/4/23 at 12:48, Marco Gaiarin wrote:
Situation: a Debian stretch, mostly a 'samba server' for 150+ clients, in a
couple of phis
--- Begin Message ---
Hi Marco,
What disk model?
sdc has only one backing HDD, right?
84 IOPS is not much, but I don't think you can get much more from an HDD
with random RW...
The PERC controller is quite new; maybe the driver in PVE 7 is more optimized...
Cheers
On 19/4/23 at 19:07, Marco Gaia
r "eat our own food"
cluster beforehand!! :-)
Cheers
On 8. 11. 2022, at 18:18, Eneko Lacunza via
pve-user wrote:
From: Eneko Lacunza
Subject: Re: [PVE-User] VMs hung after live migration - Intel CPU
Date: 8 November 2022 18:18:44 CET
To:pve-user@lists.proxmox.com
Hi Jan,
I had
--- Begin Message ---
Hi all,
In PBS, I can add 2FA for the root@pam user using another user with
Administrator privileges (@pbs).
This is not possible in PVE. It allows choosing the root@pam user and asks for
the current user's (LDAP realm) password, but then fails with "permission check
failed".
Shall I
--- Begin Message ---
Hi Bryan,
On 18/1/23 at 1:32, Bryan Fields wrote:
On 1/17/23 3:22 AM, Eneko Lacunza via pve-user wrote:
Hi Bryan,
We started to upgrade our cluster from PVE 7.2 to 7.3 yesterday.
I have enabled the agent in our only VM with Debian 11 running on a
7.3-4 node at the
--- Begin Message ---
Hi Bryan,
We started to upgrade our cluster from PVE 7.2 to 7.3 yesterday.
I have enabled the agent in our only VM with Debian 11 running on a
7.3-4 node at the moment, and performed 5 full backups in a row; the VM
continues working (no hang).
You haven't provided details a
--- Begin Message ---
Yes :)
On 11/1/23 at 16:51, Piviul wrote:
On 1/11/23 14:46, Eneko Lacunza via pve-user wrote:
Hi,
On 11/1/23 at 12:19, Piviul wrote:
On 1/11/23 10:39, Eneko Lacunza via pve-user wrote:
You should change your public_network to 192.168.255.0/24 .
So the
--- Begin Message ---
Hi,
On 11/1/23 at 12:19, Piviul wrote:
On 1/11/23 10:39, Eneko Lacunza via pve-user wrote:
You should change your public_network to 192.168.255.0/24 .
So the public_network is the PVE communication network? I can edit
/etc/pve/ceph.conf directly and then
Ceph networks).
Cheers
On 10/1/23 at 14:29, Piviul wrote:
On 1/10/23 09:04, Eneko Lacunza via pve-user wrote:
I think you may have a wrong Ceph network definition in
/etc/pve/ceph.conf, check for "public_network".
# cat /etc/pve/ceph.conf
[global]
auth_client_requir
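For illustration, the relevant part of /etc/pve/ceph.conf would look something like this (the 192.168.255.0/24 network is the one suggested earlier in this thread; the rest of the file is omitted):
[global]
     public_network = 192.168.255.0/24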
--- Begin Message ---
Hi,
On 10/1/23 at 8:23, Piviul wrote:
On 1/9/23 12:54, Eneko Lacunza via pve-user wrote:
If all Ceph services/clients are on those Proxmox nodes, yes, that
should work.
yes all services/clients are on proxmox nodes...
Also check that there are no old monitor IPs
--- Begin Message ---
Hi Sven,
We have seen this before. I think eventually an update of PVE fixed the
issue because we haven't experienced it lately (weeks).
Cheers
On 9/1/23 at 13:08, Sven wrote:
Hello,
in the last few days, there have been some backup errors. The log is
always the
--- Begin Message ---
Hi,
On 9/1/23 at 12:47, Piviul wrote:
On 1/9/23 10:54, Eneko Lacunza via pve-user wrote:
Hi,
You need to route traffic between LAN network and Ceph network, so
that this works. When you have all monitors using ceph network IPs,
undo the routing.
the routing
--- Begin Message ---
Hi,
You need to route traffic between the LAN network and the Ceph network, so that
this works. When you have all monitors using Ceph network IPs, undo the
routing.
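As an illustration only (the gateway address and topology are made up, 192.168.255.0/24 is the Ceph network from this thread): on a host with interfaces on both networks you could enable forwarding, and on the LAN-only hosts add a temporary route:
# sysctl -w net.ipv4.ip_forward=1                   # on the dual-homed node
# ip route add 192.168.255.0/24 via 192.168.1.10    # on the LAN-side hosts
Once all monitors listen on Ceph network IPs, delete the route and disable forwarding again.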
Cheers
On 9/1/23 at 10:14, Piviul wrote:
Hi all, during the CEPH installation I have dedicated a 10Gb netwo
--- Begin Message ---
Hi,
In Datacenter options you have "Migration Settings", where you can set
the migration network.
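The same setting ends up in /etc/pve/datacenter.cfg; a hedged example (the network value is a placeholder):
# cat /etc/pve/datacenter.cfg
migration: secure,network=10.20.30.0/24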
Cheers
On 15/12/22 at 8:23, Uwe Sauter wrote:
Good morning,
I'm currently replacing one PVE cluster with another. The new hardware has a
bunch of different
network
--- Begin Message ---
Hi Marco,
I only get SMART emails when those values change. So if it stays at
value 8, there should be a way to not receive an email (if you're
getting one now, that is)... I don't think anything was touched for this
in our environment...
Cheers
On 12/12/22 at 17:
--- Begin Message ---
Hi Rainer,
I haven't used erasure-coded pools so I can't comment, but you may have
better luck asking on the ceph-users mailing list, as the question is quite
generic and not Proxmox-related:
https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/
Cheers
On 1/12/22 at
--- Begin Message ---
Hi Joseph,
I suggest backing up the VMs, scp'ing the resulting files, and restoring them on the
destination Proxmox server.
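A minimal sketch of that workflow from the CLI (VM ID, paths, hostname and storage name are placeholders):
# vzdump 100 --mode snapshot --compress zstd --dumpdir /var/lib/vz/dump
# scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst root@new-host:/var/lib/vz/dump/
# qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm   # on the new host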
Cheers
On 24/11/22 at 8:44, Joseph John wrote:
Dear All,
Good afternoon
I am reading the link
https://pve.proxmox.com/wiki/VM_Templates_and_Clones
My intent
point issue was reported, no idea if that has
been tracked.
I think there has been progress with the issues we are seeing in this
Ryzen cluster, although the 5.15 kernel is still unworkable as of 5.15.74...
Cheers
On 9/11/22 at 9:21, Eneko Lacunza via pve-user wrote:
Hi Jan,
On 8/11/22 at
cluster
beforehand!! :-)
Cheers
On 8. 11. 2022, at 18:18, Eneko Lacunza via
pve-user wrote:
From: Eneko Lacunza
Subject: Re: [PVE-User] VMs hung after live migration - Intel CPU
Date: 8 November 2022 18:18:44 CET
To:pve-user@lists.proxmox.com
Hi Jan,
I had some time to re-test this.
I
e 9 VMs node->ryzen5900x -> node-ryzen1700 works as
intended :)
Cheers
On 8/11/22 at 9:40, Eneko Lacunza via pve-user wrote:
Hi Jan,
Yes, there's no issue if the CPUs are the same.
VMs hang when the CPUs are of different enough generations, even when they are of
the same brand and using the KVM64 vCPU
dedicated VLAN on switch
stack.
I have more nodes with EPYC3/Milan on the way, so I’ll test those later as well.
What does your cluster look like hardware-wise? What are the problems you experienced
with VM migration on 5.13->5.19?
Thanks,
JV
On 7. 11. 2022, at 14:40, Eneko Lacunza via
pv
.
I’m probably going to 5.19, I’ve heard other issues with 5.15 as well
(CephFS client issues).
Mark Schouten
On 3 Nov 2022, at 17:55, Eneko Lacunza via
pve-user <mailto:pve-user@lists.proxmox.com>
wrote:
_
--- Begin Message ---
Hi Piviul,
This is usually due to network connectivity issues. Are you able to ping
hosts on 192.168.255.* interfaces?
On 7/11/22 at 8:44, Piviul wrote:
Good morning sirs, in a 3-node Proxmox 6.4 cluster all 3 nodes seem to
work, all VM guests continue to work but I
es with 5.15 as well (CephFS
client issues).
Mark Schouten
On 3 Nov 2022, at 17:55, Eneko Lacunza via
pve-user wrote:
--- Begin Message ---
Hi all,
We have an HCI cluster, upgraded to the latest enterprise version as of this
afternoon:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.60-2-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-13
pve-kernel-5.15: 7.2-12
pve-kerne
--- Begin Message ---
Hi,
What kernel version?
I was advised to upgrade BIOS when I found some TSC issues some time
ago... What motherboard?
On 29/10/22 at 15:37, Jos Chrispijn via pve-user wrote:
Last night I noticed this in /var/log/kern.log :
kvm: SMP vm created on host with unstab
--- Begin Message ---
Hi,
On 25/9/22 at 8:08, Joseph John wrote:
Dear All,
I have a doubt about calculating the number of licenses I would
need. I am planning to use the following hardware:
6 x SYS-620U-TNR 2U Supermicro servers with 2x Intel Xeon processors
[20 cores each]
--- Begin Message ---
Hi,
We're trying to move cloud-init disks from one storage to another,
without luck.
The only way I have found is to shut down the VM, remove the cloud-init disk,
re-create the cloud-init disk on the destination storage, and start the VM.
Is there any way to achieve this without shutting do
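For reference, a sketch of that workaround with qm (the VM ID, the bus/slot of the cloud-init drive and the storage name are assumptions):
# qm shutdown 100
# qm set 100 --delete ide2                  # drop the old cloud-init disk
# qm set 100 --ide2 otherstorage:cloudinit  # re-create it on the destination storage
# qm start 100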
--- Begin Message ---
Hi Michael,
On 19/9/22 at 23:01, Michael Doerner | TechnologyWise via pve-user
wrote:
Hi to this group.
I am looking for some help with the following issue:
Running PVE 7.2-7 (the issue happened on that system before with
earlier PVE 6.x versions), the first VM to
--- Begin Message ---
Hi Justin,
On 16/9/22 at 10:06, Justin Gräflich wrote:
Hello dear Proxmox community, my Proxmox environment has 3 Ethernet
ports (2x 2.5GbE PCIe and 1x GbE onboard as the default bridge)
and I want to set them up as multiple bridges, working in the same
network.
This
--- Begin Message ---
Hi Sebastian,
Upgrading works very well, but make a backup before starting just in case.
Cheers
On 3/8/22 at 16:07, sebast...@debianfan.de wrote:
Hi,
I have a Proxmox 6 host - end of life :-(
Better to back up all the KVM guests and reinstall the host with the new Proxmox
--- Begin Message ---
Hi Bastian,
Thanks for your input, it's good to know others have been bitten by this
and that it's not an issue in our clusters. I hope the updated wiki will help
someone in the future... ;)
Cheers
On 2/8/22 at 0:40, Bastian Sebode wrote:
Hi Eneko,
I had the same iss
--- Begin Message ---
Hi,
On 1/8/22 at 20:16, Arjen via pve-user wrote:
On Monday, August 1st, 2022 at 14:27, Eneko Lacunza via
pve-user wrote:
I have noticed that when upgrading from PVE 6.4 to 7.2, BCM57412 network
devices change name, i.e.:
ens2f0np0 -> enp101s0f0np0
Devices
--- Begin Message ---
Hi all,
I have noticed that when upgrading from PVE 6.4 to 7.2, BCM57412 network
devices change name, i.e.:
ens2f0np0 -> enp101s0f0np0
Devices are seen like:
65:00.0 Ethernet controller: Broadcom Inc. and subsidiaries BCM57412
NetXtreme-E 10Gb RDMA Ethernet Controller (
--- Begin Message ---
Hi Diego,
Most people on this mailing list do not understand Spanish.
It would be better if you could write in English if possible.
Please also give more details about your problem. You can't log
in to Proxmox? From the browser, SSH, or the console?
Sa
--- Begin Message ---
Hi,
Can you post a "ceph osd tree"?
I'd suspect a disk issue. Did you check the laggy PGs' OSDs?
You can also try mysqlbackup with one node's OSDs down, to see if the
issue is in one of the nodes...
Good luck!
On 22/6/22 at 20:39, Branislav Viest wrote:
Hello,
I have
And last, have you enabled squash settings in Synology?
I fill. Yes, ver. 3 i tried to an squash it root to admin.
On 1/6/22 at 14:30, Sebastian Gödecke wrote:
On Wed, 1 June 2022 at 14:28, En
ings in Synology?
I fill. Yes, ver. 3 i tried to an squash it root to admin.
On 1/6/22 at 14:30, Sebastian Gödecke wrote:
On Wed, 1 June 2022 at 14:28, Eneko Lacunza
via pve-user wrote:
-- Forwarded message -
in.
On 1/6/22 at 14:30, Sebastian Gödecke wrote:
On Wed, 1 June 2022 at 14:28, Eneko Lacunza via
pve-user wrote:
-- Forwarded message --
From: Eneko Lacunza
To: pve-user@lists.proxmox.com
at 14:30, Sebastian Gödecke wrote:
On Wed, 1 June 2022 at 14:28, Eneko Lacunza via
pve-user wrote:
-- Forwarded message --
From: Eneko Lacunza
To: pve-user@lists.proxmox.com
Cc:
Bcc:
Date: Wed, 1 Jun
--- Begin Message ---
Hi,
Ok, so the storage is not added.
Can you send a screenshot of the filled-in "Add Storage" dialog and the working pvesm
scan command?
On 1/6/22 at 14:30, Sebastian Gödecke wrote:
On Wed, 1 June 2022 at 14:28, Eneko Lacunza vi
--- Begin Message ---
Hi,
Can you post your /etc/pve/storage.cfg contents?
On 1/6/22 at 14:24, Sebastian Gödecke via pve-user wrote:
On Wed, 1 June 2022 at 14:16, Gilberto Ferreira <
gilberto.nune...@gmail.com> wrote:
Hi
Can you do pvesm scan nfs from Proxmox?
It shows me the
--- Begin Message ---
Hi Sebastian,
On 1/6/22 at 14:10, Sebastian Gödecke wrote:
-- Forwarded message --
From: Eneko Lacunza
To: pve-user@lists.proxmox.com
Cc:
Bcc:
Date: Wed, 1 Jun 2022 14:05:42 +0200
Subject: Re: [PVE-User] cant add-nfs stor
--- Begin Message ---
Hi Sebastian,
On 1/6/22 at 13:44, Sebastian Gödecke via pve-user wrote:
Hi, I have added NFS storage from a Synology NAS to a PVE several times.
It worked quite well, but now, here at my home, I want to add an NFS share
from a Synology to a PVE again but it didn't work.
I ch
--- Begin Message ---
Hi,
On 19/5/22 at 18:50, Eneko Lacunza via pve-user wrote:
On 19/5/22 at 18:31, Stoiko Ivanov wrote:
Today we installed PVE 7.1 (ISO) in a relatively old machine.
any more details on what kind of machine this is
(CPU generation, if it's an older HP
--- Begin Message ---
Hi Stoiko,
On 19/5/22 at 18:31, Stoiko Ivanov wrote:
Today we installed PVE 7.1 (ISO) in a relatively old machine.
any more details on what kind of machine this is
(CPU generation, if it's an older HP/Dell/Supermicro server or
consumer hardware)?
The system is in
--- Begin Message ---
Hi all,
Today we installed PVE 7.1 (ISO) on a relatively old machine.
Installation was fine and Proxmox booted OK. But after configuring the
non-subscription repository and upgrading to PVE 7.2/kernel 5.15,
Proxmox won't boot anymore:
The kernel will print lots of messages
--- Begin Message ---
Hi Marco,
I would try changing that sata0 disk to virtio-blk (maybe in a cloned VM
first). I think squeeze will support it; then try the PBS backup again.
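A hedged sketch of that change from the CLI (VM ID, storage and volume name are placeholders; fstab/boot references inside the guest may need adjusting):
# qm set 105 --delete sata0                     # detaches the disk, it shows up as unused
# qm set 105 --virtio0 local-lvm:vm-105-disk-0  # re-attach the same volume as virtio-blk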
On 18/5/22 at 10:04, Marco Gaiarin wrote:
We are seeing some very severe disk corruption on one of our
installat
seems to be ok.
---
Gilberto Nunes Ferreira
On Thu, 12 May 2022 at 13:35, Eneko Lacunza via pve-user
wrote:
-- Forwarded message --
From: Eneko Lacunza
To: pve-user@lists.proxmox.com
Cc:
Bcc:
Date: Thu, 12 May 2022 18:35:10 +0200
--- Begin Message ---
Hi Alain,
On 12/5/22 at 17:12, Alain Péan wrote:
On 12/05/2022 at 16:57, Eneko Lacunza via pve-user wrote:
Finally we have worked around this issue by downgrading to kernel 5.13:
apt-get install proxmox-ve=7.1-1; apt-get remove
pve-kernel-5.15.35-1-pve (+reboot
rom qemu-server from 7.2-2 to 7.1-4 (version
before issues started):
Issue continues.
We have seen that when bulk migrating VMs from node1 to node2, VMs in
node2 ALSO start to have issues.
We'll try setting max workers for bulk actions to 1 next.
On 12/5/22 at 9
on before
issues started):
Issue continues.
We have seen that when bulk migrating VMs from node1 to node2, VMs in
node2 ALSO start to have issues.
We'll try setting max workers for bulk actions to 1 next.
On 12/5/22 at 9:33, Eneko Lacunza via pve-user wrote:
0] x86/fpu: x87 FPU
will use FXSAVE
May 12 09:30:43 monitor-cloud kernel: [ 0.00] BIOS-provided
physical RAM map:
Is VM clock managed by qemu/kvm?
Thanks
On 11/5/22 at 16:35, Eneko Lacunza via pve-user wrote:
--- Begin Message ---
Hi all,
Yesterday we upgraded a 5-node cluster to PVE 7.2 from PVE 7.1:
# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.35-1-pve)
pve-manager: 7.2-3 (running version: 7.2-3/c743d6c1)
pve-kernel-5.15: 7.2-3
pve-kernel-helper: 7.2-3
pve-kernel-5.13: 7.1-9
pve-kernel-5
--- Begin Message ---
Hi,
Maybe the kernel changed the names of the interfaces.
To fix the issue, you must replace the old interface names with the new names in
/etc/network/interfaces
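For illustration, using the rename reported elsewhere in this archive (ens2f0np0 -> enp101s0f0np0) as a stand-in and made-up addresses, the fix is just updating the names, e.g.:
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp101s0f0np0     # was ens2f0np0 before the upgrade
        bridge-stp off
        bridge-fd 0
Check the new names with "ip -br link" before editing.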
On 6/5/22 at 13:10, storm wrote:
Hello,
on one of my nodes I have total chaos in the network configuration
after upg
de of backup is Compression is ZSTD, Snapshot.
See screenshot.
On 19.04.2022 18:07, Eneko Lacunza wrote:
Yes, all nodes mount the same NFS export.
On 19/4/22 at 16:52, Сергей Цаболов wrote:
The Synology path mount over NFS is shared on 3 nodes ?
19.04.2022 17:46, Eneko Lacunza via
--- Begin Message ---
Hi Michael,
On 19/4/22 at 17:23, Michael Rasmussen via pve-user wrote:
So far it has worked well. Unfortunately, we haven't been able to
find a common pattern/cause in the several clusters where we see the issue.
If your corosync network is very busy and/or you have not confi
--- Begin Message ---
Yes, all nodes mount the same NFS export.
On 19/4/22 at 16:52, Сергей Цаболов wrote:
The Synology path mount over NFS is shared on 3 nodes ?
On 19.04.2022 17:46, Eneko Lacunza via pve-user wrote:
--- Begin Message ---
On 19/4/22 at 16:52, Сергей Цаболов wrote:
The Synology path mount over NFS is shared on 3 nodes ?
On 19.04.2022 17:46, Eneko Lacunza via pve-user wrote:
--- Begin Message ---
Hi,
On 15/4/22 at 18:04, Michael Rasmussen via pve-user wrote:
For the last 10 years I have been using Proxmox I have not had a lost
connection to a server for over 1 sec without it being intentional
but if your circumstances is another usecase I would go for stack
with NFS because all server threads (default 8, on Debian IIRC) are
busy, which causes new clients to not be able to connect.
—
Mark Schouten, CTO
Tuxis B.V.
m...@tuxis.nl
On 19 Apr 2022, at 16:34, Eneko Lacunza via pve-user
wrote:
*From: *Eneko Lacunza
*Subject: **Backup/timeout issu
--- Begin Message ---
Hi all,
We're having backup/timeout issues with traditional non-PBS backups in 6.4.
We have 3 nodes backing up to an NFS server with HDDs. For the same
backup task (with multiple VMs spread across those 3 nodes), one node may
finish all backups, but another may not be able to p
--- Begin Message ---
Hi Fabrizio,
The issue with SMR drives is writes, especially random writes. I suspect you
won't like the performance, but it will work nonetheless :)
Cheers
On 28/3/22 at 17:48, Fabrizio Cuseo wrote:
Hello.
There is a very big incompatibility with ZFS and SMR drives (
--- Begin Message ---
Try refreshing the web interface. Are all nodes upgraded to 7.1?
On 23/2/22 at 3:03, Luis G. Coralle wrote:
I have a 5-node cluster with Proxmox PVE 7.1-10.
I have a problem when I start a backup on a shared NFS backup storage; it
shows the following message:
Some errors
--- Begin Message ---
I didn't receive any input on this. We normally don't use lvm-thin;
should I file an issue? :)
On 16/2/22 at 13:56, Eneko Lacunza via pve-user wrote:
--- Begin Message ---
Hi all,
I'm preparing some scenarios for a PBS lab to be held tomorrow, and
found something interesting.
I have a freshly installed Windows Server 2019 backed up in PBS,
encrypted. This is a test VM, with only 40GB of disk.
If I restore it to default local-lvm storag
--- Begin Message ---
Hi Sergey,
On 16/2/22 at 10:54, Сергей Цаболов wrote:
What IOPS are you getting in your 4K tests? You won't get near direct
disk IOPS...
Do I need to test the host disk or the VM disk?
If you're worried about VM performance, then test the VM disks... :)
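If it helps, a typical 4K random-write test inside the VM could look like this (file path, size, runtime and iodepth are arbitrary choices, not from this thread):
# fio --name=randwrite-4k --filename=/root/fio.test --size=2G --bs=4k \
      --rw=randwrite --ioengine=libaio --direct=1 --iodepth=32 \
      --runtime=60 --time_based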
Cheers
Eneko
--- Begin Message ---
Hi Sergey,
So, does this really make sense? If you put the 2 new disks in node7 in
a pool, that data won't be able to survive a node7 failure.
If you're trying to benchmark the disks, that wouldn't be a good test,
because in a real deployment disk IO for only one VM would
--- Begin Message ---
Hi Sergey,
On 16/2/22 at 9:52, Сергей Цаболов wrote:
I have a 7-node PVE cluster + Ceph storage.
In node 7 I added 2 new disks and want to make a specific new OSD pool on
Ceph.
Is it possible to create a specific pool with the new disks?
You are adding 2 additional disks in eac
--- Begin Message ---
Hi,
I think this could be solved with a "Reverse Sync" functionality:
- Currently Syncs are performed by a PBS that connects to another PBS
and "pulls" datastore content to a local datastore.
- "Reverse Sync" would instead connect to a remote PBS and "push" local
datast
--- Begin Message ---
Hi Dietmar,
On 4/2/22 at 16:46, Dietmar Maurer wrote:
Would it be possible for PVE to create dirty-bitmaps per backup
storage/PBS storage? That would make this kind of setup more efficient
We decided against that because this can be a big memory leak. Please notice
--- Begin Message ---
Hi,
We have set up two PBS storages in a PVE cluster; one PBS is local and
the other PBS is remote.
We are doing this because we don't want the remote PBS to be able to
reach the local LAN, so we can't use a sync job on the remote PBS.
The setup is working fine, but I noticed that dirt
--- Begin Message ---
Hi all,
3 days ago we updated a PVE 6.0 host to 6.4. It had been working
without issue for more than a year since its last update until then.
After the update, one of the VMs has issues with backups:
INFO: Starting Backup of VM 105 (qemu)
INFO: Backup started at 2022-01-21
--- Begin Message ---
Hi Sergey,
I don't understand the issue very well.
Can you post the last 100 lines of syslog from before the reboot?
On 19/1/22 at 12:22, Сергей Цаболов wrote:
Hi,
Like in this old thread
https://forum.proxmox.com/threads/unexpected-reboots-help-need.34310/
I have similar pro
--- Begin Message ---
Hi all,
We have a PVE backup task configured with a remote PBS.
This is working very well, but some days ago a backup failed:
INFO: Starting Backup of VM 103 (qemu)
INFO: Backup started at 2022-01-13 01:11:34
INFO: status = running
INFO: VM Name: odoo
INFO: include disk 's
--- Begin Message ---
Hi Marco,
Sorry for the delay, yesterday was a busy day...
I'm posting this to the list too, it may be helpful to others.
Remember, this procedure was for physical partitions and for resizing
the Bluestore DB.
=== Change/extend the Bluestore block.db partition
1. Obte
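The original steps are truncated here; as a very rough sketch of the kind of procedure meant (assuming a physical block.db partition that has already been grown with a partition tool; the OSD id is a placeholder):
# systemctl stop ceph-osd@<id>
  (grow the block.db partition)
# ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<id>
# systemctl start ceph-osd@<id>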
--- Begin Message ---
Hi Kris,
We have two pfSense VMs on PVE 7.1 clusters; we haven't seen this issue.
Both VMs have Ceph storage (Pacific).
Did you check memory usage inside the VM? If it's spawning new processes and
not killing old ones, this seems like a swapping issue?
On 14/1/22 at 9:18, Kri
--- Begin Message ---
Hi,
Why not use the "Remove OSD" button in the PVE web UI? :-)
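For completeness, the rough CLI equivalent (OSD id is a placeholder; wait for rebalancing to finish before destroying):
# ceph osd out <id>
# pveceph osd destroy <id>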
On 13/1/22 at 9:13, Сергей Цаболов wrote:
Hello to all.
I have a cluster with 7 nodes.
Storage for VM disks and other pool data is on ceph version 15.2.15
(4b7a17f73998a0b4d9bd233cda1db482107e5908) octopus (stable)
1 - 100 of 181 matches