Re: [PVE-User] CEPH performance

2020-06-09 Thread Eneko Lacunza
Hi Marco, El 9/6/20 a las 19:46, Marco Bellini escribió: Dear All, I'm trying to use proxmox on a 4-node cluster with ceph. Every node has a 500G NVMe drive, with a dedicated 10G ceph network at 9000-byte MTU. Despite the NVMe warp speed I can reach when it is used as an LVM volume, as soon as I conv
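
A rough way to compare raw pool speed with in-guest performance for a thread like this (pool name "ceph-nvme" and the test file path are placeholders, not from the original mail):

  # raw Ceph pool write throughput/latency, run on one node for 60s
  rados bench -p ceph-nvme 60 write --no-cleanup
  rados -p ceph-nvme cleanup

  # 4k random-write IOPS from inside a guest, against a file on its virtual disk
  fio --name=rbdtest --filename=/root/fio.test --size=4G --bs=4k --rw=randwrite \
      --iodepth=32 --direct=1 --ioengine=libaio --runtime=60 --time_based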

Re: [PVE-User] Proxmox Datacenter Issue

2020-06-04 Thread Eneko Lacunza
Hi, El 4/6/20 a las 14:52, Sivakumar SARAVANAN escribió: Hello, We have one Proxmox Datacenter and on top of that we have around 15 standalone nodes and a cluster defined. The Datacenter itself is showing "communication error" frequently. All standalone nodes are unavailable to perform any

Re: [PVE-User] critical HA problem on a PVE6 cluster

2020-05-14 Thread Eneko Lacunza
order to see if something was happening at the moment where our cluster had crashed. I will let you know if I have the answer to that mystery... Cheers, Hervé On 12/05/2020 15:00, Eneko Lacunza wrote: Hi Hervé, El 11/5/20 a las 17:58, Herve Ballans escribió: Thanks for your answer. I was a

Re: [PVE-User] critical HA problem on a PVE6 cluster

2020-05-12 Thread Eneko Lacunza
I'd try even a 1G switch just to see if that makes the Proxmox cluster and ceph stable. Are the 10G interfaces very loaded? Cheers Eneko On 11/05/2020 10:39, Eneko Lacunza wrote: Hi Hervé, This seems a network issue. What is the network setup in this cluster? What logs in syslog about corosync a

Re: [PVE-User] critical HA problem on a PVE6 cluster

2020-05-11 Thread Eneko Lacunza
Hi Hervé, This seems to be a network issue. What is the network setup in this cluster? What is logged in syslog about corosync and pve-cluster? Don't enable HA until you have a stable cluster quorum. Cheers Eneko El 11/5/20 a las 10:35, Herve Ballans escribió: Hi everybody, I would like to take the o
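
The checks being asked for here, roughly (standard PVE 6 tooling, no cluster-specific names assumed):

  cat /etc/pve/corosync.conf                 # ring/link addresses actually in use
  pvecm status                               # quorum state as seen by each node
  journalctl -u corosync -u pve-cluster -b   # look for link down, retransmits, token timeouts
  ha-manager status                          # confirm HA is idle before re-enabling it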

[PVE-User] 5.4 kernel NFS issue

2020-04-21 Thread Eneko Lacunza
Dear Proxmox developers, Following forum post: https://forum.proxmox.com/threads/linux-kernel-5-4-for-proxmox-ve.66854/ I upgraded from 5.3.18-2 to 5.4 in a new Proxmox 6.1 node to diagnose a network card issue... Network card seems broken :-( , but I found that NFS storage doesn't work with

Re: [PVE-User] Proxmox 6 loses network every 24 hours

2020-04-16 Thread Eneko Lacunza
Hi Gerald, I'm sorry about your issue. I tried Soyoustart some time ago (3-4 years I'd say), but my experience was really awful. Had to phone about 5 numbers, talked to people in half the countries in Europe and finally the support guy hanged the call. Probably there's some kind of network p

Re: [PVE-User] Some erros in Ceph - PVE6

2020-03-30 Thread Eneko Lacunza
Hi Gilberto, Generally, you have to wait when Ceph is doing rebalancing etc. until it finishes. Some things can go on for hours. Also, try not to change Ceph parameters without being sure and researching documentation and mailing lists. This is a new cluster and you have done things most Ceph us

Re: [PVE-User] Use LVM from XenServerf into Proxmox 6

2020-03-26 Thread Eneko Lacunza
Hi Gilberto, You need to fix your LVM first (not a Proxmox issue). I see you have a lot of PVs, but no (old) LVs show. Also, you seem to be missing at least one PV (/dev/sdb?). Fix that first, then let's see what output "vgs" and "lvs" give. You need to see the VM disks with "lvs" first. Then you
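
The LVM triage being requested, as a minimal sketch (generic LVM tools, nothing XenServer-specific assumed):

  pvs              # physical volumes; a missing PV such as the suspected /dev/sdb shows up as unknown
  vgs              # volume groups and their free/used space
  lvs -a           # logical volumes; the VM disks must appear here before Proxmox can use them
  vgscan; pvscan   # rescan if devices were only just reattached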

Re: [PVE-User] Spillover issue

2020-03-25 Thread Eneko Lacunza
Hi Alwin, El 25/3/20 a las 11:55, Alwin Antreich escribió: The easiest way is to destroy and re-create the OSD with a bigger DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB. It's well below the 3GiB limit in the guideline ;) For now. ;) The cluster is 2 years old now, data amount i
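
The destroy/re-create cycle suggested above, sketched with placeholder names (osd.3, /dev/sdb as data disk, /dev/sdd as DB SSD) and assuming the pveceph version at hand accepts -db_size in GiB:

  ceph osd out 3                # drain data off the OSD first
  # wait for rebalancing to finish (watch ceph -s), then:
  systemctl stop ceph-osd@3
  pveceph osd destroy 3
  pveceph osd create /dev/sdb -db_dev /dev/sdd -db_size 60   # 60 GiB DB, comfortably above the 30 GB RocksDB level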

Re: [PVE-User] Spillover issue

2020-03-25 Thread Eneko Lacunza
Hi Alwin, El 24/3/20 a las 14:54, Alwin Antreich escribió: On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote: Hi Allwin, El 24/3/20 a las 12:24, Alwin Antreich escribió: On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: We're seeing a spillover issue with

Re: [PVE-User] Spillover issue

2020-03-24 Thread Eneko Lacunza
Hi Allwin, El 24/3/20 a las 12:24, Alwin Antreich escribió: On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote: We're seeing a spillover issue with Ceph, using 14.2.8: [...] 3. ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER B

[PVE-User] Spillover issue

2020-03-24 Thread Eneko Lacunza
Hi all, We're seeing a spillover issue with Ceph, using 14.2.8: We originally had 1GB rocks.db partition: 1. ceph health detail HEALTH_WARN BlueFS spillover detected on 3 OSD BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD osd.3 spilled over 78 MiB metadata from 'db' device (1024 M

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-18 Thread Eneko Lacunza
host" type CPU. Do you know if it makes any difference wheater I use the VirtIO SCSI-driver versus the Virtio-SCSI-single driver? I haven't tried -single, maybe others can comment on this. Cheers Eneko Thank you very much Rainer Am 17.03.20 um 14:10 schrieb Eneko Lacunza: Hi, You

Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Eneko Lacunza
Hi, You can try to enable IO threads and assign multiple Ceph disks to the VM, then build some kind of raid0 to increase performance. Generally speaking, an SSD-based Ceph cluster is considered to perform well when a VM gets about 2000 IOPS, and factors like CPU 1-thread performance, network
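
The per-disk tuning mentioned here, as a sketch (VM id 100 and storage name "ceph-ssd" are placeholders):

  qm set 100 --scsihw virtio-scsi-single                  # one controller per disk, so each gets its own IO thread
  qm set 100 --scsi0 ceph-ssd:vm-100-disk-0,iothread=1    # enable iothread on the existing disk
  qm set 100 --scsi1 ceph-ssd:32,iothread=1               # add a second 32G disk to stripe over (raid0 inside the guest)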

Re: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-03-12 Thread Eneko Lacunza
Hi all, El 24/2/20 a las 10:10, Eneko Lacunza escribió: El 20/2/20 a las 14:47, Eneko Lacunza escribió: We tried running the main VM backup yesterday morning, but couldn't reproduce the issue, although during regular backup all 3 nodes are doing backups and in the test we only performe

Re: [PVE-User] qcow2 vs raw format

2020-03-09 Thread Eneko Lacunza
El lun., 9 mar. 2020 a las 9:17, Eneko Lacunza () escribió: Hola Leandro, El 9/3/20 a las 13:11, Leandro Roggerone escribió:

Re: [PVE-User] qcow2 vs raw format

2020-03-09 Thread Eneko Lacunza
Hola Leandro, El 9/3/20 a las 13:11, Leandro Roggerone escribió: Hi guys, after installing pve, I would like to create my first VM. I noticed that the only available format is raw. Question is: is qcow2 deprecated? What are the differences between them? (I already googled it but it is not 100% clear). This i
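
If a qcow2 image from another hypervisor has to land on raw-only storage (LVM-thin, Ceph RBD), the usual path is the one below; all paths and the VM id are placeholders:

  qemu-img info /tmp/disk.qcow2                                     # confirm format and virtual size first
  qemu-img convert -p -f qcow2 -O raw /tmp/disk.qcow2 /tmp/disk.raw
  qm importdisk 100 /tmp/disk.raw local-lvm                         # attach the converted image to VM 100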

Re: [PVE-User] osd replacement to bluestore or filestore

2020-03-02 Thread Eneko Lacunza
Hi MJ, El 29/2/20 a las 12:21, mj escribió: Hi, We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph 12.2.13. I have ordered a replacement SSD, but we have the following doubt: Should we now replace the filestore HDD (journal on an SSD) with a bluestore SSD? Or should we keep

Re: [PVE-User] Create proxmox cluster / storage question.

2020-03-02 Thread Eneko Lacunza
ore resilient, but you have to understand how it works. You may find that having only two servers with Ceph storage can be risky when performing maintenance on one of the servers. Saludos Eneko Thanks! El vie., 28 feb. 2020 a las 11:06, Eneko Lacunza () escribió: Hola Leandro, El 28/2/20

Re: [PVE-User] Create proxmox cluster / storage question.

2020-02-28 Thread Eneko Lacunza
ailure during recovery is high). Saludos Eneko Regards. Leandro. El vie., 28 feb. 2020 a las 5:49, Eneko Lacunza () escribió: Hola Leandro, El 27/2/20 a las 17:29, Leandro Roggerone escribió: Hi guys , i'm still tunning my 5.5 Tb server. While setting storage options during install process, I

Re: [PVE-User] Create proxmox cluster / storage question.

2020-02-28 Thread Eneko Lacunza
Hola Leandro, El 27/2/20 a las 17:29, Leandro Roggerone escribió: Hi guys, I'm still tuning my 5.5 TB server. While setting storage options during the install process, I set 2000 for hd size, so I have 3.5 TB free to assign later. My layout is as follows: root@pve:~# lsblk NAME MAJ:

Re: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-02-24 Thread Eneko Lacunza
Hi, El 24/2/20 a las 15:41, Falco Kleinschmidt escribió: Am 20.02.20 um 14:47 schrieb Eneko Lacunza: Have you tried setting (bandwidth) limits on the backup jobs and see if that helps ? Not really. I've looked through the docs, but seems I can only affect write bandwith on NAS (onl

Re: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-02-24 Thread Eneko Lacunza
Hi Gianni, El 20/2/20 a las 14:47, Eneko Lacunza escribió: We tried running the main VM backup yesterday morning, but couldn't reproduce the issue, although during regular backup all 3 nodes are doing backups and in the test we only performed the backup of the only VM storaged on SSD

Re: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-02-21 Thread Eneko Lacunza
Hi Humberto, We aren't using IPv6 for the VM network, so that can't be the issue. But thanks for the suggestion! :-) Eneko El 21/2/20 a las 12:42, Humberto Jose De Sousa via pve-user escribió:

Re: [PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-02-20 Thread Eneko Lacunza
Hi Gianni, El 20/2/20 a las 13:48, Gianni Milo escribió: See comments below... Thanks for the comments! vmbr0 is on a 2x1Gbit bond0 Ceph public and private are on 2x10Gbit bond2 Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS. Where's the cluster (corosync) traffic flowing ? On v

[PVE-User] VM network disconnect issue after upgrade to PVE 6.1

2020-02-20 Thread Eneko Lacunza
Hi all, On February 11th we upgraded a PVE 5.3 cluster to 5.4, then to 6.1. This is a hyperconverged cluster with 3 servers, redundant network, and Ceph with two storage pools, one HDD-based and the other SSD-based. Each server consists of: - Dell R530 - 1x Xeon E5-2620 8c/16t 2.1GHz - 64GB RAM

Re: [PVE-User] upgrade path to proxmox enterprise repos ?

2020-02-19 Thread Eneko Lacunza
Hi Rainer, You can switch from the community repo to the enterprise repo without any issue, just change sources.list. Cheers Eneko El 19/2/20 a las 13:05, Rainer Krienke escribió: Hello, At the moment I run a proxmox cluster with a separate ceph cluster as storage backend. I do not have a proxmox s
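
The sources.list change amounts to swapping one repository line for the other (PVE 6 / Buster shown):

  # comment out or delete the no-subscription entry in /etc/apt/sources.list
  #deb http://download.proxmox.com/debian/pve buster pve-no-subscription

  # /etc/apt/sources.list.d/pve-enterprise.list
  deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

  apt update    # should now pull from the enterprise repo using the subscription key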

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Eneko Lacunza
-15 --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 às 09:19, Eneko Lacunza escreveu: What about: pvesm list local-lvm ls -l /dev/pve/vm-110-disk-0 El 13/2/20 a las 12:40, Gilberto Nunes escribió: Qu

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Eneko Lacunza
--127--disk--0 pve-vm--104--disk--0 pve-vm--115--disk--0 pve-vm--129--disk--0 --- Gilberto Nunes Ferreira (47) 3025-5907 (47) 99676-7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 às 08:38, Eneko Lacunza escreveu: It's quite strange, what about "l

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Eneko Lacunza
7530 - Whatsapp / Telegram Skype: gilberto.nunes36 Em qui., 13 de fev. de 2020 às 08:11, Eneko Lacunza escreveu: Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? El 13/2/20 a las 11:13, Gilberto Nunes escribió: HI all Still in trouble with this issue cat d

Re: [PVE-User] VZdump: No such disk, but the disk is there!

2020-02-13 Thread Eneko Lacunza
Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"? El 13/2/20 a las 11:13, Gilberto Nunes escribió: HI all Still in trouble with this issue cat daemon.log | grep "Feb 12 22:10" Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner... Feb 12 22:10:01 a2web syst
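
The checks from this thread, plus a couple that usually narrow a "no such disk" error down (VM 110 and local-lvm come from the thread, the rest is generic):

  lvs                                  # is pve/vm-110-disk-0 listed and active ('a' in the attr column)?
  cat /etc/pve/storage.cfg             # which storage vzdump thinks the disk lives on
  pvesm path local-lvm:vm-110-disk-0   # the device path vzdump will try to open
  lvchange -ay pve/vm-110-disk-0       # activate the LV if it exists but is inactive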

Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Eneko Lacunza
I think firefly is too old. Either you create backups and restore in the new cluster, or you'll have to upgrade the old clusters at least to Proxmox 5 and Ceph Mimic. Cheers El 30/1/20 a las 12:59, Fabrizio Cuseo escribió: I can't afford the long downtime. With my method, the downtime is onl

Re: [PVE-User] PVE 5.4 - resize a NFS disk truncated it

2020-01-28 Thread Eneko Lacunza
=f8b829aabae2fdc8bdd9ace741bbef3598b892f2 Eneko Lacunza hat am 28. Januar 2020 09:26 geschrieben: Hi all, We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS server for storing file backups. The setup is as follows: - Debian 9 VM with 2 disks; system disk con Ceph RBD, file backup data disk

Re: [PVE-User] PVE 5.4 - resize a NFS disk truncated it

2020-01-28 Thread Eneko Lacunza
ere are no guarantees for the results. G. On Tue, 28 Jan 2020 at 08:27, Eneko Lacunza wrote: Hi all, We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS server for storing file backups. The setup is as follows: - Debian 9 VM with 2 disks; system disk con Ceph RBD, file backup

[PVE-User] PVE 5.4 - resize a NFS disk truncated it

2020-01-28 Thread Eneko Lacunza
Hi all, We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS server for storing file backups. The setup is as follows: - Debian 9 VM with 2 disks; system disk on Ceph RBD, file backup data disk on NFS (6.5TB) - NFS storage on Synology NAS. The backup disk was getting full, s

Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-11-22 Thread Eneko Lacunza
las 11:18, Alexandre DERUMIER escribió: Hi, have you upgraded all your nodes to corosync 3.0.2-pve4 libknet1:amd64 1.13-pve1? (available in the pve-no-subscription and pve-enterprise repos) - Mail original - De: "Eneko Lacunz

Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-11-12 Thread Eneko Lacunza
Hi all, We are seeing this also with 5.4-3 clusters, a node was fenced in two different clusters without any apparent reason. Neither of the clusters had a node fence before... Cheers Eneko El 7/11/19 a las 15:35, Eneko Lacunza escribió: Hi all, We updated our office cluster to get the

Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-11-07 Thread Eneko Lacunza
Hi all, We updated our office cluster to get the patch, but got a node reboot on October 31st. The node was fenced and rebooted, and everything continued working OK. Is anyone else still experiencing this problem? Cheers Eneko El 2/10/19 a las 18:09, Hervé Ballans escribió: Hi Alexandre, We encounter exact

Re: [PVE-User] Migrating 4->5, from hammer to luminous: some shortcut?

2019-08-28 Thread Eneko Lacunza
Hi Marco, I don't understand why you are asking about untested/undocumented migration procedures. Use the documented, tested one. It works, has been proven, and has zero downtime. Don't waste time :-) Cheers El 27/8/19 a las 17:45, Marco Gaiarin escribió: Why an intermediate passage via '

Re: [PVE-User] Will subscription work behind NAT?

2019-08-22 Thread Eneko Lacunza
Hi, El 22/8/19 a las 12:26, Patrick Westenberg escribió: will the subscription check work if hosts have private IPs only and are not accessible from the web? Yes, it works if the hosts have access to the internet via HTTP/HTTPS (i.e. apt-get update works, for example). Cheers Eneko -- Zuzendari

Re: [PVE-User] Proxmox 6 - disk problem

2019-08-22 Thread Eneko Lacunza
Hi, So what disks/RAID controller are there on the server? :) My guess is the disk has failed :) Did you try smartctl? Also, I think attachments are stripped off :) Cheers El 22/8/19 a las 10:03, lord_Niedzwiedz escribió: CPU usage 0.04% of 32 CPU(s) IO delay 20.38% !! Load avera
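
A quick health pass on the suspect disk (the device name is a placeholder; behind a hardware RAID controller the vendor tool or smartctl -d options are needed instead):

  smartctl -H /dev/sda               # overall SMART verdict
  smartctl -a /dev/sda               # full attributes: reallocated/pending sectors, error log
  dmesg | grep -iE 'ata|i/o error'   # kernel-side I/O errors behind the 20% IO delay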

Re: [PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-22 Thread Eneko Lacunza
Hi Dominik, El 22/8/19 a las 9:50, Dominik Csapak escribió: On 8/21/19 2:37 PM, Eneko Lacunza wrote: # pveceph createosd /dev/sdb -db_dev /dev/sdd device '/dev/sdd' is already in use and has no LVM on it this sounds like a bug.. can you open one on bugzilla.proxmox.co

[PVE-User] Ceph server manageability issue in upgraded PVE 6 Ceph Server

2019-08-21 Thread Eneko Lacunza
Hi all, I'm reporting here an issue that I think should be handled somehow by Proxmox, maybe with extended migration notes. Starting point: - Proxmox 5.4 cluster with Ceph Server. Proxmox nodes have 1 SSD + 3 HDD. System and Ceph OSD journals (filestore or bluestore db) are on the SSD. Thi
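
One generic way to free a DB SSD that still carries partitions from the old filestore/bluestore layout, so pveceph can build its own LVM on it. Device names match the thread; only do this once every OSD that used those partitions has been destroyed:

  lsblk /dev/sdd                           # confirm only leftover journal/DB partitions remain
  ceph-volume lvm zap /dev/sdd --destroy   # wipe partitions and LVM traces from the device
  pveceph createosd /dev/sdb -db_dev /dev/sdd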

Re: [PVE-User] GUI Node Ceph->OSD screen not showing OSDs after upgrade from 5.4 to 6.0

2019-08-21 Thread Eneko Lacunza
Here it is: https://bugzilla.proxmox.com/show_bug.cgi?id=2340 El 21/8/19 a las 14:03, Tim Marx escribió: Hi, thanks for investigating. Please file a bug at https://bugzilla.proxmox.com/, this will help us to keep track of it. Eneko Lacunza hat am 21. August 2019 13:27 geschrieben: Hi

Re: [PVE-User] GUI Node Ceph->OSD screen not showing OSDs after upgrade from 5.4 to 6.0

2019-08-21 Thread Eneko Lacunza
rush remove" and now it works! Shall I report a bug? I can provide a problematic JSON if needed. Thanks a lot Eneko El 21/8/19 a las 10:41, Eneko Lacunza escribió: Hi all, We have just upgraded our office 5-node cluster from 5.4 to 6.0. Cluster has 15 OSDs in 4 of the nodes. Everythi

[PVE-User] GUI Node Ceph->OSD screen not showing OSDs after upgrade from 5.4 to 6.0

2019-08-21 Thread Eneko Lacunza
Hi all, We have just upgraded our office 5-node cluster from 5.4 to 6.0. The cluster has 15 OSDs in 4 of the nodes. Everything was quite smooth and we have even cleared almost all Ceph warnings (one BlueFS spillover still left). Thanks a lot for the excellent work! We have noticed though that in

Re: [PVE-User] Move VM's HDD incl. snapshots from one Ceph to another

2019-08-19 Thread Eneko Lacunza
xport-diff) command over ssh. On Mon, 19 Aug 2019, 12:26 Eneko Lacunza, wrote: Hi Uwe, El 19/8/19 a las 10:14, Uwe Sauter escribió: is it possible to move a VM's disks from one Ceph cluster to another, including all snapshots that those disks have? The GUI doesn't let me do it but is th
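
The export-diff approach over ssh, sketched with placeholder pool/image/snapshot names; the destination image has to exist before the first import-diff:

  ssh other-cluster-node rbd create rbd/vm-100-disk-0 --size 32G
  # base data up to the first snapshot
  rbd export-diff rbd/vm-100-disk-0@snap1 - | ssh other-cluster-node rbd import-diff - rbd/vm-100-disk-0
  # each later snapshot (and finally the live head) as an incremental on top
  rbd export-diff --from-snap snap1 rbd/vm-100-disk-0@snap2 - | ssh other-cluster-node rbd import-diff - rbd/vm-100-disk-0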

Re: [PVE-User] Move VM's HDD incl. snapshots from one Ceph to another

2019-08-19 Thread Eneko Lacunza
Hi Uwe, El 19/8/19 a las 10:14, Uwe Sauter escribió: is it possible to move a VM's disks from one Ceph cluster to another, including all snapshots that those disks have? The GUI doesn't let me do it but is there some commandline magic that will move the disks and all I have to do is edit the V

Re: [PVE-User] adding ceph osd nodes

2019-07-18 Thread Eneko Lacunza
Hi, El 18/7/19 a las 13:43, mj escribió: On 7/17/19 2:47 PM, Alwin Antreich wrote: I like to add, though not explicitly asked. While it is technically possible, the cluster will lose its enterprise support. As Ceph is under support on Proxmox VE nodes too. Hmm. That is a disappointing conse

Re: [PVE-User] [pve-devel] Proxmox VE 6.0 beta released!

2019-07-05 Thread Eneko Lacunza
Hi Martin, Thanks a lot for your hard work, Maurer-ITans and the rest of developers... It seems that in PVE 6.0, with corosync 3.0, multicast won't be used by default? I think it could be interesting to have a PVE_6.x cluster wiki page to explain a bit the new cluster, max nodes, ... Also, t

Re: [PVE-User] Shared same rbd disk on 2 Vms

2019-07-02 Thread Eneko Lacunza
You need a cluster file system to be able to do this (gfs for example). ext4 can't be mounted by two systems at the same time. https://en.wikipedia.org/wiki/GFS2 Maybe you can consider using NFS instead... Cheers El 2/7/19 a las 14:43, Hervé Ballans escribió: Dear list, Sorry if the questio

Re: [PVE-User] Proxmox storage usage

2019-06-24 Thread Eneko Lacunza
Hi, root@server5:/var/lib/vz# df -h FilesystemSize Used Avail Use% Mounted on udev 48G 0 48G 0% /dev tmpfs 9.5G 9.6M 9.5G 1% /run /dev/mapper/pve-root 96G 1.8G 95G 2% / tmpfs 48G 37M 48G 1% /dev/shm tmpfs

Re: [PVE-User] Proxmox storage usage

2019-06-24 Thread Eneko Lacunza
Hi Rutger, El 24/6/19 a las 11:21, Rutger Verhoeven escribió: I recently installed a proxmox server. However the storage usage is tremendous: (see attachment) root@server5:/var/lib/vz# df -h FilesystemSize Used Avail Use% Mounted on udev 48G 0 48G 0% /dev

Re: [PVE-User] Ceph bluestore OSD Journal/DB disk size

2019-05-29 Thread Eneko Lacunza
Hi Alwin, El 29/5/19 a las 11:59, Alwin Antreich escribió: I have noticed that our office Proxmox cluster has a Bluestore OSD with a very small db partition. This OSD was created from GUI on 12th march this year: This node has 4 OSDs: - osd.12: bluestore, all SSD - osd.3: bluestore, SSD db + s

[PVE-User] Ceph bluestore OSD Journal/DB disk size

2019-05-29 Thread Eneko Lacunza
Hi all, I have noticed that our office Proxmox cluster has a Bluestore OSD with a very small db partition. This OSD was created from GUI on 12th march this year: This node has 4 OSDs: - osd.12: bluestore, all SSD - osd.3: bluestore, SSD db + spinning - osd.2: filestore, SSD journal + spinning

Re: [PVE-User] Strange cluster/graphics problem in 3-node cluster

2019-05-23 Thread Eneko Lacunza
e pve wiki ? Have you tried UDPU instead of multicast as last option ? No idea about missing rrd graphs... On Thu, 16 May 2019 at 16:41, Eneko Lacunza wrote: Hi all, In a 3-node cluster, we're experiencing a strange clustering problem. Sometimes, the first node drops out of quorum, usuall

Re: [PVE-User] Strange cluster/graphics problem in 3-node cluster

2019-05-17 Thread Eneko Lacunza
, but haven't tried UDPU, yet. No idea about missing rrd graphs... This is the strange part, and the reason for my mail. Otherwise I'd be preparing maintenance windows to change node's network config right away... :) Thanks a lot Eneko On Thu, 16 May 2019 at 16:41, Eneko Lacunza

[PVE-User] Strange cluster/graphics problem in 3-node cluster

2019-05-16 Thread Eneko Lacunza
Hi all, In a 3-node cluster, we're experiencing a strange clustering problem. Sometimes, the first node drops out of quorum, usually for some hours, only to return back to quorum later. During the last 2 weeks, this has happened 7 times. Additionally, one time the second and third node dropp

Re: [PVE-User] Shutting down Windows 10, 2016 and 2019 VMs

2019-05-14 Thread Eneko Lacunza
Hi, I wonder how much money you paid Maurer IT for their excellent open source product, and Red Hat for the very same? Did you know that you can get support tickets from Maurer IT? I guess you'll need them the next time you need help... Cheers El 14/5/19 a las 2:20, Saint Michael escribi

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Eneko Lacunza
Hi Alwin, El 22/3/19 a las 15:04, Alwin Antreich escribió: On a point release, an ISO is generated and the release info is needed for that. The volume of package updates alone makes a separate announcement of changes sen

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Eneko Lacunza
Hi, El 22/3/19 a las 9:59, Alwin Antreich escribió: On Fri, Mar 22, 2019 at 09:03:22AM +0100, Eneko Lacunza wrote: El 22/3/19 a las 8:35, Alwin Antreich escribió: On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote: We have removed an OSD disk from a server in our office cluster

Re: [PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-22 Thread Eneko Lacunza
Hi Alwin, El 22/3/19 a las 8:35, Alwin Antreich escribió: On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote: We have removed an OSD disk from a server in our office cluster, removing partitions (with --cleanup 1) and that has made the server unable to boot (we have seen this in 2

[PVE-User] Boot disk corruption after Ceph OSD destroy with cleanup

2019-03-21 Thread Eneko Lacunza
Hi all, We have removed an OSD disk from a server in our office cluster, removing partitions (with --cleanup 1) and that has made the server unable to boot (we have seen this in 2 servers in a row...) Looking at the command output: --- cut --- root@sanmarko:~# pveceph osd destroy 5 --cleanup

Re: [PVE-User] Weired trouble with NIC in Windows Server

2019-03-14 Thread Eneko Lacunza
Or vmbr0 has no interface connected to DHCP server. :-) El 13/3/19 a las 23:49, Craig Jones escribió: Sounds like DHCP isn't enabled on the interface in the guest OS. On 3/13/2019 5:07 PM, Gilberto Nunes wrote: Hi there I am facing a weired problem with NIC in Windows Server. When use vmbr0, t

Re: [PVE-User] Shared storage recommendations

2019-02-26 Thread Eneko Lacunza
Hi El 26/2/19 a las 10:41, Thomas Lamprecht escribió: On 2/25/19 6:22 PM, Frederic Van Espen wrote: We're designing a new datacenter network where we will run proxmox nodes on about 30 servers. Of course, shared storage is a part of the design. What kind of shared storage would anyone recommen

[PVE-User] Fwd: Return received for Transmedia NTT8UL - Charger with...

2019-02-18 Thread Eneko Lacunza
FYI -- Zuzendari Teknikoa / Director Técnico Binovo IT Human Project, S.L. Telf. 943569206 Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa) www.binovo.es

Re: [PVE-User] Could a pve cluster has 50 nodes, or more?

2019-02-14 Thread Eneko Lacunza
Hi Denis, El 13/2/19 a las 23:28, Denis Morejon escribió: I note that sharing the db file, even using multicast protocol, could put a limit to the maximum number of members. Any thinking about a centralized db paradigm? How many members have you put together? Docs talk about 32 nodes: https:/

Re: [PVE-User] Restore VM Backup in new HDD with new size...

2019-02-06 Thread Eneko Lacunza
Hi Gilberto, No, you can't do that. You must first restore and then resize the disk (I think you must do it from the command line). Remember to reduce the partitions/filesystems on that disk first. Cheers El 6/2/19 a las 11:18, Gilberto Nunes escribió: Hi list, I have here a VM which has direct
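
Growing after the restore is one command; shrinking has to be done bottom-up and outside the GUI, so treat this purely as a sketch (VM 100, scsi0 and LVM-thin storage are assumptions):

  qmrestore /mnt/backup/vzdump-qemu-100.vma.lzo 100 --storage local-lvm   # restore first
  qm resize 100 scsi0 +20G               # growing is supported directly
  # shrinking: first shrink the filesystem and partition inside the guest (e.g. resize2fs),
  # then the backing volume, then let PVE re-read the size:
  lvreduce -L 50G pve/vm-100-disk-0
  qm rescan --vmid 100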

[PVE-User] Bug #1660 - Guest Linux kernel crash after live migration from/to amd to/from Intel, with more than 1 vcore

2019-02-05 Thread Eneko Lacunza
Hi, I'd like to know if the Proxmox team will look at bug #1660, which is almost 1 year old; I provided the requested info, other users made additional tests, and now that the AMD/EPYC platform is a very interesting one, I think it will be more common to have mixed Intel/AMD clusters. Thanks a lot Enek

Re: [PVE-User] Join cluster first time - problem

2019-01-23 Thread Eneko Lacunza
Just restrict "local-zfs" storage to node1 (can be done from WebGUI) El 23/1/19 a las 16:37, lord_Niedzwiedz escribió: Ok, when I added in node2 this: lvmthin: local-lvm     thinpool data     vgname pve     content rootdir,images I see local-vm in node2   ;-) But i see to local-zfs
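
Both pieces of this as configuration (storage names taken from the thread, node names are placeholders):

  pvesm set local-zfs --nodes node1    # hide the ZFS storage from nodes that don't have it

  # equivalent /etc/pve/storage.cfg entry for the snippet quoted above, limited to node2
  lvmthin: local-lvm
          thinpool data
          vgname pve
          content rootdir,images
          nodes node2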

Re: [PVE-User] Join cluster first time - problem

2019-01-23 Thread Eneko Lacunza
Hi, Seems you have VMs on host2. Please read: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Adding_nodes_to_the_Cluster Cheers El 23/1/19 a las 15:37, lord_Niedzwiedz escribió:         I do it first time. I create cluster ok on host1: pvecm create klaster1 pvecm status And on host2 i

Re: [PVE-User] Proxmox Ceph high memory usage

2019-01-16 Thread Eneko Lacunza
Hi Gilberto, Are you using Bluestore? What version of Ceph? El 16/1/19 a las 13:11, Gilberto Nunes escribió: Hi there, Has anybody else experienced high memory usage in a Proxmox Ceph storage server? I have a 6-node PVE Ceph cluster and after the upgrade I have noticed this high memory usage... All servers have 16

Re: [PVE-User] PVE 4 -> 5, multipath differences?

2019-01-07 Thread Eneko Lacunza
Hi, I see the same behaviour with an EMC VNXe3200 (two priorities). I assume it is the right thing to do, host really only has 2x1Gbit channels to storage... :) El 7/1/19 a las 10:37, Marco Gaiarin escribió: Mandi! Sten Aus In chel di` si favelave... As this is my third storage for not

Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-17 Thread Eneko Lacunza
Hi Alwin, El 17/12/18 a las 10:22, Alwin Antreich escribió: b) depends on the workload of your nodes. Modern server hardware has enough power to be able to run multiple services. It all comes down to have enough resources for each domain (eg. Ceph, KVM, CT, host). I recommend to use a simple

Re: [PVE-User] (Very) basic question regarding PVE Ceph integration

2018-12-17 Thread Eneko Lacunza
Hi, El 16/12/18 a las 17:16, Frank Thommen escribió: I understand that with the new PVE release PVE hosts (hypervisors) can be used as Ceph servers.  But it's not clear to me if (or when) that makes sense.  Do I really want to have Ceph MDS/OSD on the same hardware as my hypervisors?  Doesn't

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-05 Thread Eneko Lacunza
lerant (HA) system and other network traffic may disturb corosync. I'd recommend a thorough reading of the document quoted above. Don't use vmbr0 for cluster traffic. Don't use any vmbr for cluster traffic. Stefan On Dec 5, 2018, at 13:34, Eneko Lacunza mailto:elacu...@binovo.e

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-05 Thread Eneko Lacunza
e ipv6 nd and nd-ra usage. https://pve.proxmox.com/wiki/Multicast_notes has some more notes and examples around multicast_querier. Kind regards Ronny Aasen On 04.12.2018 17:54, Eneko Lacunza wrote: Hi all, Seems I found the solution. eth3 on proxmox1 is a Broadcom 1Gbit card connected to

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-05 Thread Eneko Lacunza
although not used for multicast, was confusing someone... Thanks a lot Eneko kind regards Ronny Aasen On 04.12.2018 17:54, Eneko Lacunza wrote: Hi all, Seems I found the solution. eth3 on proxmox1 is a broadcom 1gbit card connected to HPE switch; it is VLAN 10 untagged on the switch end.

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-04 Thread Eneko Lacunza
good; cluster is stable and omping is happy too after 10 minutes :) It is strange because multicast is on VLAN 1 network... Cheers and thanks a lot Eneko El 4/12/18 a las 16:18, Eneko Lacunza escribió: hi Marcus, El 4/12/18 a las 16:09, Marcus Haarmann escribió: Hi, you did not provide details

Re: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-04 Thread Eneko Lacunza
dev eth4.100 Cluster is running on vmbr0 network (192.168.0.0/24) Cheers Marcus Haarmann Von: "Eneko Lacunza" An: "pve-user" Gesendet: Dienstag, 4. Dezember 2018 15:57:10 Betreff: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card? Hi all, We have j

[PVE-User] Multicast problems with Intel X540 - 10Gtek network card?

2018-12-04 Thread Eneko Lacunza
Hi all, We have just updated a 3-node Proxmox cluster from 3.4 to 5.2, Ceph Hammer to Luminous, and the network from 1Gbit to 10Gbit... one of the three Proxmox nodes is new too :) Generally all was good and VMs are working well. :-) BUT, we have some problems with the cluster; proxmox1 nod
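
The standard multicast sanity test from the Proxmox docs, run at the same time on all three nodes (hostnames are placeholders):

  omping -c 10000 -i 0.001 -F -q proxmox1 proxmox2 proxmox3   # short burst test
  omping -c 600 -i 1 -q proxmox1 proxmox2 proxmox3            # ~10 minutes, catches IGMP snooping timeouts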

Re: [PVE-User] Ceph repository

2018-10-23 Thread Eneko Lacunza
Hi Thomas, El 23/10/18 a las 8:02, Thomas Lamprecht escribió: On 10/22/18 5:29 PM, Eneko Lacunza wrote: El 22/10/18 a las 17:17, Eneko Lacunza escribió: I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a PVE 4 to 5 migration: https://pve.proxmox.com

Re: [PVE-User] Ceph repository

2018-10-22 Thread Eneko Lacunza
Hi, El 22/10/18 a las 17:17, Eneko Lacunza escribió: I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a PVE 4 to 5 migration: https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous I see that after the procedure, there would be 2 repositories with ceph packages

[PVE-User] Ceph repository

2018-10-22 Thread Eneko Lacunza
Hi all, I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a PVE 4 to 5 migration: https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous I see that after the procedure, there would be 2 repositories with ceph packages: the official ceph.com repo and the PVE repo. Is this nece
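
For reference, the two repositories that end up coexisting look roughly like this on PVE 5 / Luminous (file names are illustrative); the ceph.com one can simply be removed once packages come from Proxmox:

  # /etc/apt/sources.list.d/ceph.list  (Proxmox-built Ceph packages)
  deb http://download.proxmox.com/debian/ceph-luminous stretch main

  # /etc/apt/sources.list.d/ceph-com.list  (upstream ceph.com, safe to drop)
  deb https://download.ceph.com/debian-luminous/ stretch main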

Re: [PVE-User] Ceph freeze upgrading from Hammer to Jewel - lessons learned

2018-10-19 Thread Eneko Lacunza
Hi Ronny, El 19/10/18 a las 11:22, Ronny Aasen escribió: On 10/19/18 10:05 AM, Eneko Lacunza wrote: Hi all, Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4 cluster, from Hammer to Jewel following the procedure in the wiki: https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel It

[PVE-User] Ceph freeze upgrading from Hammer to Jewel - lessons learned

2018-10-19 Thread Eneko Lacunza
Hi all, Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4 cluster, from Hammer to Jewel following the procedure in the wiki: https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel It went smoothly for the first two nodes, but we had a grave problem with the 3rd, because when shutting do

Re: [PVE-User] PRoxmox and ceph with just 3 server.

2018-08-31 Thread Eneko Lacunza
You can do so from CLI: ceph osd crush reweight osd.N https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/ El 31/08/18 a las 13:01, Gilberto Nunes escribió: Thanks a lot for all this advice guys. I still learn with Ceph. So I have a doubt regarding
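
An example of the reweight being described, assuming osd.7 is one of the 8TB drives (CRUSH weight is roughly the capacity in TiB, so ~7.28 for 8TB and ~3.64 for 4TB):

  ceph osd df tree                     # current weights and per-OSD utilisation
  ceph osd crush reweight osd.7 3.64   # make the 8TB OSD take the same share as a 4TB one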

Re: [PVE-User] PRoxmox and ceph with just 3 server.

2018-08-30 Thread Eneko Lacunza
Hi Gilberto, It's technically possible. I don't know what performance you expect for those 2 SQL servers though (don't expect much). Cheers El 30/08/18 a las 16:47, Gilberto Nunes escribió: Hi there It's possible create a scenario with 3 PowerEdge r540, with Proxmox and Ceph. The server

Re: [PVE-User] Proxmox Ceph with differents HDD Size

2018-08-30 Thread Eneko Lacunza
El 30/08/18 a las 14:37, Mark Schouten escribió: On Thu, 2018-08-30 at 09:30 -0300, Gilberto Nunes wrote: Any advice to, at least, mitigate the low performance? Balance the number of spinning disks and the size per server. This will probably be the safest. It's not said that not balancing degr

Re: [PVE-User] Proxmox Ceph with differents HDD Size

2018-08-29 Thread Eneko Lacunza
You should change the weight of the 8TB disks, so that they have the same weight as the other 4TB disks. That should fix the performance issue, but you'd waste half the space on those 8TB disks :) El 23/08/18 a las 00:19, Brian : escribió: It's really not a great idea because the larger drives will te

Re: [PVE-User] Cluster doesn't recover automatically after blackout

2018-08-01 Thread Eneko Lacunza
Hi, El 01/08/18 a las 13:57, Alwin Antreich escribió: On Wed, Aug 01, 2018 at 01:40:34PM +0200, Eneko Lacunza wrote: El 01/08/18 a las 12:56, Alwin Antreich escribió: On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote: Hi all, This morning there was a quite long blackout which

Re: [PVE-User] Cluster doesn't recover automatically after blackout

2018-08-01 Thread Eneko Lacunza
Hi Alwin, El 01/08/18 a las 12:56, Alwin Antreich escribió: On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote: Hi all, This morning there was a quite long blackout which powered off a cluster of 3 proxmox 5.1 servers. All 3 servers the same make and model, so they need the same

[PVE-User] Cluster doesn't recover automatically after blackout

2018-08-01 Thread Eneko Lacunza
Hi all, This morning there was a quite long blackout which powered off a cluster of 3 Proxmox 5.1 servers. All 3 servers are the same make and model, so they need the same amount of time to boot. When the power came back, the servers started correctly but corosync couldn't set up a quorum. Events

Re: [PVE-User] Bug when removing a VM

2018-06-21 Thread Eneko Lacunza
Hi, I'm sorry for your troubles, I hope you had good backups. You should never share storage between clusters. -> If you must or it's convenient to do so, just don't repeat the VM IDs... For example on an NFS server, another thing you can do is just use a different directory for each cluster; wh
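
If one NFS server really has to serve two clusters, separate exports (or at least separate directories) keep VM IDs from colliding; a sketch of the storage.cfg entry each cluster would carry (names, paths and the address are made up):

  # /etc/pve/storage.cfg on cluster A
  nfs: nas-backup
          server 192.168.0.50
          export /volume1/pve-clusterA
          path /mnt/pve/nas-backup
          content backup

  # /etc/pve/storage.cfg on cluster B: same entry, but export /volume1/pve-clusterB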

Re: [PVE-User] Strange Issues with Windows Server 2012 R2 guests

2018-06-04 Thread Eneko Lacunza
Hi Gregor, El 03/06/18 a las 14:39, Gregor Burck escribió: I migrated different WS2012R2 machines to proxmox. I have a strange issue. Sometimes one or another client forgets its nameserver entry. Haven't had things like this before, so I think it could be related to the Proxmox environment? I think this is a namese

Re: [PVE-User] Custom storage in ProxMox 5

2018-04-04 Thread Eneko Lacunza
Hi, El 30/03/18 a las 05:05, Lindsay Mathieson escribió: Ceph has rather larger overheads, much bigger PITA to admin, does not perform as well on whitebox hardware – in fact the Ceph crowd std reply to issues is to spend big on enterprise hardware and is far less flexible. Nonsense. We  use wh

[PVE-User] NoVNC shell crash/timeout

2018-03-01 Thread Eneko Lacunza
Hi all, We have been setting up a new 3-node HA cluster with Ceph storage, and migrating VMs from VMWare to Proxmox for the last 3 weeks. Overall the setup and migration has been quite painless; I also appreciated the ability to automatically create Proxmox storages after Ceph pool creation,

Re: [PVE-User] PVE 5.1 - Intel <-> AMD migration crash with Debian 9

2018-02-07 Thread Eneko Lacunza
ashes both times ;) We have seen the problem also with Ubuntu 14.04 kernel 3.16.0-30-generic... Cheers Eneko bye Harald Am 07.02.2018 um 09:33 schrieb Eneko Lacunza: https://bugzilla.proxmox.com/show_bug.cgi?id=1660 El 07/02/18 a las 09:22, Eneko Lacunza escribió: Hi, I finally reproduced t

Re: [PVE-User] PVE 5.1 - Intel <-> AMD migration crash with Debian 9

2018-02-07 Thread Eneko Lacunza
https://bugzilla.proxmox.com/show_bug.cgi?id=1660 El 07/02/18 a las 09:22, Eneko Lacunza escribió: Hi, I finally reproduced the problem with a Ubuntu 14.04.2 LTS VM, so not a Debian 9-only problem. Is there anything I to report this bug to Proxmox/upstream? El 06/02/18 a las 12:07, Eneko
