Hi Marco,
On 9/6/20 at 19:46, Marco Bellini wrote:
Dear All,
I'm trying to use Proxmox on a 4-node cluster with Ceph.
Every node has a 500G NVMe drive, with a dedicated 10G Ceph network with
9000-byte MTU.
Despite the NVMe warp speed I can reach when it is used as an LVM volume, as soon as I
conv
Hi,
On 4/6/20 at 14:52, Sivakumar SARAVANAN wrote:
Hello,
We have one Proxmox Datacenter and on top of that we have around 15
standalone nodes and a cluster defined.
The Datacenter itself is showing "communication error" frequently. All
standalone nodes are unable to perform any
order to see if something
was happening at the moment when our cluster crashed.
I will let you know if I have the answer to that mystery...
Cheers,
Hervé
On 12/05/2020 15:00, Eneko Lacunza wrote:
Hi Hervé,
On 11/5/20 at 17:58, Herve Ballans wrote:
Thanks for your answer. I was a
I'd try even a 1G switch just
to see if that makes Proxmox cluster and ceph stable. Are 10G interfaces
very loaded?
Cheers
Eneko
On 11/05/2020 10:39, Eneko Lacunza wrote:
Hi Hervé,
This seems like a network issue. What is the network setup in this
cluster? What is logged in syslog about corosync a
Hi Hervé,
This seems like a network issue. What is the network setup in this cluster?
What is logged in syslog about corosync and pve-cluster?
Don't enable HA until you have a stable cluster quorum.
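To gather those logs, something like this should work on a standard PVE install:
journalctl -u corosync -u pve-cluster --since today
pvecm status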
Cheers
Eneko
On 11/5/20 at 10:35, Herve Ballans wrote:
Hi everybody,
I would like to take the o
Dear Proxmox developers,
Following forum post:
https://forum.proxmox.com/threads/linux-kernel-5-4-for-proxmox-ve.66854/
I upgraded from 5.3.18-2 to 5.4 in a new Proxmox 6.1 node to diagnose a
network card issue...
The network card seems broken :-(, but I found that NFS storage doesn't
work with
Hi Gerald,
I'm sorry about your issue. I tried Soyoustart some time ago (3-4 years
ago, I'd say), but my experience was really awful. I had to phone about 5
numbers, talked to people in half the countries in Europe, and finally
the support guy hung up on me.
Probably there's some kind of network p
Hi Gilberto,
Generally, you have to wait until Ceph finishes rebalancing etc. Some
operations can run for hours.
Also, try not to change Ceph parameters without being sure and without
researching documentation and mailing lists first. This is a new cluster and
you have done things most Ceph us
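Progress can be watched from the CLI with:
ceph -s             # overall status, shows recovery/rebalance progress
ceph health detail  # details for each active warning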
Hi Gilberto,
You need to fix your LVM first (not a Proxmox issue).
I see you have a lot of PVs, but no (old) LVs show up. Also, you seem to
be missing at least one PV (/dev/sdb?)
Fix that first, then let's see what output "vgs" and "lvs" give. You
need to see the VM disks with "lvs" first. Then you
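A minimal inspection sketch:
pvs     # missing PVs are reported here
vgs     # a 'p' (partial) in the Attr column means a PV is missing
lvs -a  # all LVs, including internal ones
If a PV is really gone, vgreduce --removemissing <vg> can drop it, at the cost of any LVs that lived on it.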
Hi Alwin,
On 25/3/20 at 11:55, Alwin Antreich wrote:
The easiest way is to destroy and re-create the OSD with a bigger
DB/WAL. The guideline from Facebook for RocksDB is 3/30/300 GB.
It's well below the 3GiB limit in the guideline ;)
For now. ;)
The cluster is 2 years old now, data amount i
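The recreate itself would be something like this on PVE 6 (device names hypothetical; --db_size is in GiB):
pveceph osd destroy <id> --cleanup
pveceph osd create /dev/sdX --db_dev /dev/sdY --db_size 30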
Hi Alwin,
On 24/3/20 at 14:54, Alwin Antreich wrote:
On Tue, Mar 24, 2020 at 01:12:03PM +0100, Eneko Lacunza wrote:
Hi Allwin,
On 24/3/20 at 12:24, Alwin Antreich wrote:
On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote:
We're seeing a spillover issue with
Hi Allwin,
On 24/3/20 at 12:24, Alwin Antreich wrote:
On Tue, Mar 24, 2020 at 10:34:15AM +0100, Eneko Lacunza wrote:
We're seeing a spillover issue with Ceph, using 14.2.8:
[...]
3. ceph health detail
HEALTH_WARN BlueFS spillover detected on 3 OSD
BLUEFS_SPILLOVER B
Hi all,
We're seeing a spillover issue with Ceph, using 14.2.8:
We originally had 1GB rocks.db partition:
1. ceph health detail
HEALTH_WARN BlueFS spillover detected on 3 OSD
BLUEFS_SPILLOVER BlueFS spillover detected on 3 OSD
osd.3 spilled over 78 MiB metadata from 'db' device (1024 M
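To see the numbers behind such a warning, you can query the OSD's admin socket on its node:
ceph daemon osd.3 perf dump bluefs
and compare db_used_bytes with db_total_bytes (slow_used_bytes shows the spillover).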
host" type CPU.
Do you know if it makes any difference whether I use the VirtIO
SCSI driver versus the VirtIO SCSI single driver?
I haven't tried -single, maybe others can comment on this.
Cheers
Eneko
Thank you very much
Rainer
On 17.03.20 at 14:10, Eneko Lacunza wrote:
Hi,
You
Hi,
You can try to enable IO threads and assign multiple Ceph disks to the
VM, then build some kind of RAID0 to increase performance.
Generally speaking, an SSD-based Ceph cluster is considered to perform
well when a VM gets about 2000 IOPS, and factors like CPU 1-thread
performance, network
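A sketch of that setup (VMID and storage name hypothetical):
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi1 ceph-ssd:50,iothread=1
qm set 100 --scsi2 ceph-ssd:50,iothread=1
and inside the guest something like:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc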
Hi all,
On 24/2/20 at 10:10, Eneko Lacunza wrote:
On 20/2/20 at 14:47, Eneko Lacunza wrote:
We tried running the main VM backup yesterday morning, but couldn't
reproduce the issue, although during regular backup all 3 nodes are
doing backups and in the test we only performe
On Mon, 9 Mar 2020 at 9:17, Eneko Lacunza ()
wrote:
Hi Leandro,
On 9/3/20 at 13:11, Leandro Roggerone wrote:
Hi guys, after installing PVE, I would like to create my first VM.
I noticed that the only available format is raw.
Question is:
Is qcow2 deprecated?
What are the differences between them? (I already googled it but it is not 100%
clear).
This i
Hi MJ,
On 29/2/20 at 12:21, mj wrote:
Hi,
We have a failing filestore OSD HDD in our pve 5.4 cluster on ceph
12.2.13.
I have ordered a replacement SSD, but we have the following doubt:
Should we now replace the filestore HDD (journal on an SSD) with a
bluestore SSD? Or should we keep
more resilient, but you have to
understand how it works. You may find that having only two servers with
Ceph storage can be risky when performing maintenance on one of the servers.
Regards
Eneko
Thanks!
On Fri, 28 Feb 2020 at 11:06, Eneko Lacunza ()
wrote:
Hi Leandro,
On 28/2/20
failure during recovery is high).
Regards
Eneko
Regards.
Leandro.
On Fri, 28 Feb 2020 at 5:49, Eneko Lacunza ()
wrote:
Hi Leandro,
On 27/2/20 at 17:29, Leandro Roggerone wrote:
Hi guys, I'm still tuning my 5.5 TB server.
While setting storage options during the install process, I
Hi Leandro,
On 27/2/20 at 17:29, Leandro Roggerone wrote:
Hi guys, I'm still tuning my 5.5 TB server.
While setting storage options during the install process, I set 2000 for hd
size, so I have 3.5 TB free to assign later.
my layout is as follows:
root@pve:~# lsblk
NAME MAJ:
Hi,
On 24/2/20 at 15:41, Falco Kleinschmidt wrote:
On 20.02.20 at 14:47, Eneko Lacunza wrote:
Have you tried setting (bandwidth) limits on the backup jobs and see if
that helps ?
Not really. I've looked through the docs, but it seems I can only affect
write bandwidth on the NAS (onl
Hi Gianni,
On 20/2/20 at 14:47, Eneko Lacunza wrote:
We tried running the main VM backup yesterday morning, but couldn't
reproduce the issue, although during regular backup all 3 nodes are
doing backups and in the test we only performed the backup of the only
VM stored on SSD
Hi Humberto,
We aren't using IPv6 for the VM network, so that can't be the issue.
But thanks for the suggestion! :-)
Eneko
On 21/2/20 at 12:42, Humberto Jose De Sousa via pve-user wrote:
Hi Gianni,
On 20/2/20 at 13:48, Gianni Milo wrote:
See comments below...
Thanks for the comments!
vmbr0 is on a 2x1Gbit bond0
Ceph public and private are on 2x10Gbit bond2
Backup network is IPv6 on 2x1Gbit bond1, to a Synology NAS.
Where's the cluster (corosync) traffic flowing? On v
Hi all,
On February 11th we upgraded a PVE 5.3 cluster to 5.4, then to 6.1.
This is a hyperconverged cluster with 3 servers, redundant network,
Ceph with two storage pools, one HDD based and the other SSD based:
Each server consists of:
- Dell R530
- 1x Xeon E5-2620 8c/16t 2.1GHz
- 64GB RAM
Hi Rainer,
You can switch from the community repo to the enterprise repo without any issue,
just change sources.list.
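E.g. for PVE 6 on Buster that means swapping something like:
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
for:
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
in /etc/apt/sources.list(.d), then running apt update.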
Cheers
Eneko
On 19/2/20 at 13:05, Rainer Krienke wrote:
Hello,
At the moment I run a proxmox cluster with a separate ceph cluster as
storage backend. I do not have a proxmox s
On Thu, 13 Feb 2020 at 09:19, Eneko Lacunza
wrote:
What about:
pvesm list local-lvm
ls -l /dev/pve/vm-110-disk-0
On 13/2/20 at 12:40, Gilberto Nunes wrote:
Qu
--127--disk--0
pve-vm--104--disk--0 pve-vm--115--disk--0
pve-vm--129--disk--0
On Thu, 13 Feb 2020 at 08:38, Eneko Lacunza
wrote:
It's quite strange, what about "l
On Thu, 13 Feb 2020 at 08:11, Eneko Lacunza
wrote:
Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"?
On 13/2/20 at 11:13, Gilberto Nunes wrote:
Hi all
Still in trouble with this issue
cat d
Can you send the output for "lvs" and "cat /etc/pve/storage.cfg"?
On 13/2/20 at 11:13, Gilberto Nunes wrote:
Hi all
Still in trouble with this issue
cat daemon.log | grep "Feb 12 22:10"
Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner...
Feb 12 22:10:01 a2web syst
I think Firefly is too old.
Either you create backups and restore in the new cluster, or you'll have
to upgrade the old clusters at least to Proxmox 5 and Ceph Mimic.
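The backup/restore route would be roughly (storage names hypothetical):
vzdump 100 --storage backupnfs --mode snapshot --compress lzo
# move the dump somewhere the new cluster can read it, then:
qmrestore vzdump-qemu-100-<timestamp>.vma.lzo 100 --storage <target-storage>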
Cheers
On 30/1/20 at 12:59, Fabrizio Cuseo wrote:
I can't afford the long downtime. With my method, the downtime is onl
Eneko Lacunza wrote on 28 January 2020 09:26:
Hi all,
We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS
server for storing file backups.
The setup is as follows:
- Debian 9 VM with 2 disks; system disk on Ceph RBD, file backup data
disk
there are no guarantees for
the results.
G.
On Tue, 28 Jan 2020 at 08:27, Eneko Lacunza wrote:
Hi all,
We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS
server for storing file backups.
The setup is as follows:
- Debian 9 VM with 2 disks; system disk on Ceph RBD, file backup
Hi all,
We have a PVE 5.4 cluster (details below), with a Synology DS1819+ NFS
server for storing file backups.
The setup is as follows:
- Debian 9 VM with 2 disks; system disk on Ceph RBD, file backup data
disk on NFS (6.5TB)
- NFS storage on Synology NAS.
Backup disk was getting full, s
at 11:18, Alexandre DERUMIER wrote:
Hi,
have you upgraded all your nodes to
corosync 3.0.2-pve4
libknet1:amd64 1.13-pve1
?
(available in pve-no-subscription and pve-enterprise repos)
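A quick way to check:
pveversion -v | grep -E 'corosync|knet'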
- Original Mail -
From: "Eneko Lacunz
Hi all,
We are seeing this also with 5.4-3 clusters, a node was fenced in two
different clusters without any apparent reason.
Neither of the clusters had a node fence before...
Cheers
Eneko
On 7/11/19 at 15:35, Eneko Lacunza wrote:
Hi all,
We updated our office cluster to get the
Hi all,
We updated our office cluster to get the patch, but got a node reboot on
October 31st. The node was fenced and rebooted, and everything continued working OK.
Is anyone still experiencing this problem?
Cheers
Eneko
On 2/10/19 at 18:09, Hervé Ballans wrote:
Hi Alexandre,
We encounter exact
Hi Marco,
I don't understand why you are asking about untested/undocumented
migration procedures.
Use the documented, tested one. It works, has been proven, and has zero
downtime.
Don't waste time :-)
Cheers
On 27/8/19 at 17:45, Marco Gaiarin wrote:
Why an intermediate passage via '
Hi,
On 22/8/19 at 12:26, Patrick Westenberg wrote:
will the subscription check work if hosts have private IPs only and are
not accessible from the web?
Yes, it works if the hosts have access to the internet via HTTP/HTTPS (i.e.
apt-get update works, for example).
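If the hosts can only reach the web through a proxy, the subscription check can use the proxy configured in /etc/pve/datacenter.cfg (proxy address hypothetical):
http_proxy: http://10.0.0.1:3128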
Cheers
Eneko
Hi,
So what disks/RAID controller are on the server? :)
My guess is that a disk has failed :) Did you try smartctl?
Also, I think attachments are stripped off :)
Cheers
On 22/8/19 at 10:03, lord_Niedzwiedz wrote:
CPU usage 0.04% of 32 CPU(s)
IO delay 20.38% !!
Load avera
Hi Dominik,
On 22/8/19 at 9:50, Dominik Csapak wrote:
On 8/21/19 2:37 PM, Eneko Lacunza wrote:
# pveceph createosd /dev/sdb -db_dev /dev/sdd
device '/dev/sdd' is already in use and has no LVM on it
this sounds like a bug.. can you open one on bugzilla.proxmox.co
Hi all,
I'm reporting here an issue that I think should be handled somehow by
Proxmox, maybe with extended migration notes.
Starting point:
- Proxmox 5.4 cluster with Ceph Server. Proxmox nodes have 1 SSD + 3
HDD. System and Ceph OSD journals (filestore or bluestore db) are on the
SSD.
Thi
Here it is:
https://bugzilla.proxmox.com/show_bug.cgi?id=2340
On 21/8/19 at 14:03, Tim Marx wrote:
Hi,
thanks for investigating. Please file a bug at https://bugzilla.proxmox.com/,
this will help us to keep track of it.
Eneko Lacunza wrote on 21 August 2019 13:27:
Hi
crush
remove" and now it works!
Shall I report a bug? I can provide a problematic JSON if needed.
Thanks a lot
Eneko
On 21/8/19 at 10:41, Eneko Lacunza wrote:
Hi all,
We have just upgraded our office 5-node cluster from 5.4 to 6.0.
Cluster has 15 OSDs in 4 of the nodes.
Everythi
Hi all,
We have just upgraded our office 5-node cluster from 5.4 to 6.0. Cluster
has 15 OSDs in 4 of the nodes.
Everything was quite smooth and we have even cleared almost all Ceph
warnings (one BlueFS spillover left yet). Thanks a lot for the excellent
work!
We have noticed though that in
xport-diff) command over ssh.
On Mon, 19 Aug 2019, 12:26 Eneko Lacunza, wrote:
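The export-diff round trip is roughly (pool/image/snapshot names hypothetical):
rbd export-diff pool/vm-100-disk-0@snap1 - | ssh root@othercluster 'rbd import-diff - pool/vm-100-disk-0'
repeated per snapshot with --from-snap for the increments.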
Hi Uwe,
On 19/8/19 at 10:14, Uwe Sauter wrote:
is it possible to move a VM's disks from one Ceph cluster to another,
including all snapshots that those disks have? The GUI
doesn't let me do it but is th
Hi Uwe,
On 19/8/19 at 10:14, Uwe Sauter wrote:
is it possible to move a VM's disks from one Ceph cluster to another, including
all snapshots that those disks have? The GUI
doesn't let me do it but is there some commandline magic that will move the
disks and all I have to do is edit the V
Hi,
On 18/7/19 at 13:43, mj wrote:
On 7/17/19 2:47 PM, Alwin Antreich wrote:
I'd like to add, though it was not explicitly asked. While it is technically
possible, the cluster will lose its enterprise support. As Ceph is under
support on Proxmox VE nodes too.
Hmm. That is a disappointing conse
Hi Martin,
Thanks a lot for your hard work, Maurer-ITans and the rest of developers...
It seems that in PVE 6.0, with corosync 3.0, multicast won't be used by
default? I think it could be interesting to have a PVE_6.x cluster wiki
page to explain a bit the new cluster, max nodes, ...
Also, t
You need a cluster file system to be able to do this (GFS2 for example).
ext4 can't be mounted by two systems at the same time.
https://en.wikipedia.org/wiki/GFS2
Maybe you can consider using NFS instead...
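Something like this would add it (server/export hypothetical):
pvesm add nfs shared-nfs --server 192.168.1.20 --export /export/shared --content images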
Cheers
On 2/7/19 at 14:43, Hervé Ballans wrote:
Dear list,
Sorry if the questio
Hi,
root@server5:/var/lib/vz# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev 48G 0 48G 0% /dev
tmpfs 9.5G 9.6M 9.5G 1% /run
/dev/mapper/pve-root 96G 1.8G 95G 2% /
tmpfs 48G 37M 48G 1% /dev/shm
tmpfs
Hi Rutger,
On 24/6/19 at 11:21, Rutger Verhoeven wrote:
I recently installed a proxmox server. However the storage usage is
tremendous:
(see attachment)
root@server5:/var/lib/vz# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev 48G 0 48G 0% /dev
Hi Alwin,
On 29/5/19 at 11:59, Alwin Antreich wrote:
I have noticed that our office Proxmox cluster has a Bluestore OSD with a
very small db partition. This OSD was created from the GUI on 12th March this
year:
This node has 4 OSDs:
- osd.12: bluestore, all SSD
- osd.3: bluestore, SSD db + s
Hi all,
I have noticed that our office Proxmox cluster has a Bluestore OSD with
a very small db partition. This OSD was created from the GUI on 12th March
this year:
This node has 4 OSDs:
- osd.12: bluestore, all SSD
- osd.3: bluestore, SSD db + spinning
- osd.2: filestore, SSD journal + spinning
the pve wiki? Have you tried UDPU
instead of multicast as last option ?
No idea about missing rrd graphs...
On Thu, 16 May 2019 at 16:41, Eneko Lacunza wrote:
Hi all,
In a 3-node cluster, we're experiencing a strange clustering problem.
Sometimes, the first node drops out of quorum, usuall
, but haven't tried UDPU, yet.
No idea about missing rrd graphs...
This is the strange part, and the reason for my mail. Otherwise I'd be
preparing maintenance windows to change node's network config right
away... :)
Thanks a lot
Eneko
On Thu, 16 May 2019 at 16:41, Eneko Lacunza
Hi all,
In a 3-node cluster, we're experiencing a strange clustering problem.
Sometimes, the first node drops out of quorum, usually for some hours,
only to return back to quorum later.
During the last 2 weeks, this has happened 7 times.
Additionally, one time the second and third node dropp
Hi,
I wonder how much money you paid Maurer IT for their excellent open
source product, and Red Hat, for the very same?
Did you know that you can get support tickets from Maurer IT?
I guess you'll need them the next time you need help...
Cheers
On 14/5/19 at 2:20, Saint Michael wrote
Hi Alwin,
On 22/3/19 at 15:04, Alwin Antreich wrote:
On a point release, an ISO is generated and the release info is needed
for that.
The volume of package updates alone makes a separate announcement of
changes sen
Hi,
On 22/3/19 at 9:59, Alwin Antreich wrote:
On Fri, Mar 22, 2019 at 09:03:22AM +0100, Eneko Lacunza wrote:
On 22/3/19 at 8:35, Alwin Antreich wrote:
On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote:
We have removed an OSD disk from a server in our office cluster
Hi Alwin,
On 22/3/19 at 8:35, Alwin Antreich wrote:
On Thu, Mar 21, 2019 at 03:58:53PM +0100, Eneko Lacunza wrote:
We have removed an OSD disk from a server in our office cluster, removing
partitions (with --cleanup 1) and that has made the server unable to boot
(we have seen this in 2
Hi all,
We have removed an OSD disk from a server in our office cluster,
removing partitions (with --cleanup 1) and that has made the server
unable to boot (we have seen this in 2 servers in a row...)
Looking at the command output:
--- cut ---
root@sanmarko:~# pveceph osd destroy 5 --cleanup
Or vmbr0 has no interface connected to a DHCP server. :-)
On 13/3/19 at 23:49, Craig Jones wrote:
Sounds like DHCP isn't enabled on the interface in the guest OS.
On 3/13/2019 5:07 PM, Gilberto Nunes wrote:
Hi there
I am facing a weird problem with the NIC in Windows Server.
When I use vmbr0, t
Hi
On 26/2/19 at 10:41, Thomas Lamprecht wrote:
On 2/25/19 6:22 PM, Frederic Van Espen wrote:
We're designing a new datacenter network where we will run proxmox nodes on
about 30 servers. Of course, shared storage is a part of the design.
What kind of shared storage would anyone recommen
FYI
Hi Denis,
On 13/2/19 at 23:28, Denis Morejon wrote:
I note that sharing the db file, even using the multicast protocol, could
put a limit on the maximum number of members. Any thoughts about a
centralized db paradigm? How many members have you put together?
Docs talk about 32 nodes:
https:/
Hi Gilberto,
No, you can't do that.
You must first restore and then resize the disk (I think you must do it
from the command line). Remember to first reduce the partitions/filesystems
on that disk.
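A rough sketch for local-lvm, assuming the filesystem inside the guest has already been shrunk well below the new size (otherwise you lose data; names hypothetical):
lvreduce -L 100G /dev/pve/vm-101-disk-0
qm rescan --vmid 101   # updates the size in the VM config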
Cheers
On 6/2/19 at 11:18, Gilberto Nunes wrote:
Hi list
I have here a VM which has direct
Hi,
I'd like to know if the Proxmox team will look at bug #1660, which is almost 1
year old; I provided the requested info, other users made additional
tests, and now that the AMD EPYC platform is a very interesting one I think
it will be more common to have mixed Intel/AMD clusters?
Thanks a lot
Enek
Just restrict "local-zfs" storage to node1 (can be done from WebGUI)
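From the CLI that would be something like:
pvesm set local-zfs --nodes node1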
On 23/1/19 at 16:37, lord_Niedzwiedz wrote:
OK, when I added this on node2:
lvmthin: local-lvm
thinpool data
vgname pve
content rootdir,images
I see local-lvm on node2 ;-)
But I see local-zfs too
Hi,
It seems you have VMs on host2. Please read:
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Adding_nodes_to_the_Cluster
Cheers
On 23/1/19 at 15:37, lord_Niedzwiedz wrote:
I'm doing this for the first time.
I created the cluster OK on host1:
pvecm create klaster1
pvecm status
And on host2 i
Hi Gilberto,
Are you using Bluestore? What version of Ceph?
On 16/1/19 at 13:11, Gilberto Nunes wrote:
Hi there
Has anybody else experienced high memory usage in a Proxmox Ceph storage server?
I have a 6-node PVE Ceph cluster and after the upgrade, I have noticed this high
memory usage...
All servers have 16
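If this is Bluestore on a recent Luminous (12.2.8+) or later, the knob to look at is probably osd_memory_target in ceph.conf, e.g.:
[osd]
osd_memory_target = 3221225472
(~3 GiB per OSD; value hypothetical). Older Bluestore versions use bluestore_cache_size instead.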
Hi,
I see the same behaviour with an EMC VNXe3200 (two priorities).
I assume it is the right thing to do, the host really only has 2x1Gbit
channels to storage... :)
On 7/1/19 at 10:37, Marco Gaiarin wrote:
Hi! Sten Aus
On that day you wrote...
As this is my third storage for not
Hi Alwin,
On 17/12/18 at 10:22, Alwin Antreich wrote:
b) depends on the workload of your nodes. Modern server hardware has
enough power to be able to run multiple services. It all comes down to
having enough resources for each domain (eg. Ceph, KVM, CT, host).
I recommend using a simple
Hi,
On 16/12/18 at 17:16, Frank Thommen wrote:
I understand that with the new PVE release PVE hosts (hypervisors)
can be
used as Ceph servers. But it's not clear to me if (or when) that makes
sense. Do I really want to have Ceph MDS/OSD on the same hardware
as my
hypervisors? Doesn't
tolerant (HA) system and other
network traffic may disturb corosync.
I'd recommend a thorough reading of the document quoted above.
Don't use vmbr0 for cluster traffic.
Don't use any vmbr for cluster traffic.
Stefan
On Dec 5, 2018, at 13:34, Eneko Lacunza
mailto:elacu...@binovo.e
e ipv6 nd and
nd-ra usage.
https://pve.proxmox.com/wiki/Multicast_notes has some more notes and examples
around multicast_querier
kind regards
Ronny Aasen
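The querier bit from those notes boils down to something like:
echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier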
On 04.12.2018 17:54, Eneko Lacunza wrote:
Hi all,
Seems I found the solution.
eth3 on proxmox1 is a Broadcom 1Gbit card connected to
although not used for
multicast, was confusing someone...
Thanks a lot
Eneko
kind regards
Ronny Aasen
On 04.12.2018 17:54, Eneko Lacunza wrote:
Hi all,
Seems I found the solution.
eth3 on proxmox1 is a Broadcom 1Gbit card connected to an HPE switch; it
is VLAN 10 untagged on the switch end.
good; cluster is stable and omping is happy too after 10 minutes :)
It is strange because multicast is on VLAN 1 network...
Cheers and thanks a lot
Eneko
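For reference, the usual wiki test is along the lines of:
omping -c 10000 -i 0.001 -F -q node1 node2 node3
omping -c 600 -i 1 -q node1 node2 node3   # the ~10 minute test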
On 4/12/18 at 16:18, Eneko Lacunza wrote:
Hi Marcus,
On 4/12/18 at 16:09, Marcus Haarmann wrote:
Hi,
you did not provide details
dev eth4.100
Cluster is running on vmbr0 network (192.168.0.0/24)
Cheers
Marcus Haarmann
Von: "Eneko Lacunza"
An: "pve-user"
Gesendet: Dienstag, 4. Dezember 2018 15:57:10
Betreff: [PVE-User] Multicast problems with Intel X540 - 10Gtek network card?
Hi all,
We have j
Hi all,
We have just updated a 3-node Proxmox cluster from 3.4 to 5.2, Ceph
Hammer to Luminous and the network from 1 Gbit to 10 Gbit... one of the
three Proxmox nodes is new too :)
Generally all was good and VMs are working well. :-)
BUT, we have some problems with the cluster; the proxmox1 nod
Hi Thomas,
On 23/10/18 at 8:02, Thomas Lamprecht wrote:
On 10/22/18 5:29 PM, Eneko Lacunza wrote:
On 22/10/18 at 17:17, Eneko Lacunza wrote:
I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a PVE 4 to
5 migration:
https://pve.proxmox.com
Hi,
On 22/10/18 at 17:17, Eneko Lacunza wrote:
I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a
PVE 4 to 5 migration:
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous
I see that after the procedure, there would be 2 repositories with
ceph packages
Hi all,
I'm looking at the Ceph Jewel to Luminous wiki page as preparation for a
PVE 4 to 5 migration:
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous
I see that after the procedure, there would be 2 repositories with ceph
packages; the official ceph.com repo and the PVE repo.
Is this nece
Hi Ronny,
On 19/10/18 at 11:22, Ronny Aasen wrote:
On 10/19/18 10:05 AM, Eneko Lacunza wrote:
Hi all,
Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4
cluster, from Hammer to Jewel following the procedure in the wiki:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
It
Hi all,
Yesterday we performed a Ceph upgrade in a 3-node Proxmox 4.4 cluster,
from Hammer to Jewel following the procedure in the wiki:
https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel
It went smoothly for the first two nodes, but we had a grave problem
with the 3rd, because when shutting do
You can do so from the CLI:
ceph osd crush reweight osd.N <weight>
https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/
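For example, to give osd.5 the CRUSH weight of a 4TB disk (the weight is roughly the size in TiB):
ceph osd crush reweight osd.5 3.64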
On 31/08/18 at 13:01, Gilberto Nunes wrote:
Thanks a lot for all this advice guys.
I'm still learning Ceph.
So I have a doubt regarding
Hi Gilberto,
It's technically possible. I don't know what performance you expect for
those 2 SQL servers though (don't expect much).
Cheers
On 30/08/18 at 16:47, Gilberto Nunes wrote:
Hi there
Is it possible to create a scenario with 3 PowerEdge R540s, with Proxmox and
Ceph.
The server
On 30/08/18 at 14:37, Mark Schouten wrote:
On Thu, 2018-08-30 at 09:30 -0300, Gilberto Nunes wrote:
Any advice to, at least, mitigate the low performance?
Balance the number of spinning disks and the size per server. This will
probably be the safest.
It's not said that not balancing degr
You should change the weight of the 8TB disks, so that they have the same
as the other 4TB disks.
That should fix the performance issue, but you'd waste half the space on
those 8TB disks :)
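I.e., for each 8TB OSD, something like:
ceph osd crush reweight osd.N 3.64
(3.64 being roughly the CRUSH weight of the 4TB disks).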
On 23/08/18 at 00:19, Brian wrote:
It's really not a great idea because the larger drives will te
Hi,
On 01/08/18 at 13:57, Alwin Antreich wrote:
On Wed, Aug 01, 2018 at 01:40:34PM +0200, Eneko Lacunza wrote:
On 01/08/18 at 12:56, Alwin Antreich wrote:
On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote:
Hi all,
This morning there was a quite long blackout which
Hi Alwin,
On 01/08/18 at 12:56, Alwin Antreich wrote:
On Wed, Aug 01, 2018 at 11:02:18AM +0200, Eneko Lacunza wrote:
Hi all,
This morning there was a quite long blackout which powered off a cluster of
3 proxmox 5.1 servers.
All 3 servers are the same make and model, so they need the same
Hi all,
This morning there was a quite long blackout which powered off a cluster
of 3 proxmox 5.1 servers.
All 3 servers are the same make and model, so they need the same amount of
time to boot.
When the power came back, servers started correctly but corosync
couldn't set up a quorum. Events
Hi,
I'm sorry for your troubles, I hope you had good backups.
You should never share storage between clusters.
-> If you must, or it's convenient to do so, just don't repeat the VM IDs...
For example on an NFS server, another thing you can do is just use a
different directory for each cluster; wh
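E.g., per-cluster NFS storage entries along these lines (names hypothetical):
pvesm add nfs backup-c1 --server nas.example.com --export /volume1/pve-c1 --content backup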
Hi Gregor,
On 03/06/18 at 14:39, Gregor Burck wrote:
I'm migrating different WS2012R2 machines to Proxmox. I have a strange issue.
Sometimes one or another client forgets its nameserver entry.
I haven't seen things like this before, so I think it could be related to the Proxmox
environment?
I think this is a namese
Hi,
On 30/03/18 at 05:05, Lindsay Mathieson wrote:
Ceph has rather larger overheads, is a much bigger PITA to admin, and does not perform
as well on whitebox hardware – in fact the Ceph crowd's standard reply to issues is to
spend big on enterprise hardware, and it is far less flexible.
Nonsense. We use wh
Hi all,
We have been setting up a new 3-node HA cluster with Ceph storage, and
migrating VMs from VMWare to Proxmox for the last 3 weeks.
Overall the setup and migration has been quite painless; I also
appreciated the ability to automatically create Proxmox storages after
Ceph pool creation,
crashes both times ;)
We have seen the problem also with Ubuntu 14.04 kernel 3.16.0-30-generic...
Cheers
Eneko
bye
Harald
On 07.02.2018 at 09:33, Eneko Lacunza wrote:
https://bugzilla.proxmox.com/show_bug.cgi?id=1660
On 07/02/18 at 09:22, Eneko Lacunza wrote:
Hi,
I finally reproduced t
https://bugzilla.proxmox.com/show_bug.cgi?id=1660
On 07/02/18 at 09:22, Eneko Lacunza wrote:
Hi,
I finally reproduced the problem with a Ubuntu 14.04.2 LTS VM, so it is not
a Debian 9-only problem.
Is there anything I need to do to report this bug to Proxmox/upstream?
On 06/02/18 at 12:07, Eneko