Hi Christoph,

On 12/7/21 at 9:01, Christoph Weber wrote:
For the boot disks we use 2 or 3 mirrored ZFS disks to be sure...
Sounds like a good idea in retrospect ;-)
Until now I was just under the impression that the system disk does not contain 
very important data and can easily be replaced ...
I really think that is the case :)
@Eneko Lacunza
Do you have Ceph OSD journal/DB/WALs on the system disk?

No, we have everything belonging to one OSD on the corresponding SSD drive.
Ok this makes things simpler ;)
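
If you ever want to double-check where an OSD's block/DB/WAL live, running this on the OSD node lists them per OSD:

    # shows the block / db / wal devices for every OSD on this node
    ceph-volume lvm list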
Moving OSDs from node3 to node6 would trigger data movement, but I'd go
Will it? I thought it might just relocate the OSDs' location in the crush map 
from node3 to node6 when I shut down prodve3, remove the disks and reinsert 
them in node6? At least that was my impression from a thread here on the 
mailing list a few weeks ago.

You'll have to be very careful managing the Ceph crush map to avoid data movement. You'll have to set the norebalance flag and be sure the topology is the same before and after the changes... :)
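
Roughly what I mean (just a sketch; osd.3 and prodve6 are placeholder names, and I'd double-check the exact syntax on your Ceph version before relying on it):

    # pause rebalancing and keep OSDs from being marked out while you work
    ceph osd set norebalance
    ceph osd set noout

    # record the current topology so you can compare afterwards
    ceph osd tree

    # after moving the disks and booting node6, the OSDs normally re-register
    # under the new host bucket on start; if one doesn't, move it by hand, e.g.:
    ceph osd crush move osd.3 host=prodve6

    # check the tree again: same buckets and same weights as before
    ceph osd tree

    # only clear the flags once everything matches
    ceph osd unset noout
    ceph osd unset norebalance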

node3, "pvecm delnode" it, reinstall with a new system disk and rejoin the cluster.
This seems to be the best solution.

Maybe we will just retire the node, as it is nearing the end of its planned 
lifetime ...

I had to reinstall one or two nodes in the past and kept the name and IP without any trouble.
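
For what it's worth, the steps were roughly these (prodve3 is just the example name here, and the IP placeholder is any node that stays in the cluster):

    # on one of the remaining nodes, once the old node is powered off for good:
    pvecm delnode prodve3

    # reinstall with the new system disk (same name and IP), then on the
    # freshly installed node join it back to the cluster:
    pvecm add <IP-of-an-existing-cluster-node>

    # verify quorum and membership
    pvecm status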

Cheers

Eneko Lacunza
Technical director
Binovo IT Human Project

Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/


