1. ceph osd purge {id} --yes-i-really-mean-it

2. Navigate to the host where you keep the master copy of the cluster’s
   ceph.conf file.

   ssh {admin-host}
   cd /etc/ceph
   vim ceph.conf

3. Remove the OSD entry from your ceph.conf file (if it exists).

   [osd.1]
   host = {hostname}
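As a concrete sketch, assuming the OSD being removed is osd.1 and the host
uses systemd, the full manual sequence from those docs looks roughly like
this (purge needs Luminous or later; the last three commands are the older
equivalent):

   ceph osd out 1
   systemctl stop ceph-osd@1
   ceph osd purge 1 --yes-i-really-mean-it

   # pre-Luminous equivalent of purge:
   ceph osd crush remove osd.1
   ceph auth del osd.1
   ceph osd rm 1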
You're thinking "proxmox". Try thinking "ceph" instead. Sure, ceph runs with
proxmox, but what you're really doing is using a pretty GUI that sits on top of
debian, running ceph and kvm.
Anyway, perhaps the GUI does all the steps needed? Perhaps not.
If it were me, I'd NOT reinstall, as
Hi, Thanks for your response!
No, I didn't do any of that on the cli - I just did stop in the webgui,
then out, then destroy.
Note that there were no VMs or data at all on this test ceph cluster - I
had deleted it all before doing this. I was basically just removing it all
so the OSD numbers
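One thing worth checking after a GUI-only stop/out/destroy (a suggestion,
not a confirmed diagnosis) is whether anything was left behind in the
cluster and auth maps before recreating the OSDs, e.g.:

   ceph osd tree
   ceph auth ls
   ceph -s

A stale osd or auth entry from a destroyed OSD is one plausible reason a
freshly created OSD could behave oddly.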
http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#removing-osds-manual
Are you sure you followed the directions?
From: pve-user on behalf of Mark Adams
Sent: Monday, July 2, 2018 4:05:51 PM
To: pve-user@pve.proxmox.com
Subject: [PVE-User]
I'm currently running the newest 5.2-1 version and had a test cluster which
was working fine. I have since added more disks, first stopping, then
setting out, then destroying each OSD so I could recreate it all from
scratch.
However, when adding a new OSD (either via GUI or pveceph CLI) it seems to
show a
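For reference, recreating an OSD from the CLI on PVE 5.x looks roughly like
this (a sketch; /dev/sdb is a placeholder for the actual disk):

   pveceph createosd /dev/sdb
   ceph osd tree    # check that the new OSD appears with the expected id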
On 7/2/18 4:16 AM, Vinicius Barreto wrote:
> Hello,
> please, could anyone tell me which service is responsible for
> mounting the NFS storages during Proxmox startup?
> Note: they were added via the GUI or pvesm.
>
We have logic that activates only the volumes we need.
E.g., when a VM is
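For context, an NFS storage added via the GUI or pvesm ends up as an entry
in /etc/pve/storage.cfg rather than in /etc/fstab, so there is no classic
mount entry for it at boot. A minimal sketch of such an entry (all names
and addresses are illustrative):

   nfs: nfs-backup
           server 192.168.1.10
           export /export/backup
           path /mnt/pve/nfs-backup
           content backup

   # roughly equivalent pvesm invocation (illustrative values):
   pvesm add nfs nfs-backup --server 192.168.1.10 \
       --export /export/backup --content backup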