Hi,

We had a broken disk and needed to replace it. In the process we noticed something that surprised us:

Steps taken (CLI equivalent sketched below the list):

- set the noscrub / nodeep-scrub flags from the cli
- stop the OSD from the pve gui
- out the OSD from the pve gui
- wait for data rebalance and HEALTH_OK
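
(For reference, the CLI equivalent of those steps is presumably something like the following; osd.7 is just an example id, the flag names are the standard ceph ones:)

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  systemctl stop ceph-osd@7     # what the pve gui stop button does, I assume
  ceph osd out 7                # reweight goes to 0, data drains off the OSD
  ceph -s                       # watch until HEALTH_OK again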

Once HEALTH_OK again:
- remove the OSD from the pve gui

But at this point ceph started rebalancing again, which (to us) was unexpected.

It is now rebalancing nicely, but can we prevent this second round of data movement next time? And if so, how?
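
Our current guess: "out" only sets the reweight to 0, while the OSD's CRUSH weight stays in the map, so the final removal still changes the host's total CRUSH weight and triggers a second round of movement. If that is right, zeroing the CRUSH weight up front should front-load all the movement, something like (again with osd.7 as an example):

  ceph osd crush reweight osd.7 0   # all data movement happens here
  # wait for HEALTH_OK, then the actual removal should move nothing:
  ceph osd out 7
  systemctl stop ceph-osd@7
  ceph osd crush remove osd.7
  ceph auth del osd.7
  ceph osd rm 7

Can someone confirm whether that is the right way to do it?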

And a question on adding the new OSD:

I tried putting in the new filestore OSD with an SSD journal, but it failed with this:

create OSD on /dev/sdj (xfs)
using device '/dev/sdl' for journal
ceph-disk: Error: journal specified but not allowed by osd backend
TASK ERROR: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph 
--cluster-uuid 1397f1dc-7d94-43ea-ab12-8f8792eee9c1 --journal-dev /dev/sdj 
/dev/sdl' failed: exit code 1

The device /dev/sdl is my journal SSD. It already has a partition for each journal (7 partitions currently), so of course pve should create a new partition for this journal and *not* use the whole disk, as the above command appears to try to do?
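
(My reading, and this is just a guess: the error may not be about the journal partitions at all. When ceph-disk is handed a whole device as journal, it normally creates a new partition on it rather than taking the whole disk. "journal specified but not allowed by osd backend" looks like what luminous' ceph-disk prints when the default backend is bluestore and a journal is passed, in which case filestore would have to be requested explicitly, roughly:)

  ceph-disk prepare --filestore --zap-disk --fs-type xfs \
      --cluster ceph --cluster-uuid 1397f1dc-7d94-43ea-ab12-8f8792eee9c1 \
      --journal-dev /dev/sdj /dev/sdl

Is that what pve should be doing here?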

Any ideas?

MJ