Can someone please suggest a course of action moving forward?
I don't feel comfortable making changes to the crush map without a better
understanding of what exactly is going on here.
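Before editing anything, the crush map can be inspected read-only. A minimal sketch (filenames are just examples):

```shell
# Dump the compiled CRUSH map and decompile it to plain text for inspection.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Compare the decompiled map against what the cluster reports.
ceph osd tree
```

Nothing here modifies the cluster; the decompiled `crushmap.txt` can be diffed against the `osd tree` output to spot the missing host or osd entry.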
The new osd appears in the 'osd tree' output but not in the current crush
map. The server that hosts the osd is not
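If the host bucket really is absent from the crush map, one common fix is to create it and add the osd by hand. A hedged sketch (the hostname, osd id, and weight below are placeholders, not values from this thread):

```shell
# Create a host bucket, place it under the default root, then add the
# new osd with a non-zero weight so CRUSH will map data to it.
ceph osd crush add-bucket cephnode01 host
ceph osd crush move cephnode01 root=default
ceph osd crush add osd.107 1.81898 host=cephnode01
```

The weight is conventionally the device size in TiB; a weight of zero means CRUSH will never place data on the osd.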
I ended up starting from scratch and doing a purge and purgedata on that
host using ceph-deploy; after that things seemed to go better.
The osd is up and in at this point; however, when the osd was added to
the cluster, no data was being moved to it.
Here is a copy of my current crush
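A zero crush weight is a common reason a new osd that is up and in still receives no data. One way to check and, if needed, correct it (the osd id and weight here are examples only):

```shell
# Show per-osd CRUSH weight and utilization in tree form.
ceph osd df tree

# If the new osd's weight is 0, give it a weight roughly equal to its
# size in TiB so backfill begins.
ceph osd crush reweight osd.107 1.81898
```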
Hello,
I am trying to use ceph-deploy to add some new osds to our cluster. I
have used this method over the last few years to add all of our 107
osds, and things have seemed to work quite well.
One difference this time is that we are going to use a pci nvme card to
journal the 16 disks in
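For reference, pointing an osd's journal at a partition on a shared NVMe card used the `host:data:journal` form in older ceph-deploy releases. A sketch with hypothetical device and host names:

```shell
# Old-style ceph-deploy syntax: one data disk, journal on an NVMe
# partition shared by several osds (device names are examples).
ceph-deploy osd create cephnode01:/dev/sdb:/dev/nvme0n1p1
```

Each of the 16 disks would get its own journal partition on the NVMe device; the partitions must exist before ceph-deploy is run.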