Hi,
We skipped stage 1 and replaced the UUIDs of the old disks with the new
ones in policy.cfg.
We ran salt '*' pillar.items and confirmed that the output was correct:
it showed the new UUIDs in the correct places.
Next we ran salt-run state.orch ceph.stage.3.
PS: All of the above ran
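For reference, the sequence described above can be sketched as a dry-run script. The run() wrapper is an illustrative assumption that only prints each command rather than executing it:

```shell
#!/bin/sh
# Dry-run sketch of the DeepSea steps described above. run() only prints
# each command; drop the wrapper to execute on a real Salt master.
run() { echo "+ $*"; }

# Stage 1 (discovery) was skipped; the new disk UUIDs were edited
# directly into policy.cfg instead.
run salt '*' pillar.items             # verify the new UUIDs show up
run salt-run state.orch ceph.stage.3  # deploy/redeploy the OSDs
```

One thing worth double-checking: depending on the DeepSea version, changes to policy.cfg are normally pushed into the pillar by ceph.stage.2, so it may need to run between the edit and stage 3 (the message above does not say whether it was run).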
Hi David,
Removal process/commands ran as follows:
#ceph osd crush reweight osd. 0
#ceph osd out
#systemctl stop ceph-osd@
#umount /var/lib/ceph/osd/ceph-
#ceph osd crush remove osd.
#ceph auth del osd.
#ceph osd rm
#ceph-disk zap /dev/sd??
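Filled in with a hypothetical OSD id (12) and device (/dev/sdx), neither taken from this thread, the removal sequence above reads as follows; the run() wrapper only prints each command:

```shell
#!/bin/sh
# Dry-run sketch of the removal sequence above; run() only prints each
# command. OSD_ID=12 and DEV=/dev/sdx are hypothetical placeholders.
run() { echo "+ $*"; }
OSD_ID=12
DEV=/dev/sdx

run ceph osd crush reweight osd.$OSD_ID 0  # drain data off the OSD first
run ceph osd out $OSD_ID                   # mark it out of the cluster
run systemctl stop ceph-osd@$OSD_ID        # stop the daemon
run umount /var/lib/ceph/osd/ceph-$OSD_ID  # unmount its data directory
run ceph osd crush remove osd.$OSD_ID      # remove it from the CRUSH map
run ceph auth del osd.$OSD_ID              # delete its cephx key
run ceph osd rm $OSD_ID                    # remove the OSD entry
run ceph-disk zap $DEV                     # wipe the old disk
```

On Luminous and later, `ceph osd purge $OSD_ID --yes-i-really-mean-it` combines the crush remove, auth del, and osd rm steps into one command.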
Adding them back in:
We skipped stage 1 and
Also, what commands did you run to remove the failed HDDs, and which
commands have you run so far to add their replacements back in?
On Sat, Feb 16, 2019 at 9:55 PM Konstantin Shalygin wrote:
> I recently replaced failed HDDs and removed them from their respective
> buckets as per procedure.
>
>
Hi Everyone,
I recently replaced failed HDDs and removed them from their respective
buckets as per procedure.
But I’m now facing an issue when trying to place the new ones back into
the buckets: I’m getting ‘osd nr not found’, or ‘file or directory not
found’, or a command syntax error.
I
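Errors like ‘osd nr not found’ usually mean a command was given an OSD id that no longer exists (or none at all) after the removal. A minimal sketch, reusing the dry-run wrapper and a hypothetical id of 12, of checking which ids the cluster still knows about before re-adding:

```shell
#!/bin/sh
# Sketch: confirm which OSD ids still exist before re-adding disks.
# run() only prints each command; id 12 is a hypothetical example.
run() { echo "+ $*"; }

run ceph osd tree    # CRUSH buckets plus any leftover osd.N entries
run ceph osd ls      # bare list of existing OSD ids
run ceph osd find 12 # locate a specific id; errors if it is gone
```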