Hi,
I finally found a working way to replace the failed OSD. Everything looks
fine again.
Thanks again for your comments and suggestions.
Dietmar
On 01/12/2018 04:08 PM, Dietmar Rieder wrote:
> Hi,
>
> can someone comment on/confirm my planned OSD replacement procedure?
>
> It would be very helpful for me.
Hi,
can someone comment on/confirm my planned OSD replacement procedure?
It would be very helpful for me.
Dietmar
On January 11, 2018 17:47:50 CET, Dietmar Rieder wrote:
>Hi Alfredo,
>
>thanks for your comments, see my answers inline.
>
>On 01/11/2018 01:47 PM, Alfredo Deza wrote:
Hi Konstantin,
thanks for your answer; see my reply to Alfredo, which includes your
suggestions.
~Dietmar
On 01/11/2018 12:57 PM, Konstantin Shalygin wrote:
>> Now I wonder: what is the correct way to replace a failed OSD block disk?
>
> The generic way for maintenance (e.g. a disk replacement) is to rebalance by
> changing the OSD weight:
Hi Alfredo,
thanks for your comments, see my answers inline.
On 01/11/2018 01:47 PM, Alfredo Deza wrote:
> On Thu, Jan 11, 2018 at 4:30 AM, Dietmar Rieder
> wrote:
>> Hello,
>>
>> we have a failed OSD disk in our Luminous v12.2.2 cluster that needs to
>> get replaced.
On Thu, Jan 11, 2018 at 4:30 AM, Dietmar Rieder
wrote:
> Hello,
>
> we have a failed OSD disk in our Luminous v12.2.2 cluster that needs to
> get replaced.
>
> The cluster was initially deployed using ceph-deploy on Luminous
> v12.2.0. The OSDs were created using
>
>
Now I wonder: what is the correct way to replace a failed OSD block disk?
The generic way for maintenance (e.g. a disk replacement) is to rebalance by
changing the OSD weight:
ceph osd crush reweight osd.<id> 0
The cluster will then migrate the data off this OSD. When it is HEALTH_OK
again, you can safely remove the OSD:
ceph osd out osd.<id>
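To make this concrete, here is a minimal sketch of the whole drain-and-remove
cycle as I understand the suggestion; the OSD id (osd.5) is just a placeholder,
and stopping the daemon plus the final purge are standard Luminous steps that I
am assuming here, not something spelled out above:

# drain the failed OSD (placeholder id osd.5)
ceph osd crush reweight osd.5 0
# watch the rebalance until the cluster reports HEALTH_OK again
ceph -s
# then take it out, stop the daemon and remove it from the cluster
ceph osd out osd.5
systemctl stop ceph-osd@5
ceph osd purge 5 --yes-i-really-mean-it

The purge subcommand (new in Luminous) combines the older crush remove,
auth del and osd rm steps into one call.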
Hello,
we have a failed OSD disk in our Luminous v12.2.2 cluster that needs to
get replaced.
The cluster was initially deployed using ceph-deploy on Luminous
v12.2.0. The OSDs were created using
ceph-deploy osd create --bluestore cephosd-${osd}:/dev/sd${disk}
--block-wal /dev/nvme0n1 --block-db
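The --block-db argument is cut off above, so for illustration only, this is
roughly what such a create call looks like; the hostname, data disk and the
shared NVMe device used for WAL and DB are placeholders of mine, not the
actual values from our deployment:

# hypothetical example; device names are placeholders
ceph-deploy osd create --bluestore cephosd-01:/dev/sdx \
    --block-wal /dev/nvme0n1 --block-db /dev/nvme0n1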