Hi Gandalf and Sage,

I'd just like to confirm that my steps below for replacing a journal disk
are correct. Assuming the journal disk to be replaced is /dev/sdg and that
the two affected OSDs using the disk for their journals are osd.30 and osd.31:

- ceph osd set noout
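
To double-check that the flag took effect before touching any disks, the
cluster status should mention it:

ceph -s | grep noout

(the health output should warn that the noout flag is set)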

- stop affected OSDs

sudo stop ceph-osd id=30
sudo stop ceph-osd id=31
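
Since these are Upstart jobs, the same instance variable should work with
status to verify both daemons are really down before flushing:

sudo status ceph-osd id=30
sudo status ceph-osd id=31

Both should report stop/waiting.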

- flush the journal for the affected OSDs

ceph-osd -i 30 --flush-journal
ceph-osd -i 31 --flush-journal
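
Before going further, I'd also double-check that these two OSDs really do
journal on /dev/sdg. Assuming the default data path and a ceph-disk style
deployment, the journal is a symlink in each OSD's data directory (often
pointing at /dev/disk/by-partuuid/...), so resolving it shows the actual
partition:

ls -l /var/lib/ceph/osd/ceph-30/journal
readlink -f /var/lib/ceph/osd/ceph-30/journal

The resolved path should be a partition on /dev/sdg; same check for osd.31.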

- dump journal partition scheme

sgdisk --backup=/tmp/journal_table /dev/sdg
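
It probably also helps to keep a human-readable copy of the layout, plus a
copy of the backup somewhere more durable than /tmp, for comparison after
the swap (/root/sdg_partitions.txt is just an arbitrary choice of location):

sgdisk --print /dev/sdg > /root/sdg_partitions.txt
cp /tmp/journal_table /root/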

- remove and replace the journal SSD (sdg)

- verify the new journal SSD is detected as /dev/sdg

parted -l
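
Since device names can shift after a hot-swap, it seems safer to cross-check
against the new SSD's model and serial rather than trusting the sdg name
alone; the by-id symlinks embed both:

ls -l /dev/disk/by-id/ | grep sdg

The entry pointing at sdg should match the replacement disk.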

- restore the journal partition scheme

sgdisk --load-backup=/tmp/journal_table /dev/sdg
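
My understanding is that restoring the backup recreates not just the
partition boundaries but also the same partition GUIDs, which is what keeps
any /dev/disk/by-partuuid journal symlinks valid on the new disk. sgdisk
normally asks the kernel to re-read the table, but if the new partitions
don't show up it can be prodded manually and then verified:

partprobe /dev/sdg
sgdisk --print /dev/sdg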

- run mkjournal on the two affected OSDs

ceph-osd -i 30 --mkjournal
ceph-osd -i 31 --mkjournal
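
As a sanity check after mkjournal, each OSD's journal symlink should resolve
to an existing partition on the new disk (again assuming the default data
path):

readlink -f /var/lib/ceph/osd/ceph-30/journal
readlink -f /var/lib/ceph/osd/ceph-31/journal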

- start the OSDs

sudo start ceph-osd id=30
sudo start ceph-osd id=31
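
Once started, both OSDs should rejoin and show as up again; I plan to watch
them come back with something like:

ceph osd tree | grep -E 'osd\.3[01]'
ceph -w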

- ceph osd unset noout
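
And a final status check to make sure the flag is gone and the cluster
settles back to HEALTH_OK:

ceph -s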

Looking forward to your reply, thank you.

Cheers.



On Fri, May 9, 2014 at 1:08 AM, Indra Pramana <[email protected]> wrote:

> Hi Gandalf and Sage,
>
> Many thanks! Will try this and share the outcome.
>
> Cheers.
>
>
> On Fri, May 9, 2014 at 12:55 AM, Gandalf Corvotempesta <
> [email protected]> wrote:
>
>> 2014-05-08 18:43 GMT+02:00 Indra Pramana <[email protected]>:
>> > Since we don't use ceph.conf to indicate the data and journal paths,
>> > how can I recreate the journal partitions?
>>
>> 1. Dump the partition scheme:
>> sgdisk --backup=/tmp/journal_table /dev/sdd
>>
>> 2. Replace the journal disk device
>>
>> 3. Restore the old partition scheme:
>> sgdisk --load-backup=/tmp/journal_table /dev/sdd
>>
>> 4. Run mkjournal for each affected OSD:
>>
>> ceph-osd -i 1 --mkjournal
>> ceph-osd -i 4 --mkjournal
>> ceph-osd -i 6 --mkjournal
>>
>> (1, 4 and 6 are OSD id)
>>
>
>
