For this, the procedure is generally: stop the OSD, flush its journal, update
the journal symlink in the OSD's data directory to point at the new journal
location, run mkjournal, and start the OSD again. You shouldn't need to change
anything in the ceph.conf file.
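As a concrete sketch of that sequence for a single OSD (the OSD id, partition label, and systemd unit names here are assumptions; adjust for your init system on older releases):

```shell
# Stop the OSD so the journal is quiescent (systemd unit naming assumed)
systemctl stop ceph-osd@0

# Flush pending journal entries down to the backing store
ceph-osd -i 0 --flush-journal

# Point the journal symlink at the new device (path is illustrative)
ln -sf /dev/disk/by-partlabel/journal-1 /var/lib/ceph/osd/ceph-0/journal

# Initialize a fresh journal on the new device, then restart
ceph-osd -i 0 --mkjournal
systemctl start ceph-osd@0
```

These commands act on a live cluster host, so they are an operational sketch rather than something to run verbatim.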
On Thu, Nov 8, 2018 at 2:41 AM wrote:
Hi all,
I have been trying to migrate the journal to an SSD partition for a while;
basically I followed the guide here [1]. I have the configuration below
defined in ceph.conf:
[osd.0]
osd_journal = /dev/disk/by-partlabel/journal-1
And then created the journal this way:
# ceph-osd -i 0 --mkjournal
I’ve actually had to migrate every single journal in many clusters from one
(horrible) SSD model to a better SSD. It went smoothly. You’ll also need to
update your /var/lib/ceph/osd/ceph-*/journal_uuid file.
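A hedged sketch of updating that file for one OSD (the device path and OSD id are assumptions; blkid reads the new partition's PARTUUID):

```shell
# New journal partition (illustrative path)
NEW_JOURNAL=/dev/disk/by-partlabel/journal-1
OSD_DIR=/var/lib/ceph/osd/ceph-0

# Record the new partition UUID so the journal can be located by UUID
PARTUUID=$(blkid -o value -s PARTUUID "$NEW_JOURNAL")
echo "$PARTUUID" > "$OSD_DIR/journal_uuid"

# Keep the journal symlink consistent with the recorded UUID
ln -sf "/dev/disk/by-partuuid/$PARTUUID" "$OSD_DIR/journal"
```

This is an ops fragment against real block devices, so treat it as a template rather than a runnable script.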
Honestly, the only challenging part was mapping and automating the
back-and-forth conversion.
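Automating that across many OSDs might look like the loop below (a sketch under assumptions: systemd unit names, all OSDs on one host, and the new journal symlinks already in place):

```shell
# Flush and recreate the journal for every OSD on this host
for dir in /var/lib/ceph/osd/ceph-*; do
    id=${dir##*-}                       # OSD id from the directory name
    systemctl stop "ceph-osd@$id"
    ceph-osd -i "$id" --flush-journal   # drain the old journal to the store
    ceph-osd -i "$id" --mkjournal       # initialize the new journal
    systemctl start "ceph-osd@$id"
done
```

Doing OSDs one at a time like this keeps only one OSD down per iteration, which limits the recovery load on the cluster.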
> On Dec 1, 2016, at 6:26 PM, Christian Balzer wrote:
>
> On Thu, 1 Dec 2016 18:06:38 -0600 Reed Dier wrote:
>
>> Apologies if this has been asked dozens of times before, but most answers
>> are from pre-Jewel days, and I want to double-check that the methodology
>> still holds.
>>
> It does.
On Thu, 1 Dec 2016 18:06:38 -0600 Reed Dier wrote:
> Apologies if this has been asked dozens of times before, but most answers
> are from pre-Jewel days, and I want to double-check that the methodology
> still holds.
>
It does.
> Currently have 16 OSDs across 8 machines with on-disk journals, created
Apologies if this has been asked dozens of times before, but most answers are
from pre-Jewel days, and I want to double-check that the methodology still
holds.
Currently have 16 OSDs across 8 machines with on-disk journals, created using
ceph-deploy.
These machines have NVMe storage (Intel P3600