On 11/8/18 4:54 AM, Hayashida, Mami wrote:
> Wow, after all of this, everything went well and I was able to convert
> osd.120-129 from Filestore to Bluestore.
Glad to hear it works! Make sure you reboot and check that everything
comes back up cleanly.
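A quick way to check after the reboot (just a sketch, assuming systemd-managed
OSDs and the default cluster name):

  for i in {120..129}; do systemctl is-active ceph-osd@$i; done   # should all say "active"
  ceph osd tree | grep -E 'osd\.12[0-9]'                          # all ten up and in
  ceph -s                                                         # overall cluster health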
FWIW, I expect most of the files under
Wow, after all of this, everything went well and I was able to convert
osd.120-129 from Filestore to Bluestore.
***
root@osd2:~# ls -l /var/lib/ceph/osd/ceph-120
total 48
-rw-r--r-- 1 ceph ceph 384 Nov 7 14:34 activate.monmap
lrwxrwxrwx 1 ceph ceph 19 Nov 7 14:38 block -> /dev/hdd120/data120
On 11/8/18 2:15 AM, Hayashida, Mami wrote:
> Thank you very much. Yes, I am aware that zapping the SSD and
> converting it to LVM requires stopping all the FileStore OSDs whose
> journals are on that SSD first. I will add in the `hdparm` to my steps.
> I did run into remnants of gpt information
Thank you very much. Yes, I am aware that zapping the SSD and converting
it to LVM requires stopping all the FileStore OSDs whose journals are on
that SSD first. I will add in the `hdparm` to my steps. I did run into
remnants of gpt information lurking around when trying to re-use osd disks
in
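(A rough sketch of one way to clear such remnants before re-using a disk --
/dev/sdX is a placeholder here, and all of these are destructive:)

  ceph-volume lvm zap /dev/sdX    # clears LVM/FS metadata ceph-volume knows about
  sgdisk --zap-all /dev/sdX       # clears leftover GPT/MBR structures
  wipefs --all /dev/sdX           # clears any remaining filesystem signatures
  hdparm -z /dev/sdX              # have the kernel re-read the partition table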
On 11/8/18 12:29 AM, Hayashida, Mami wrote:
> Yes, that was indeed a copy-and-paste mistake. I am trying to use
> /dev/sdh (hdd) for data and a part of /dev/sda (ssd) for the journal.
> That's how the Filestore is set-up. So, for the Bluestore, data on
> /dev/sdh, wal and db on /dev/sda.
Sorry, my mix-up.
Therefore you shouldn’t be running zap against /dev/sda, as this will wipe
the whole SSD.
I guess your current setup is using a partition on /dev/sda? Like
/dev/sda2, for example.
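(In that case, zap just that partition rather than the whole device -- the
partition name below is only an example, check which one holds the journal:)

  ceph-volume lvm zap /dev/sda2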
,Ashley
On Wed, 7 Nov 2018 at 11:30 PM, Hayashida, Mami wrote:
> Yes, that was indeed a
Yes, that was indeed a copy-and-paste mistake. I am trying to use /dev/sdh
(hdd) for data and a part of /dev/sda (ssd) for the journal. That's how
the Filestore is set-up. So, for the Bluestore, data on /dev/sdh, wal and
db on /dev/sda.
On Wed, Nov 7, 2018 at 10:26 AM, Ashley Merrick
ceph osd destroy 70 --yes-i-really-mean-it
I am guessing that’s a copy and paste mistake and should say 120.
Is the SSD at /dev/sdh fully for osd.120, or is one partition on this SSD the
journal and the other partitions for other OSDs?
On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami wrote:
> I
I would agree with that. So, here is what I am planning on doing today. I
will try this from scratch on a different OSD node from the very first step
and log input and output for every step. Here is the outline of what I
think (based on all the email exchanges so far) should happen.
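(To capture everything, I could run the whole session under the `script`
utility -- a sketch, the log file name is arbitrary:)

  script -a osd-conversion.log     # start recording the whole shell session
  # ... run each conversion step here ...
  exit                             # stop recording; the log keeps commands and output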
***
It is pretty difficult to know what step you are missing if all we are
seeing is the `activate --all` command.
Maybe try them one by one, capturing each command throughout the
process, with its output. In the filestore-to-bluestore guides we never
advertise `activate --all`, for example.
Something is
If /dev/hdd67/data67 does not exist, try `vgchange -a y` and that should make
it exist, then try again. Not sure why this would ever happen, though, since I
expect lower level stuff to take care of activating LVM LVs.
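(i.e. something along these lines, assuming the VG is named hdd67 as
elsewhere in this thread:)

  vgchange -a y hdd67          # activate the volume group's logical volumes
  ls -l /dev/hdd67/data67      # the device node should now be present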
If it does exist, I get the feeling that your original ceph-volume prepare
This is becoming even more confusing. I got rid of those
ceph-disk@6[0-9].service
(which had been symlinked to /dev/null). Moved
/var/lib/ceph/osd/ceph-6[0-9] to /var/./osd_old/. Then, I ran
`ceph-volume lvm activate --all`. I got once again
root@osd1:~# ceph-volume lvm activate --all
-->
On 11/7/18 5:27 AM, Hayashida, Mami wrote:
> 1. Stopped osd.60-69: no problem
> 2. Skipped this and went to #3 to check first
> 3. Here, `find /etc/systemd/system | grep ceph-volume` returned
> nothing. I see in that directory
>
> /etc/systemd/system/ceph-disk@60.service # and 61 - 69.
>
1. Stopped osd.60-69: no problem
2. Skipped this and went to #3 to check first
3. Here, `find /etc/systemd/system | grep ceph-volume` returned nothing. I
see in that directory
/etc/systemd/system/ceph-disk@60.service # and 61 - 69.
No ceph-volume entries.
On Tue, Nov 6, 2018 at 11:43 AM,
Ok. I will go through this this afternoon and let you guys know the
result. Thanks!
On Tue, Nov 6, 2018 at 11:32 AM, Hector Martin wrote:
> On 11/7/18 1:00 AM, Hayashida, Mami wrote:
> > I see. Thank you for clarifying lots of things along the way -- this
> > has been extremely helpful.
On 11/7/18 1:00 AM, Hayashida, Mami wrote:
> I see. Thank you for clarifying lots of things along the way -- this
> has been extremely helpful. Neither "df | grep osd" nor "mount | grep
> osd" shows ceph-60 through 69.
OK, that isn't right then. I suggest you try this:
1) bring down OSD 60-69
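(for 1), a loop along these lines should do it, assuming systemd-managed OSDs:)

  for i in {60..69}; do systemctl stop ceph-osd@$i; done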
But this is correct, isn't it?
root@osd1:~# ceph-volume lvm list --format=json hdd60/data60
{
"60": [
{
"devices": [
"/dev/sdh"
],
"lv_name": "data60",
"lv_path": "/dev/hdd60/data60",
"lv_size": "3.64t",
I see. Thank you for clarifying lots of things along the way -- this has
been extremely helpful. Neither "df | grep osd" nor "mount | grep osd"
shows ceph-60 through 69.
On Tue, Nov 6, 2018 at 10:57 AM, Hector Martin wrote:
>
>
> On 11/7/18 12:48 AM, Hayashida, Mami wrote:
> > All other OSDs
On 11/7/18 12:48 AM, Hayashida, Mami wrote:
> All other OSDs that I converted (#60-69) look basically identical while
> the Filestore OSDs (/var/lib/ceph/osd/ceph-70 etc.) look different
> obviously. When I run "df" it does NOT list those converted osds (only
> the Filestore ones). In other
All other OSDs that I converted (#60-69) look basically identical while the
Filestore OSDs (/var/lib/ceph/osd/ceph-70 etc.) look different obviously.
When I run "df" it does NOT list those converted osds (only the Filestore
ones). In other words, /dev/sdh1 where osd.60 should be is not listed.
On 11/7/18 12:30 AM, Hayashida, Mami wrote:
> So, currently this is what /var/lib/ceph/osd/ceph-60 shows. Is it not
> correct? I don't know what I should expect to see.
>
> root@osd1:~# ls -l /var/lib/ceph/osd/ceph-60
> total 86252
> -rw-r--r-- 1 ceph ceph 384 Nov 2 16:20
So, currently this is what /var/lib/ceph/osd/ceph-60 shows. Is it not
correct? I don't know what I should expect to see.
root@osd1:~# ls -l /var/lib/ceph/osd/ceph-60
total 86252
-rw-r--r-- 1 ceph ceph 384 Nov 2 16:20 activate.monmap
-rw-r--r-- 1 ceph ceph 10737418240 Nov 5 16:32
On 11/6/18 6:03 AM, Hayashida, Mami wrote:
> WOW. With you two guiding me through every step, the 10 OSDs in
> question are now added back to the cluster as Bluestore disks!!! Here
> are my responses to the last email from Hector:
>
> 1. I first checked the permissions and they looked like this
From what I observed, however, until I made that last change in the UDEV
rule, I simply could not get those OSDs started. I will try converting the
next 10 OSDs (osd.70-79) tomorrow, following all the steps you have shown
me in this email thread, and will report back to you guys if/where I
On Mon, Nov 5, 2018 at 4:21 PM Hayashida, Mami wrote:
>
> Yes, I still have the volume log showing the activation process for ssd0/db60
> (and 61-69 as well). I will email it to you directly as an attachment.
In the logs, I see that ceph-volume does set the permissions correctly:
[2018-11-02
Yes, I still have the volume log showing the activation process for
ssd0/db60 (and 61-69 as well). I will email it to you directly as an
attachment.
On Mon, Nov 5, 2018 at 4:14 PM, Alfredo Deza wrote:
> On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami wrote:
> >
> > WOW. With you two
On Mon, Nov 5, 2018 at 4:04 PM Hayashida, Mami wrote:
>
> WOW. With you two guiding me through every step, the 10 OSDs in question are
> now added back to the cluster as Bluestore disks!!! Here are my responses to
> the last email from Hector:
>
> 1. I first checked the permissions and they
WOW. With you two guiding me through every step, the 10 OSDs in question
are now added back to the cluster as Bluestore disks!!! Here are my
responses to the last email from Hector:
1. I first checked the permissions and they looked like this
root@osd1:/var/lib/ceph/osd/ceph-60# ls -l
total 56
On 11/6/18 3:31 AM, Hayashida, Mami wrote:
> 2018-11-05 12:47:01.075573 7f1f2775ae00 -1
> bluestore(/var/lib/ceph/osd/ceph-60) _open_db add block
> device(/var/lib/ceph/osd/ceph-60/block.db) returned: (13) Permission denied
Looks like the permissions on the block.db device are wrong. As far
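(One way to check and fix that by hand -- a sketch using the paths from this
thread; ceph-volume normally does this chown itself during activation:)

  ls -lL /var/lib/ceph/osd/ceph-60/block.db                          # owner of the symlink target
  chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-60/block.db)"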
On 11/6/18 3:21 AM, Alfredo Deza wrote:
> On Mon, Nov 5, 2018 at 11:51 AM Hector Martin wrote:
>>
>> Those units don't get triggered out of nowhere, there has to be a
>> partition table with magic GUIDs or a fstab or something to cause them
>> to be triggered. The better way should be to get rid
I already ran the "ceph-volume lvm activate --all " command right after I
prepared (using "lvm prepare") those OSDs. Do I need to run the "activate"
command again?
On Mon, Nov 5, 2018 at 1:24 PM, Alfredo Deza wrote:
> On Mon, Nov 5, 2018 at 12:54 PM Hayashida, Mami wrote:
> >
> > I
On Mon, Nov 5, 2018 at 12:54 PM Hayashida, Mami wrote:
>
> I commented out those lines and, yes, I was able to restart the system and
> all the Filestore OSDs are now running. But I cannot start the converted
> Bluestore OSDs (the service). When I look up the log for osd.60, this is what I
>
On Mon, Nov 5, 2018 at 11:51 AM Hector Martin wrote:
>
> Those units don't get triggered out of nowhere, there has to be a
> partition table with magic GUIDs or a fstab or something to cause them
> to be triggered. The better way should be to get rid of that instead of
> overriding the ceph-disk
I commented out those lines and, yes, I was able to restart the system and
all the Filestore OSDs are now running. But I cannot start the converted
Bluestore OSDs (the service). When I look up the log for osd.60, this is what
I see:
2018-11-05 12:47:00.756794 7f1f2775ae00 0 set uid:gid to
On 11/6/18 2:01 AM, Hayashida, Mami wrote:
> I did find in /etc/fstab entries like this for those 10 disks
>
> /dev/sdh1 /var/lib/ceph/osd/ceph-60 xfs noatime,nodiratime 0 0
>
> Should I comment all 10 of them out (for osd.{60-69}) and try rebooting
> again?
Yes. Anything that references any
I did find in /etc/fstab entries like this for those 10 disks
/dev/sdh1 /var/lib/ceph/osd/ceph-60 xfs noatime,nodiratime 0 0
Should I comment all 10 of them out (for osd.{60-69}) and try rebooting
again?
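(If so, I suppose something like this would comment all ten out in one go --
just a sketch; I would back up /etc/fstab first:)

  cp /etc/fstab /etc/fstab.bak
  sed -i '/\/var\/lib\/ceph\/osd\/ceph-6[0-9]/ s/^/#/' /etc/fstab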
On Mon, Nov 5, 2018 at 11:54 AM, Hayashida, Mami wrote:
> I was just going to write
I was just going to write that the "ln" command did not solve the problem.
When I rebooted the node, it again went into an emergency mode and I got
exactly the same errors (systemd[1]: Timed out waiting for device
dev-sdh1.device.; -- Subject: Unit dev-sdh1.device has failed...). I will
look into
Those units don't get triggered out of nowhere, there has to be a
partition table with magic GUIDs or a fstab or something to cause them
to be triggered. The better way should be to get rid of that instead of
overriding the ceph-disk service instances, I think.
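(To see what is actually triggering them, I would look at the partition
metadata and fstab -- a sketch, using the device from your logs:)

  sgdisk -i 1 /dev/sdh      # partition type GUID that ceph-disk's udev rules key on
  blkid /dev/sdh1           # any leftover filesystem signature
  grep sdh1 /etc/fstab      # static mount entries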
Given dev-sdh1.device is trying to
Alright. Thanks -- I will try this now.
On Mon, Nov 5, 2018 at 11:36 AM, Alfredo Deza wrote:
> On Mon, Nov 5, 2018 at 11:33 AM Hayashida, Mami wrote:
> >
> > But I still have 50 other Filestore OSDs on the same node, though.
> Wouldn't doing it all at once (by not identifying the osd-id) be
On Mon, Nov 5, 2018 at 11:33 AM Hayashida, Mami wrote:
>
> But I still have 50 other Filestore OSDs on the same node, though. Wouldn't
> doing it all at once (by not identifying the osd-id) be a problem for those?
> I have not migrated data out of those 50 OSDs yet.
Sure, like I said, if you
But I still have 50 other Filestore OSDs on the same node, though.
Wouldn't doing it all at once (by not identifying the osd-id) be a problem
for those? I have not migrated data out of those 50 OSDs yet.
On Mon, Nov 5, 2018 at 11:31 AM, Alfredo Deza wrote:
> On Mon, Nov 5, 2018 at 11:24 AM
On Mon, Nov 5, 2018 at 11:24 AM Hayashida, Mami wrote:
>
> Thank you for all of your replies. Just to clarify...
>
> 1. Hector: I did unmount the file system if what you meant was unmounting
> the /var/lib/ceph/osd/ceph-$osd-id for those disks (in my case osd.60-69)
> before running the
Thank you for all of your replies. Just to clarify...
1. Hector: I did unmount the file system if what you meant was unmounting
the /var/lib/ceph/osd/ceph-$osd-id for those disks (in my case osd.60-69)
before running the ceph-volume lvm zap command
2. Alfredo: so I can at this point run the
On Mon, Nov 5, 2018 at 10:43 AM Hayashida, Mami wrote:
>
> Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
> mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns
> nothing.). They don't show up when I run "df", either.
>
> On Mon, Nov 5, 2018 at 10:15
On 11/6/18 1:08 AM, Hector Martin wrote:
> On 11/6/18 12:42 AM, Hayashida, Mami wrote:
>> Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
>> mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns
>> nothing.). They don't show up when I run "df", either.
>
On 11/6/18 12:42 AM, Hayashida, Mami wrote:
> Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
> mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns
> nothing.). They don't show up when I run "df", either.
This is expected. ceph-volume with BlueStore does
On Mon, 5 Nov 2018, 21:13 Hayashida, Mami wrote:
> Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
> mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns
> nothing.). They don't show up when I run "df", either.
>
ceph-volume command automatically mount
Additional info -- I know that /var/lib/ceph/osd/ceph-{60..69} are not
mounted at this point (i.e. mount | grep ceph-60, and 61-69, returns
nothing.). They don't show up when I run "df", either.
On Mon, Nov 5, 2018 at 10:15 AM, Hayashida, Mami wrote:
> Well, over the weekend the whole server
Well, over the weekend the whole server went down and is now in the
emergency mode. (I am running Ubuntu 16.04). When I run "journalctl -p
err -xb" I see that
systemd[1]: Timed out waiting for device dev-sdh1.device.
-- Subject: Unit dev-sdh1.device has failed
-- Defined-By: systemd
--
On Fri, Nov 2, 2018 at 5:04 PM Hayashida, Mami wrote:
>
> I followed all the steps Hector suggested, and almost everything seems to
> have worked fine. I say "almost" because one out of the 10 osds I was
> migrating could not be activated even though everything up to that point
> worked just
On 03/11/2018 06:03, Hayashida, Mami wrote:
ceph-volume lvm activate --all
...
--> Activating OSD ID 67 FSID 17cd6755-76f9-4160-906c-XX
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-67
--> Absolute path not found for executable: restorecon
--> Ensure $PATH environment variable
I followed all the steps Hector suggested, and almost everything seems to
have worked fine. I say "almost" because one out of the 10 osds I was
migrating could not be activated even though everything up to that point
worked just as well for that osd as the other ones. Here is the output for
that
Thank you, both of you. I will try this out very soon.
On Wed, Oct 31, 2018 at 8:48 AM, Alfredo Deza wrote:
> On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami wrote:
> >
> > Thank you for your replies. So, if I use the method Hector suggested (by
> creating PVs, VGs etc. first), can I
On Wed, Oct 31, 2018 at 8:28 AM Hayashida, Mami wrote:
>
> Thank you for your replies. So, if I use the method Hector suggested (by
> creating PVs, VGs etc. first), can I add the --osd-id parameter to the
> command as in
>
> ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db
Thank you for your replies. So, if I use the method Hector suggested (by
creating PVs, VGs etc. first), can I add the --osd-id parameter to the
command as in
ceph-volume lvm prepare --bluestore --data hdd0/data0 --block.db ssd/db0
--osd-id 0
ceph-volume lvm prepare --bluestore --data
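(If I read the migration doc correctly, re-using an existing ID also means
marking it destroyed first, e.g.:

  ceph osd destroy 0 --yes-i-really-mean-it

before prepare will accept --osd-id 0 -- but please correct me if that is wrong.)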
On Wed, Oct 31, 2018 at 5:22 AM Hector Martin wrote:
>
> On 31/10/2018 05:55, Hayashida, Mami wrote:
> > I am relatively new to Ceph and need some advice on Bluestore migration.
> > I tried migrating a few of our test cluster nodes from Filestore to
> > Bluestore by following this
> >
On 31/10/2018 05:55, Hayashida, Mami wrote:
I am relatively new to Ceph and need some advice on Bluestore migration.
I tried migrating a few of our test cluster nodes from Filestore to
Bluestore by following this
(http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/)
as
I am relatively new to Ceph and need some advice on Bluestore migration. I
tried migrating a few of our test cluster nodes from Filestore to Bluestore
by following this (
http://docs.ceph.com/docs/luminous/rados/operations/bluestore-migration/)
as the cluster is currently running 12.2.9. The
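(For context, the per-OSD replacement flow in that document is roughly the
following -- paraphrased, so treat it as a sketch rather than the exact
commands; $ID and $DEVICE are placeholders:)

  ceph osd out $ID                               # let data migrate off the OSD
  systemctl stop ceph-osd@$ID                    # once it is safe to destroy
  umount /var/lib/ceph/osd/ceph-$ID
  ceph-volume lvm zap $DEVICE
  ceph osd destroy $ID --yes-i-really-mean-it
  ceph-volume lvm create --bluestore --data $DEVICE --osd-id $ID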