To: Sergey Malinin; Steven Vacaroaia
Cc: ceph-users
Subject: Re: [ceph-users] ceph luminous - SSD partitions disssapeared
I've seen this issue when I first created our Luminous cluster. I use a custom
systemd service to chown the DB and WAL partitions before the ceph osd services
get started. The script in /usr/loc…
From: Sergey Malinin
To: Steven Vacaroaia
Cc: ceph-users
Subject: Re: [ceph-users] ceph luminous - SSD partitions disssapeared
To make device ownership persist over reboots, you need to set up udev rules.
The article you referenced seems to have nothing to do with bluestore. When you
zapped /dev/sda, you zapped the bluestore metadata as well; the references to the
DB/WAL block storage are no longer valid, and that's why the osd daemon throws
an error.
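A udev rule along these lines would make ownership persistent (the rule file name and the device match are assumptions; match on your actual DB/WAL partitions):

```
# /etc/udev/rules.d/99-ceph-db-wal.rules  (hypothetical file name)
# Give the DB/WAL partitions on the SSD to ceph:ceph at every boot.
KERNEL=="sda[0-9]*", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"
```

Reload with `udevadm control --reload` and trigger with `udevadm trigger` to apply without rebooting.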
From: Steven Vacaroaia <ste...@gmail.com>
Sent: Wednesday, January 3, 2018 7:20:12 PM
To: Sergey Malinin
Cc: ceph-users
Subject: Re: [ceph-users] ceph luminous - SSD partitions disssapeared
> Are actual devices (not only udev links) owned by user “ceph”?
>
> --
> *From:* ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of
> Steven Vacaroaia <ste...@gmail.com>
> *Sent:* Wednesday, January 3, 2018 6:19:45 PM
> *To:* ceph-users
> *Subject:* [ceph-users] ceph luminous - SSD partitions disssapeared
Are actual devices (not only udev links) owned by user “ceph”?
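One way to check this (device and symlink paths below are assumptions; substitute your own DB/WAL partitions and OSD ID):

```shell
# Check the owner of the actual block devices, not just the udev symlinks.
for dev in /dev/sda1 /dev/sda2; do
  if [ -b "$dev" ]; then
    stat -c '%U:%G %n' "$dev"
  fi
done

# If the OSD references its DB through a symlink, resolve it to the real
# device first, then stat that device:
readlink -f /var/lib/ceph/osd/ceph-0/block.db 2>/dev/null || true
```

Each device should report `ceph:ceph`; anything owned by `root` will make the OSD fail to start.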
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Steven
Vacaroaia <ste...@gmail.com>
Sent: Wednesday, January 3, 2018 6:19:45 PM
To: ceph-users
Subject: [ceph-users] ceph luminous - SSD partitions disssapeared
Hi,
After a reboot, all the partitions created on the SSD drive disappeared.
They were used for the bluestore DB and WAL, so the OSDs are down.
The following error messages are in /var/log/messages:
Jan 3 09:54:12 osd01 ceph-osd: 2018-01-03 09:54:12.992218 7f4b52b9ed00 -1
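A quick way to confirm whether the partition table itself is gone after the reboot (the SSD device name is an assumption):

```shell
# List block devices with their partitions and ownership; an SSD that lost
# its partition table will show the bare disk with no child partitions.
lsblk -o NAME,SIZE,TYPE,OWNER,GROUP

# Print the GPT of the SSD to see whether the entries survived
# (hypothetical device name, run as root):
# sgdisk -p /dev/sda
```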