Hi Georgios,
I had a few issues with automatic mounting on CentOS two months ago, and
here are a few tips on how we got automatic mounting working with no
entries in fstab. The versions for my tests were CentOS 7.1 with Ceph
Hammer, kernel 3.10.0-229 and udev/systemd 208.
First, I strongly recommend using `ceph-disk list` as a first check. If
all goes well, the output should look like this:
[root@ceph-test ~]# ceph-disk list
/dev/sda :
/dev/sda1 other, xfs, mounted on /boot
/dev/sda2 other, LVM2_member
/dev/sdb :
/dev/sdb1 ceph journal, for /dev/sdd1
/dev/sdb2 ceph journal, for /dev/sde1
/dev/sdb3 ceph journal, for /dev/sdc1
/dev/sdc :
/dev/sdc1 ceph data, active, cluster ceph, osd.2, journal /dev/sdb3
/dev/sdd :
/dev/sdd1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb1
/dev/sde :
/dev/sde1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2
If the partitions are not detected as ceph data/journal, then your
partition type UUIDs are not set properly; this is important for the Ceph
udev rules to work. And if the data-journal associations are not displayed,
you might want to check that the "journal" symlink and "journal_uuid" file
in each OSD directory are correct and point to the right device. That's
only relevant if you're using separate partitions as journals, of course.
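For reference, here is a quick sketch of how to check both by hand; the
device names and the osd.2 path are just the ones from the listing above,
so adjust them to your setup:

# The partition type GUID of a data partition should be the Ceph OSD
# data type, 4fbd7e29-9d25-41b8-afd0-062c0ceff05d:
sgdisk --info=1 /dev/sdc

# The journal symlink should point at the journal partition, and
# journal_uuid should match that partition's PARTUUID:
ls -l /var/lib/ceph/osd/ceph-2/journal
cat /var/lib/ceph/osd/ceph-2/journal_uuid
blkid -o value -s PARTUUID /dev/sdb3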
Then `udevadm` can help you see exactly what is going on in the udev rules
when they run. Try:
udevadm test $(udevadm info -q path /dev/sdc)
(or any other device that's used as data for OSDs)
This command should show you a full log of the events. In our case, the
failure was due to a missing keyring file that made the
`ceph-disk-activate` call from 95-ceph-osd.rules fail.
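If you suspect something similar, re-running the activation by hand usually
shows the error more directly; the paths below assume a default cluster
named "ceph" and reuse the example device from above:

# Re-run the activation step that 95-ceph-osd.rules would trigger:
ceph-disk activate /dev/sdc1

# The activation needs the bootstrap-osd keyring, so check that it exists:
ls -l /var/lib/ceph/bootstrap-osd/ceph.keyring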
Finally, you might also want to try using 60-ceph-partuuid-workaround.rules
instead of 60-ceph-by-parttypeuuid.rules if it's the latter that is in use
on your system. The `udevadm test` log should give good clues as to whether
that's the issue or not.
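To see which of those rules files your system actually ships (the directory
may be /lib/udev/rules.d on some distributions), something like this should
do:

ls /usr/lib/udev/rules.d/ | grep -i ceph
ls /etc/udev/rules.d/ | grep -i ceph    # local overrides, if any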
Kind Regards,
--
Xavier Villaneau
Software Engineer, Concurrent Computer Corporation
On Sat, Apr 1, 2017 at 4:47 AM Georgios Dimitrakakis <[email protected]>
wrote:
Hi,
just to provide some more feedback on this one and on what I've done to
solve it, although I'm not sure this is the most "elegant" solution.
I have manually added to /etc/fstab on all systems the respective mount
points for the Ceph OSDs, e.g. entries like this:
UUID=9d2e7674-f143-48a2-bb7a-1c55b99da1f7 /var/lib/ceph/osd/ceph-0 xfs
defaults 0 0
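For completeness, the filesystem UUID that goes into each fstab line can be
read with blkid; the device name below is just an example:

blkid -o value -s UUID /dev/sdc1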
Then I checked and saw that the "[email protected]" service was "disabled",
which means it wasn't starting by default.
Therefore I enabled all the respective services on all nodes, with
commands like:
systemctl enable [email protected]
After rebooting the nodes, all Ceph OSDs were mounted and the services
were starting by default, so the problem was solved.
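In case it helps anyone else, a small loop like the sketch below enables the
unit for every OSD found on a node; it assumes the default
/var/lib/ceph/osd/ceph-<id> directory layout:

# Enable the ceph-osd unit for every OSD directory present on this node:
for dir in /var/lib/ceph/osd/ceph-*; do
    id="${dir##*-}"
    systemctl enable "ceph-osd@${id}"
done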
As I said, I don't know if this is the correct way to do it, but it
works for me.
I guess that something still goes wrong when the root volume is on LVM,
so all of the above, which should happen automatically, doesn't happen
and requires manual intervention.
Looking forward to any comments on this procedure or on things that I
might have missed.
Regards,
G.
> Hi Tom and thanks a lot for the feedback.
>
> Indeed my root filesystem is on an LVM volume and I am currently
> running CentOS 7.3.1611 with kernel 3.10.0-514.10.2.el7.x86_64 and
> the
> ceph version is 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)
>
> The 60-ceph-by-parttypeuuid.rules on the system is the same as the
> one in the bug you've mentioned, but unfortunately it still doesn't
> work.
>
> So, are there any more ideas on how to further debug it?
>
> Best,
>
> G.
>
>
>> Are you running the CentOS root filesystem on LVM? This
>> (http://tracker.ceph.com/issues/16351 [1]) still seems to be an
>> issue
>> on CentOS 7 that I've seen myself too. After migrating to a standard
>> filesystem layout (i.e. no LVM) the issue disappeared.
>>
>> Regards,
>>
>> Tom
>>
>> -------------------------
>>
>> FROM: ceph-users on behalf of Georgios Dimitrakakis
>> SENT: Thursday, March 23, 2017 10:21:34 PM
>> TO: [email protected]
>> SUBJECT: [ceph-users] CentOS7 Mounting Problem
>>
>> Hello Ceph community!
>>
>> I would like some help with a new CEPH installation.
>>
>> I have installed Jewel on CentOS 7, and after the reboot my OSDs are
>> not mounted automatically; as a consequence, Ceph is not operating
>> normally...
>>
>> What can I do?
>>
>> Could you please help me solve the problem?
>>
>> Regards,
>>
>> G.
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com [2]
>>
>>
>> Links:
>> ------
>> [1] http://tracker.ceph.com/issues/16351
>> [2] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com