Hi all,

This evening I was in the process of deploying a Ceph cluster by hand.
I did it by hand because, to my knowledge, ceph-deploy doesn't support
Gentoo, which is what my cluster here runs.

The instructions I followed are these:
http://docs.ceph.com/docs/master/install/manual-deployment
and I'm running the 10.0.2 release of Ceph:

ceph version 10.0.2 (86764eaebe1eda943c59d7d784b893ec8b0c6ff9)

Things went okay bootstrapping the monitors.  I'm running a 3-node
cluster, with OSDs and monitors co-located.  Each node has a 1TB 2.5"
HDD and a 40GB partition on SSD for the journal.
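
For anyone following along at home, the monitor bootstrap portion of
those docs boils down to a ceph.conf along these lines (the fsid,
hostnames and addresses below are placeholders from the docs, not my
real ones):

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = node1, node2, node3
mon host = 192.0.2.1, 192.0.2.2, 192.0.2.3
auth cluster required = cephx
auth service required = cephx
auth client required = cephx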

Things went pear-shaped, however, when I tried bootstrapping the OSDs.
All was going fine until it came time to activate my first OSD.

ceph-disk activate barfed because I didn't have the bootstrap-osd key.
The documentation never mentions that one needs to be created, or how
to do it.  There's a brief note about using --activate-key, but no word
on what to pass as the argument.  I tried passing in my admin keyring
from /etc/ceph, but it didn't like that.
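
For anyone else who trips over this: from digging around afterwards, I
believe the missing step is generating the bootstrap-osd key on a
monitor node, something like the following (untested beyond my own
fumbling; the keyring path is just the conventional one):

mkdir -p /var/lib/ceph/bootstrap-osd
ceph auth get-or-create client.bootstrap-osd \
    mon 'allow profile bootstrap-osd' \
    -o /var/lib/ceph/bootstrap-osd/ceph.keyring

With that in place ceph-disk should find the key itself, or it can be
pointed at it explicitly:

ceph-disk activate --activate-key \
    /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/sdb1

(where /dev/sdb1 is a placeholder for the OSD's data partition).  If
someone who knows the tooling better can confirm, that'd be great.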

In the end, I muddled my way through the manual OSD deployment steps,
which worked fine (a rough sketch of what I did is below).  After
correcting permissions for the ceph user, I found the OSDs came up.
As an added bonus, since I've now reproduced the journal permission
issue here, I know how to work around the same problem at work: a udev
rules file like the following:

SUBSYSTEM=="block", KERNEL=="sda7", OWNER="ceph", GROUP="ceph", MODE="0600"
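
That rule lives in a file under /etc/udev/rules.d/ (I used something
like 99-ceph-journal.rules; the name doesn't matter much) and takes
effect after

udevadm control --reload-rules && udevadm trigger

or simply a reboot.

For the archives, the long-form OSD steps boil down to roughly the
following.  This is from memory, so treat it as a sketch rather than
gospel; /dev/sdb1 is a placeholder for the HDD's data partition, and
/dev/sda7 is my SSD journal partition:

# Allocate an OSD id and create its data directory
OSD_ID=$(ceph osd create)
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}

# Make a filesystem on the data disk and mount it
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /var/lib/ceph/osd/ceph-${OSD_ID}

# Initialise the OSD, pointing it at the SSD journal partition
ceph-osd -i ${OSD_ID} --mkfs --mkkey --osd-journal /dev/sda7

# Register the OSD's key and place it in the CRUSH map
# (assumes the host bucket already exists; if not, create one with
# `ceph osd crush add-bucket <hostname> host` first)
ceph auth add osd.${OSD_ID} osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-${OSD_ID}/keyring
ceph osd crush add osd.${OSD_ID} 1.0 host=$(hostname -s)

# Infernalis onwards runs the daemons as the ceph user
chown -R ceph:ceph /var/lib/ceph/osd/ceph-${OSD_ID}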

The cluster seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.

Regards,
-- 
Stuart Longland
Systems Engineer
     _ ___
\  /|_) |                           T: +61 7 3535 9619
 \/ | \ |     38b Douglas Street    F: +61 7 3535 9699
   SYSTEMS    Milton QLD 4064       http://www.vrt.com.au

