A couple of things I caught.
The first wasn't a huge issue, but it's good to note.
The second took me a while to figure out.

1. Default attribute:

ceph-deploy new [HOST]
by default writes "filestore xattr use omap = true" into the generated
ceph.conf. That setting is meant for ext4
(http://eu.ceph.com/docs/wip-3060/config-cluster/ceph-conf/#osds), but
ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
uses XFS by default, so the two defaults don't match.
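
For reference, here is roughly what the generated ceph.conf might look
like (a sketch only; the fsid, hostname, and IP are placeholders, not
from a real cluster):

    [global]
    fsid = <generated-uuid>
    mon initial members = node1
    mon host = 192.0.2.10
    auth supported = cephx
    filestore xattr use omap = true

If your OSDs end up on XFS (the ceph-deploy osd create default), that
last line is unnecessary; it only matters for ext4, where the small
xattr size limit requires storing xattrs in omap.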

2. The command to push the admin keyring to a monitor doesn't work as
expected:

ceph-deploy admin [HOST]
copies the keyring to /etc/ceph/ceph.client.admin.keyring, but the monitor
is expecting /etc/ceph/keyring (see the workaround sketch below)
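
One workaround (a sketch; adjust the paths if your setup differs) is to
symlink the keyring to the path the monitor host is looking for:

    sudo ln -s /etc/ceph/ceph.client.admin.keyring /etc/ceph/keyring

Alternatively, the keyring option in ceph.conf can be pointed at the file
ceph-deploy actually writes.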

Hope this helps some people,
Scottix



On Wed, Jun 12, 2013 at 12:12 PM, Scottix <scot...@gmail.com> wrote:

> Thanks Greg,
> I am starting to understand it better.
> I soon realized as well after doing some searching I hit this bug.
> http://tracker.ceph.com/issues/5194
> Which created the problem upon rebooting.
>
> Thank You,
> Scottix
>
>
> On Wed, Jun 12, 2013 at 10:29 AM, Gregory Farnum <g...@inktank.com> wrote:
>
>> On Wed, Jun 12, 2013 at 9:40 AM, Scottix <scot...@gmail.com> wrote:
>> > Hi John,
>> > That makes sense that it affects the ceph cluster map, but it actually
>> > does a little more, like partitioning drives, setting up other
>> > parameters, and even starting the service. The part I find a little
>> > confusing is that I have to configure the ceph.conf file on top of using
>> > ceph-deploy, so it starts to feel like double work, with potential for
>> > error if you get mixed up or expect one thing and ceph-deploy does
>> > another.
>> > I think I can figure out a best practice, but it is worth noting that
>> > just running the commands will get it up and running; it is probably
>> > best to edit the config file as well. I like the new ceph-deploy
>> > commands; they definitely make things more manageable.
>> > A single-page example for install and setup would be highly appreciated,
>> > especially for new users.
>> >
>> > I must have skimmed that section on runtime changes; thanks for pointing
>> > me to the page.
>>
>> Just as a little more context, ceph-deploy is trying to provide a
>> reference for how we expect users to manage ceph when using a
>> configuration management system like Chef. Rather than trying to
>> maintain a canonical ceph.conf (because let's be clear, there is no
>> canonical one as far as Ceph is concerned), each host gets the
>> information it needs in its ceph.conf, and the cluster is put together
>> dynamically based on who's talking to the monitors.
>> The reason you aren't seeing individual OSD entries in any of the
>> configuration files is that the OSDs on a host are actually defined
>> by the presence of OSD stores in /var/lib/ceph/osd/*. Those daemons
>> should be activated automatically, thanks to the magic of udev and our
>> init scripts, whenever you reboot, plug in a drive that stores an OSD,
>> etc.
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>
>
>
> --
> Follow Me: @Scottix <http://www.twitter.com/scottix>
> http://about.me/scottix
> scot...@gmail.com
>



-- 
Follow Me: @Scottix <http://www.twitter.com/scottix>
http://about.me/scottix
scot...@gmail.com
