On Jan 23, 2013, at 5:10 PM, Dimitri Maziuk <dmaz...@bmrb.wisc.edu> wrote:

> On 01/23/2013 10:19 AM, Patrick McGarry wrote:
> 
>> http://ceph.com/howto/building-a-public-ami-with-ceph-and-openstack/
> 
>> On Wed, Jan 23, 2013 at 10:13 AM, Sam Lang <sam.l...@inktank.com> wrote:
> 
>>> http://ceph.com/docs/master/rbd/rbd-openstack/
> 
> These are both great, I'm sure, but Patrick's page says "I chose to
> follow the 5 minute quickstart guide" and the rbd-openstack page says
> "Important ... you must have a running Ceph cluster."
> 
> My problem is I can't find a "5 minute quickstart guide" for RHEL 6, and
> I didn't get a "running ceph cluster" by trying to follow the existing
> (ubuntu) guide and adjusting it for CentOS 6.3.

http://ceph.com/docs/master/install/rpm/
http://ceph.com/docs/master/start/quick-start/

Between those two links, my own quick start on CentOS 6.3 took maybe 6 minutes. 
YMMV.
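
Roughly, the quick start on CentOS boils down to something like this (a sketch
only; check the linked pages for the exact repo setup, ceph.conf contents, and
keyring path, since I'm quoting the commands from memory):

    # install from the RPM repo described in the first link
    yum install ceph

    # with a minimal /etc/ceph/ceph.conf (one mon, one mds, one osd),
    # create the filesystem and keyring across the nodes
    mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring

    # start the daemons and check cluster health
    service ceph -a start
    ceph health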

After learning that qemu uses librbd (and thus doesn't rely on the rbd kernel 
module), I was happy to stick with the stock CentOS kernel for my servers (with 
updated qemu and libvirt builds).
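
If you want to confirm that a given qemu-img build really has rbd support, a
quick check like this should do it (the pool and image names below are made up):

    # "rbd" should appear in the supported formats list
    qemu-img --help | grep -i 'supported formats'

    # if it does, qemu-img can talk to the cluster directly through librbd
    qemu-img create -f rbd rbd:rbd/test-image 1G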

> So I'm stuck at a point way before those guides become relevant: once I
> had one OSD/MDS/MON box up, I got "HEALTH_WARN 384 pgs degraded; 384 pgs
> stuck unclean; recovery 21/42 degraded (50.000%)" (384 appears to be the
> number of placement groups created by default).
> 
> What does that mean? That I only have one OSD? Or is it genuinely unhealthy?

Assuming you have more than one host, be sure that iptables or another firewall 
isn't preventing communication between the ceph daemons.
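
On CentOS 6 that usually means opening the monitor port and the OSD/MDS port
range in iptables on every node, something like the sketch below (6800:7100 is
the default range as I remember it from the docs; adjust to whatever ports your
daemons actually bind to):

    # -I puts the rules ahead of the default REJECT rule
    iptables -I INPUT -p tcp --dport 6789 -j ACCEPT        # monitor
    iptables -I INPUT -p tcp --dport 6800:7100 -j ACCEPT   # osd/mds range

    # persist across reboots on CentOS 6
    service iptables save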

JN
