Guido,

My apologies. I seem to have omitted the PG troubleshooting section from
the index. It has been addressed. See
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/

Ceph OSDs peer with and check on each other, so running a cluster with
only one OSD is not recommended. Operationally, it's perfectly fine to
bootstrap a cluster that way, but an operating cluster should have at
least two OSDs running. See
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/#peering and
http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/ to
learn how OSDs interact with each other and with the monitors.
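
If you want to watch what the placement groups are doing while you
experiment, a few standard CLI commands are handy (the PG id in the last
one is just a placeholder):

    $ ceph health detail           # lists the PGs behind a HEALTH_WARN
    $ ceph pg dump_stuck unclean   # PGs that have not reached active+clean
    $ ceph pg 0.1 query            # peering state and acting set for one PG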

Regards,


John



On Mon, May 6, 2013 at 8:04 AM, Guido Winkelmann
<[email protected]> wrote:

> On Monday, May 6, 2013, 16:59:12, Wido den Hollander wrote:
> > On 05/06/2013 04:51 PM, Guido Winkelmann wrote:
> > > On Monday, May 6, 2013, 16:41:43, Wido den Hollander wrote:
> > >> On 05/06/2013 04:15 PM, Guido Winkelmann wrote:
> > >>> On Monday, May 6, 2013, 16:05:31, Wido den Hollander wrote:
> > >>>> On 05/06/2013 04:00 PM, Guido Winkelmann wrote:
> > >>>>> Hi,
> > >>>>>
> > >>>>> How do I run a 1-node cluster with no replication?
> > >>>>>
> > >>>>> I'm trying to run a small 1-node cluster on my local workstation
> > >>>>> and another on my notebook for experimentation/development
> > >>>>> purposes, but since I only have one OSD, I'm always getting
> > >>>>> HEALTH_WARN as the cluster status from ceph -s. Can I somehow
> > >>>>> tell ceph to just not bother with replication for this cluster?
> > >>>>
> > >>>> Have you set min_size to 1 for all the pools?
> > >>>
> > >>> You mean in the crushmap?
> > >>
> > >> No, it's a pool setting.
> > >>
> > >> See: http://ceph.com/docs/master/rados/operations/pools/#set-pool-values
> > >
> > > Hm, I set that to 1 now, and nothing changed:
> > Have you also set "size" to 1? That means no replication.
> >
> > Both size and min_size should be set to 1.
>
> I set size to 1 now, too. ceph -s no longer reports degraded pgs, but I
> still get a HEALTH_WARN:
>
> $ ceph -s
>    health HEALTH_WARN 384 pgs stuck unclean
>
>
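
P.S. For the archives: the "size" and "min_size" settings discussed above
are per-pool values, so they have to be set on every pool. A minimal
sketch, assuming a pool named "rbd" (substitute your own pool names):

    $ ceph osd lspools                  # list the pools in the cluster
    $ ceph osd pool set rbd size 1      # keep one copy only, i.e. no replication
    $ ceph osd pool set rbd min_size 1  # allow I/O with a single copy
    $ ceph osd pool get rbd size        # verify the setting took effect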



-- 
John Wilkins
Senior Technical Writer
Inktank
[email protected]
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
