On Wed, 8 May 2013, John Wilkins wrote:
> James,
> The output says, "monmap e1: 3 mons at {4=192.168.200.197:6789/0,7=192.168.200.190:6789/0,8=192.168.200.191:6789/0}, election epoch 1104, quorum 0,1,2 4,7,8"
>
> It looks like you have six monitors (0,1,2,4,7,8) with only 3 running. The
> cluster needs a majority, so you'd need 4 of 6 monitors running.
Actually in this case it's confusing because the mons have numeric names
"4", "7", and "8", which then map to ranks 0, 1, and 2 internally; there are
only three monitors here, and all three are in quorum. It is best to give
them alphanumeric names (like the hostname) to avoid this sort of
confusion.
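
For example, naming each daemon after its host in ceph.conf keeps the status
output unambiguous (the hostnames below are only placeholders, not taken from
this cluster):

  [mon.ceph01]
      host = ceph01
      mon addr = 192.168.200.197:6789

  [mon.ceph02]
      host = ceph02
      mon addr = 192.168.200.190:6789

  [mon.ceph03]
      host = ceph03
      mon addr = 192.168.200.191:6789

With names like these, 'ceph status' reports the quorum as
ceph01,ceph02,ceph03 instead of 4,7,8, so the ranks 0,1,2 no longer look
like a second set of daemons.
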
sage
>
>
> On Wed, May 8, 2013 at 4:32 AM, James Harper <[email protected]>
> wrote:
> > On 05/08/2013 08:44 AM, David Zafman wrote:
> > >
> > > According to "osdmap e504: 4 osds: 2 up, 2 in" you have 2 of 4 osds
> > > that are down and out. That may be the issue.
> >
> > Also, running 'ceph health detail' will give you specifics on what is
> > causing the HEALTH_WARN.
> >
>
> # ceph health detail
> HEALTH_WARN
> mon.4 addr 192.168.200.197:6789/0 has 26% avail disk space -- low disk space!
>
> I guess that's the problem.
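>
> Presumably it's the monitor's free-space check flagging this; if I'm reading
> the docs right, the mon warns once its data disk falls below the
> 'mon data avail warn' threshold (30% by default), which would explain why
> 26% shows up as HEALTH_WARN. Freeing up space is the real fix, but the
> threshold can also be adjusted in ceph.conf, e.g.:
>
>   [mon]
>       mon data avail warn = 20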
>
> Thanks
>
> James
>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> [email protected]
> (415) 425-9599
> http://inktank.com
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com