Do you have 4 OSD nodes, or 4 OSD daemons?

Please post the output of:

 * ceph -v
 * ceph -s
 * ceph osd tree


On Fri, Feb 10, 2017 at 5:26 AM, Craig Read <[email protected]> wrote:
> We have 4 OSDs in a test environment that are all stuck unclean.
>
> I’ve tried rebuilding the whole environment with the same result.
>
> The OSDs are running on XFS disks; partition 1 is the OSD data,
> partition 2 is the journal.
>
> We’re also seeing degraded PGs despite having 4 OSDs and a default
> osd pool size of 2.
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
