Sure, as requested:

*cephfs* was created using the following commands:

ceph osd pool create cephfs_metadata 128 128
ceph osd pool create cephfs_data 128 128
ceph fs new cephfs cephfs_metadata cephfs_data
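
In case it helps to cross-check, the replication settings those pools ended
up with can be read back like this (on Jewel the attribute is named
crush_ruleset; newer releases call it crush_rule):

ceph osd pool get cephfs_metadata size
ceph osd pool get cephfs_metadata min_size
ceph osd pool get cephfs_metadata crush_ruleset
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data crush_ruleset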

*ceph.conf:*
https://paste.debian.net/895841/


*# ceph osd crush tree*
https://paste.debian.net/895839/

*# ceph osd crush rule list*
[
    "replicated_ruleset",
    "replicated_ruleset_ssd"
]
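
To see which of those rulesets each pool actually uses, "ceph osd dump"
prints a crush_ruleset id on every pool line (just a generic check, nothing
cluster-specific assumed; "ceph osd pool ls detail" should show roughly the
same information):

ceph osd dump | grep '^pool'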

*# ceph osd crush rule dump*
https://paste.debian.net/895842/

*# ceph osd tree*
ID WEIGHT   TYPE NAME                     UP/DOWN REWEIGHT PRIMARY-AFFINITY
-3  0.07999 root default-ssd
-5  0.03999     host dc1-master-ds02-ssd
11  0.03999         osd.11                     up  1.00000          1.00000
-6  0.03999     host dc1-master-ds03-ssd
13  0.03999         osd.13                     up  1.00000          1.00000
-1 31.39999 root default
-2 31.39999     host dc1-master-ds01
 0  3.70000         osd.0                      up  1.00000          1.00000
 1  3.70000         osd.1                      up  1.00000          1.00000
 2  4.00000         osd.2                      up  1.00000          1.00000
 3  4.00000         osd.3                      up  1.00000          1.00000
 4  4.00000         osd.4                      up  1.00000          1.00000
 5  4.00000         osd.5                      up  1.00000          1.00000
 6  4.00000         osd.6                      up  1.00000          1.00000
 7  4.00000         osd.7                      up  1.00000          1.00000
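
If it helps to see where placement is actually failing, one of the stuck PGs
can be queried directly; the pg id below is just a placeholder:

ceph pg dump_stuck undersized
ceph pg <pgid> query    (check the "up" and "acting" sets in the output)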


*# ceph osd pool ls*
.rgw.root
master.rgw.control
master.rgw.data.root
master.rgw.gc
master.rgw.log
master.rgw.intent-log
master.rgw.usage
master.rgw.users.keys
master.rgw.users.email
master.rgw.users.swift
master.rgw.users.uid
master.rgw.buckets.index
master.rgw.buckets.data
master.rgw.meta
master.rgw.buckets.non-ec
rbd
cephfs_metadata
cephfs_data


*# ceph osd pool stats*
https://paste.debian.net/895840/
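
And in case the per-pool stats aren't enough, "ceph health detail" lists the
individual stuck PGs, so the affected pool can be read off the pg id prefix:

ceph health detail | grep -i undersized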




On Tue, Nov 15, 2016 at 10:33 AM Burkhard Linke <
[email protected]> wrote:

> Hi,
>
>
> On 11/15/2016 01:27 PM, Webert de Souza Lima wrote:
> > Not that I know of. On 5 other clusters it works just fine, and the
> > configuration is the same for all of them.
> > On this cluster only radosgw was in use; cephfs was not in use yet,
> > but it had already been created following our procedures.
> >
> > This happened right after mounting it.
> Do you use a different setup for any of the pools?
> active+undersized+degraded means that the crush rules for a PG cannot be
> satisfied, and 128 PGs sounds like the default number of PGs.
>
> With 10 OSDs I would suspect that you do not have enough hosts to satisfy
> all crush requirements. Can you post your crush tree, the crush rules
> and the detailed pool configuration?
>
> Regards,
> Burkhard
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
