My guess is a networking problem. Do you have VLANs, a cluster network vs.
public network in ceph.conf, etc. configured? Can you ping between all of
your storage nodes on all of their IPs?
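
Something like this on each storage node should show both what is
configured and whether the nodes can actually reach each other (the
192.168.1.x / 10.0.0.x addresses below are only placeholders for your real
public/cluster subnets, and osd.0 is just whichever OSD runs on that host):

  # confirm which networks the running OSD actually picked up
  ceph daemon osd.0 config show | grep -E 'public_network|cluster_network'

  # check reachability to another storage node on both networks
  ping -c 3 192.168.1.12    # e.g. osd2 on the public network (placeholder IP)
  ping -c 3 10.0.0.12       # e.g. osd2 on the cluster network (placeholder IP)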

All of your OSDs communicate with the mons on the public network, but they
communicate with each other for peering on the cluster network.  My guess
is that your public network is working fine, but that your cluster network
is having an issue that prevents the new PGs from ever peering.
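
If the networks are split, the relevant bit of ceph.conf looks roughly like
this (the subnets are placeholders again):

  [global]
  # mons and client traffic use the public network;
  # OSD-to-OSD replication and peering use the cluster network
  public network  = 192.168.1.0/24
  cluster network = 10.0.0.0/24

And to see which OSDs a stuck PG is actually waiting on, something like:

  ceph pg dump_stuck inactive
  ceph pg 1.0 query    # 1.0 is an example PG id taken from the dump_stuck output

should point you at the pair of OSDs that cannot talk to each other (check
the recovery_state / peering section of the query output).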

On Tue, Oct 3, 2017 at 11:12 AM Guilherme Lima <guilherme.l...@farfetch.com>
wrote:

> Here it is,
>
>
>
> size: 3
> min_size: 2
> crush_rule: replicated_rule
>
> [
>     {
>         "rule_id": 0,
>         "rule_name": "replicated_rule",
>         "ruleset": 0,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -1,
>                 "item_name": "default"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     }
> ]
>
>
>
>
>
> Thanks
>
> Guilherme
>
>
>
>
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Webert de Souza Lima
> Sent: Tuesday, October 3, 2017 15:47
> To: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Ceph stuck creating pool
>
>
>
> This looks like something wrong with the crush rule.
>
>
>
> What's the size, min_size and crush_rule of this pool?
>
>  ceph osd pool get POOLNAME size
>
>  ceph osd pool get POOLNAME min_size
>
>  ceph osd pool get POOLNAME crush_rule
>
>
>
> How is the crush rule?
>
>  ceph osd crush rule dump
>
>
>
>
> Regards,
>
>
>
> Webert Lima
>
> DevOps Engineer at MAV Tecnologia
>
> *Belo Horizonte - Brasil*
>
>
>
> On Tue, Oct 3, 2017 at 11:22 AM, Guilherme Lima <
> guilherme.l...@farfetch.com> wrote:
>
> Hi,
>
>
>
> I have installed a virtual Ceph cluster lab. I am using Ceph Luminous v12.2.1.
>
> It consists of 3 mon + 3 OSD nodes.
>
> Each OSD node has 3 x 250 GB OSDs.
>
>
>
> My osd tree:
>
>
>
> ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
> -1       2.19589 root default
> -3       0.73196     host osd1
>  0   hdd 0.24399         osd.0      up  1.00000 1.00000
>  6   hdd 0.24399         osd.6      up  1.00000 1.00000
>  9   hdd 0.24399         osd.9      up  1.00000 1.00000
> -5       0.73196     host osd2
>  1   hdd 0.24399         osd.1      up  1.00000 1.00000
>  7   hdd 0.24399         osd.7      up  1.00000 1.00000
> 10   hdd 0.24399         osd.10     up  1.00000 1.00000
> -7       0.73196     host osd3
>  2   hdd 0.24399         osd.2      up  1.00000 1.00000
>  8   hdd 0.24399         osd.8      up  1.00000 1.00000
> 11   hdd 0.24399         osd.11     up  1.00000 1.00000
>
>
>
> After creating a new pool, it is stuck in creating+peering and
> creating+activating.
>
>
>
>   cluster:
>     id:     d20fdc12-f8bf-45c1-a276-c36dfcc788bc
>     health: HEALTH_WARN
>             Reduced data availability: 256 pgs inactive, 143 pgs peering
>             Degraded data redundancy: 256 pgs unclean
>
>   services:
>     mon: 3 daemons, quorum mon2,mon3,mon1
>     mgr: mon1(active), standbys: mon2, mon3
>     osd: 9 osds: 9 up, 9 in
>
>   data:
>     pools:   1 pools, 256 pgs
>     objects: 0 objects, 0 bytes
>     usage:   10202 MB used, 2239 GB / 2249 GB avail
>     pgs:     100.000% pgs not active
>              143 creating+peering
>              113 creating+activating
>
>
>
> Can anyone help me find the issue?
>
>
>
> Thanks
>
> Guilherme
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
