It shouldn't be -- if you changed pg_num then a bunch of PGs will need to
move and will report in this state. We can check more thoroughly if you
provide the full "ceph -s" output. (Stuff to check for: that all PGs are
active, none are degraded, etc.)
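As a rough sketch of the triage above (a hypothetical helper, not a Ceph
command): scan the HEALTH_WARN summary and separate benign data-movement
states (backfill/backfilling/remapped/recovery) from states that need
attention (degraded, inactive, down, incomplete, stale).

```shell
# Hypothetical helper: classify a Ceph health summary line.
# Only backfill/recovery terms => PGs are just moving after a pg_num change.
# degraded/inactive/down/incomplete/stale => something needs attention.
check_health_line() {
  if echo "$1" | grep -Eq 'degraded|inactive|down|incomplete|stale'; then
    echo "attention"
  else
    echo "data-movement"
  fi
}

# The warning from the original message: movement only, so likely benign.
check_health_line "248 pgs backfill; 52 pgs backfilling; 300 pgs stuck unclean"
```

In practice you'd feed this the health line from "ceph -s" (or "ceph health
detail") and watch the misplaced-object percentage fall as backfill completes.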
-Greg

On Wednesday, November 4, 2015, Erming Pei <[email protected]> wrote:

> Hi,
>
>   I found that the pg_num and pgp_num for the metadata pool were too small
> and then increased them.
>   Then I got "300 pgs stuck unclean".
>
>
>   $ ceph -s
>       cluster a4d0879f-abdc-4f9d-8a4b-53ce57d822f1
>        health HEALTH_WARN 248 pgs backfill; 52 pgs backfilling;
>        300 pgs stuck unclean;
>        recovery 58417161/113290060 objects misplaced (51.564%);
>        mds0: Client physics-007:Physics01_data failing to respond to
>        cache pressure
>
> Is it critical?
>
> thanks,
>
> Erming
>
>
>
>
>
> --
> ---------------------------------------------
>  Erming Pei, Ph.D
>  Senior System Analyst; Grid/Cloud Specialist
>
>  Research Computing Group
>  Information Services & Technology
>  University of Alberta, Canada
>
>  Tel: +1 7804929914        Fax: +1 7804921729
> ---------------------------------------------
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
