On Fri, Oct 19, 2018 at 10:06:06AM +0200, Wido den Hollander wrote:
> 
> 
> On 10/19/18 7:51 AM, xiang....@iluvatar.ai wrote:
> > Hi!
> > 
> > I use ceph 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
> > (stable), and find that:
> > 
> > When expanding the whole cluster, I updated pg_num and everything
> > succeeded, but the status is as below:
> >   cluster:
> >     id:     41ef913c-2351-4794-b9ac-dd340e3fbc75
> >     health: HEALTH_WARN
> >             3 pools have pg_num > pgp_num
> > 
> > Then I updated pgp_num as well, and the warning disappeared.
> > 
> > What confuses me is that when I created the whole cluster for the first
> > time with "ceph osd pool create pool_name pg_num", pgp_num was
> > automatically set equal to pg_num.
> > 
> > But "ceph osd pool set pool_name pg_num" does not do this.
> > 
> > Why is it designed this way?
> > 
> > Why is pgp_num not updated automatically when pg_num is updated?
> > 
> 
> Because when changing pg_num, only the Placement Groups are created; no
> data is moving yet. pgp_num (Placement Groups for Placement) is what
> influences how CRUSH places the data.
> 
> When you change that value, data actually starts to move.
> 
> pgp_num can never be larger than pg_num though.
> 
> Some people choose to increase pgp_num in small steps so that the data
> migration isn't massive.
> 
> Wido
> 

I see what you mean.
Maybe an option to start moving data as soon as pg_num is set would be
useful for people who do their expansion at night and don't care how
massive the data migration is.
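
For now, the manual equivalent would look roughly like this (the pool name
and the numbers below are just placeholders):

    # creates the new PGs, but no data moves yet
    ceph osd pool set mypool pg_num 256

    # raise pgp_num to the same value right away (all migration at once),
    # or in a few smaller steps to spread the migration out
    ceph osd pool set mypool pgp_num 128
    ceph osd pool set mypool pgp_num 192
    ceph osd pool set mypool pgp_num 256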

> > Thanks
> > 

-- 
Best Regards
Dai Xiang
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
