There used to be documentation for it, can't find it right now.  The commands are 
'ceph osd pool set <pool> pg_num <num>' and then 'ceph osd pool set <pool> 
pgp_num <num>' to actually move your data into the new PGs.  I successfully did 
it several months ago, when bobtail was current.
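For what it's worth, here is a minimal sketch of the sizing rule-of-thumb quoted further down in the thread ((100 * OSDs) / replicas), rounded up to the next power of two as is conventional for pg_num. The function name and the example numbers are only illustrative:

```python
def suggested_pg_num(num_osds: int, replicas: int) -> int:
    """Rule-of-thumb target for pg_num: (100 * OSDs) / replicas,
    rounded up to the next power of two (illustrative sketch only)."""
    raw = (100 * num_osds) // replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

# e.g. 10 OSDs with 3 replicas: raw target 333, rounded up to 512
print(suggested_pg_num(10, 3))
```

After raising pg_num to the new value, setting pgp_num to the same value is what triggers the actual data movement.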

Sent from my iPad

> On Oct 9, 2013, at 10:30 PM, Guang <[email protected]> wrote:
> 
> Thanks Mike.
> 
> Is there any documentation for that?
> 
> Thanks,
> Guang
> 
>> On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
>> 
>> You can add PGs; the process is called splitting.  I don't think PG 
>> merging (reducing the number of PGs) is ready yet.
>> 
>>> On Oct 8, 2013, at 11:58 PM, Guang <[email protected]> wrote:
>>> 
>>> Hi ceph-users,
>>> Ceph recommends that the number of PGs in a pool be (100 * OSDs) / replicas. 
>>> Per my understanding, the number of PGs for a pool stays fixed even when we 
>>> scale the cluster out or in by adding or removing OSDs. Does that mean that 
>>> if we double the number of OSDs, the pool's PG count is no longer optimal, 
>>> and there is no way to correct it?
>>> 
>>> 
>>> Thanks,
>>> Guang
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 