Hi David,

Thanks for the explanation! I'll look into how much data each pool will hold.
Thanks!

David Turner <[email protected]> wrote on Thu, Oct 18, 2018 at 9:26 PM:

> Not all pools need the same number of PGs. When you get to that many
> pools, you want to start calculating how much data each pool will hold.
> If one of your pools will have 80% of your data in it, it should have
> 80% of your PGs. The metadata pools for rgw likely won't need more than
> 8 or so PGs each. If your rgw data pool is only going to hold a little
> scratch data, then it won't need very many PGs either.
>
> On Tue, Oct 16, 2018, 3:35 AM Zhenshi Zhou <[email protected]> wrote:
>
>> Hi,
>>
>> I have a cluster that has been serving rbd and cephfs storage for some
>> time. I added rgw to the cluster yesterday and want it to serve object
>> storage. Everything seems good.
>>
>> What I'm confused about is how to calculate the pg/pgp numbers. As we
>> all know, the formula for calculating PGs is:
>>
>> Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) /
>> pool_count
>>
>> Before I created rgw, the cluster had 3 pools (rbd, cephfs_data,
>> cephfs_meta). Now it has 8 pools, including the ones the object
>> service may use: '.rgw.root', 'default.rgw.control',
>> 'default.rgw.meta', 'default.rgw.log' and
>> 'default.rgw.buckets.index'.
>>
>> Should I recalculate the PG number using the new pool count of 8, or
>> should I keep the old PG number?
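For illustration only, here is a small Python sketch of the sizing David describes: compute the usual cluster-wide PG budget from the formula quoted above, then split it across pools by each pool's expected share of data, rounding each pool to a power of two (with a small floor for the tiny rgw metadata pools). The OSD count, replica count, and the data shares are made-up numbers for a hypothetical cluster, not values from this thread.

def total_pg_budget(num_osds, replication, pgs_per_osd=100):
    """Cluster-wide PG budget: OSDs * target-PGs-per-OSD / replica count."""
    return num_osds * pgs_per_osd // replication

def nearest_power_of_two(n, floor=8):
    """Round a PG count to the nearest power of two, never below `floor`."""
    if n < floor:
        return floor
    lower = 1 << (n.bit_length() - 1)
    upper = lower << 1
    return lower if n - lower < upper - n else upper

def pgs_per_pool(num_osds, replication, data_shares):
    """Split the PG budget across pools in proportion to expected data."""
    budget = total_pg_budget(num_osds, replication)
    return {pool: nearest_power_of_two(int(budget * share))
            for pool, share in data_shares.items()}

# Hypothetical 20-OSD, 3-replica cluster; shares are guesses, not measurements.
shares = {
    "rbd": 0.50,
    "cephfs_data": 0.30,
    "default.rgw.buckets.data": 0.15,
    "cephfs_meta": 0.02,
    "default.rgw.buckets.index": 0.02,
    ".rgw.root": 0.01,  # tiny rgw metadata pools land on the 8-PG floor
}
print(pgs_per_pool(20, 3, shares))

With these made-up inputs the dominant rbd pool gets 256 PGs while '.rgw.root' gets the 8-PG floor, which is the point of David's advice: the pool count alone doesn't drive the split, the expected data distribution does.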
