Date: Thu, 10 Oct 2013 05:15:27 -0700
From: Kyle Bader kyle.ba...@gmail.com
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Expanding ceph cluster by adding more OSDs
I've contracted and expanded clusters by up to a rack of 216 OSDs - 18
nodes with 12 drives each. New disks are configured with a CRUSH weight of 0,
and I slowly add weight (in 0.01 to 0.1 increments), wait for the cluster to
become active+clean, and then add more weight. I was expanding after
contraction.
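[Editor's note: a minimal sketch of that reweight loop as a shell script. The
OSD id (osd.216), the 0.1 step size, and the 1.0 target weight are illustrative
assumptions, not values from this thread.]

    #!/bin/sh
    # Gradually weight a new OSD into CRUSH; osd.216, the 0.1 steps,
    # and the 1.0 target weight are hypothetical placeholders.
    for w in 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0; do
        ceph osd crush reweight osd.216 $w
        # Crude wait for recovery: HEALTH_OK implies all PGs are
        # active+clean; a production script should inspect 'ceph pg stat'
        # for stragglers rather than polling overall health.
        until ceph health | grep -q HEALTH_OK; do
            sleep 60
        done
    done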
Thanks Mike.
Is there any documentation for that?
Thanks,
Guang
On Oct 9, 2013, at 9:58 PM, Mike Lowe wrote:
You can add PGs; the process is called splitting. I don't think PG merging,
the reduction in the number of PGs, is ready yet.
There used to be, can't find it right now. Something like 'ceph osd pool set
<pool> pg_num <num>' and then 'ceph osd pool set <pool> pgp_num <num>' to
actually move your data into the new PGs. I successfully did it several months
ago, when bobtail was current.
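[Editor's note: a hedged example of that split; the pool name 'data' and the
count 8192 are illustrative, not values from this thread.]

    # Raise pg_num first; this creates the new PGs by splitting...
    ceph osd pool set data pg_num 8192
    # ...then raise pgp_num so data is actually rebalanced into them.
    ceph osd pool set data pgp_num 8192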
On Oct 9, 2013, at 10:30 PM, Guang wrote:
Thanks Mike. I get your point.
There are still a few things confusing me:
1) We expand the Ceph cluster by adding more OSDs, which triggers re-balancing
of PGs across the old and new OSDs, and likely breaks the optimized PG number
for the cluster.
2) We can add more PGs, which will trigger re-balancing as well.
Hi ceph-users,
Ceph recommends that the number of PGs in a pool be (100 * OSDs) / Replicas.
Per my understanding, the number of PGs for a pool stays fixed even as we scale
the cluster out or in by adding or removing OSDs. Does that mean that if we
double the OSD count, the PG number for a pool is no longer optimal?
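[Editor's note: a worked example of that formula; the numbers are illustrative,
borrowing the 216 OSDs from Kyle's rack and assuming 3x replication.]

    (100 * 216 OSDs) / 3 replicas = 7200 PGs
    rounded up to the next power of two: 8192

The round-up to a power of two follows the usual guidance in the Ceph docs.
Doubling the OSD count doubles the target PG count, which is why expanding the
cluster raises the splitting question discussed above.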