Re: [ceph-users] ceph-mon always election when change crushmap in firefly

2015-09-24 Thread Sage Weil
On Thu, 24 Sep 2015, Alexander Yang wrote:
> I ran 'ceph osd crush dump | tail -n 20' and got:
>
>     "type": 1,
>     "min_size": 1,
>     "max_size": 10,
>     "steps": [
>         { "op": "take",
>           "item": -62,
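For readers following along, the truncated JSON above is the tail of the "rules" array in the crushmap; a common way to see the full rule definitions is to export and decompile the map with crushtool. A minimal sketch (the file names are arbitrary, not taken from the thread):

    # Export the binary crushmap from the running cluster
    ceph osd getcrushmap -o crushmap.bin

    # Decompile it into a readable text form
    crushtool -d crushmap.bin -o crushmap.txt

    # The rules section at the end shows the take/chooseleaf/emit steps in full
    tail -n 40 crushmap.txt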

Re: [ceph-users] ceph-mon always election when change crushmap in firefly

2015-09-23 Thread Sage Weil
On Wed, 23 Sep 2015, Alexander Yang wrote:
> Hello,
> We use Ceph + OpenStack in our private cloud. Our cluster has 5 mons and
> 800 OSDs, with a capacity of about 1 PB, and it runs about 700 VMs and
> 1100 volumes. Recently we increased our pg_num; now the cluster has about
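For reference, pg_num is raised per pool, and pgp_num is then raised to match so that the new placement groups actually take part in data placement. A minimal sketch (the pool name 'volumes' and the target of 4096 are placeholders, not values taken from this thread):

    # Increase the number of placement groups for a pool
    ceph osd pool set volumes pg_num 4096

    # Raise pgp_num to match; this is the step that triggers the actual rebalance
    ceph osd pool set volumes pgp_num 4096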

Re: [ceph-users] ceph-mon always election when change crushmap in firefly

2015-09-23 Thread Michael Kidd
Hello Alexander, One other point on your email: you indicate that you want each OSD to carry ~100 PGs, but depending on your pool sizes, it seems you may have forgotten about the additional PG copies created by replication itself. Assuming 3x replication in your environment: 70,000 * 3
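Making that arithmetic explicit with the 800 OSDs mentioned earlier in the thread: 70,000 PGs at 3x replication is about 210,000 PG copies in total, i.e. roughly 262 per OSD rather than the targeted ~100. For example:

    # 70,000 PGs * 3 replicas spread across 800 OSDs
    echo $(( 70000 * 3 / 800 ))
    # prints 262 (PG copies per OSD)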