[ceph-users] Algorithm for default pg_count calculation

2015-07-27 Thread Konstantin Danilov
Hi all, I'm working on an algorithm to estimate PG count for a set of pools with minimal input from the user. The main target is OpenStack deployments. I know about ceph.com/pgcalc/, but I would like to write down the rules and get Python code. Can you comment on the following, please? Input: * pg_count has no
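[The preview cuts off before the rules themselves, but the pgcalc heuristic the post refers to is simple enough to sketch. Below is a minimal Python sketch of that publicly documented heuristic, not Konstantin's final algorithm; the pool weights, the target_pgs_per_osd value, and the suggest_pg_counts helper are illustrative assumptions.]

# Sketch of the pgcalc-style heuristic: distribute a per-OSD PG budget
# across pools by their expected share of the data, divide by replica
# count, and round up to a power of two (as ceph.com/pgcalc does).

def suggest_pg_counts(osd_count, pools, target_pgs_per_osd=100):
    """pools: {name: {"size": replica count, "weight": expected data share, 0..1}}"""
    counts = {}
    for name, p in pools.items():
        raw = osd_count * target_pgs_per_osd * p["weight"] / p["size"]
        # Round up to the next power of two.
        pg = 1
        while pg < raw:
            pg *= 2
        counts[name] = pg
    return counts

if __name__ == "__main__":
    # Hypothetical OpenStack-style pool layout; the weights are assumptions.
    pools = {
        "volumes": {"size": 3, "weight": 0.55},
        "vms":     {"size": 3, "weight": 0.25},
        "images":  {"size": 3, "weight": 0.15},
        "backups": {"size": 3, "weight": 0.05},
    }
    print(suggest_pg_counts(osd_count=30, pools=pools))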

[ceph-users] jewel ceph has PG always mapped to the same OSDs

2018-04-06 Thread Konstantin Danilov
Hi all, we have a strange issue on one cluster. One PG is mapped to a particular set of OSDs, say X, Y, and Z, no matter how we change the crush map. The whole picture is as follows: * This is ceph version 10.2.7; all monitors and OSDs have the same version * One PG eventually gets into
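[For readers hitting the same symptom, one low-effort way to verify that a PG's placement really stays pinned across crush edits is to compare its up and acting sets before and after each change. A minimal Python sketch, assuming the ceph CLI is on PATH and that `ceph pg map --format=json` exposes "up" and "acting" keys; the pgid shown is a placeholder.]

# Compare a PG's up/acting sets across crush map edits.
import json
import subprocess

def pg_mapping(pgid):
    out = subprocess.check_output(
        ["ceph", "pg", "map", pgid, "--format=json"])
    data = json.loads(out)
    return data["up"], data["acting"]

# "1.2a" is a placeholder pgid; substitute the stuck PG.
up, acting = pg_mapping("1.2a")
print("up:", up, "acting:", acting)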

Re: [ceph-users] jewel ceph has PG always mapped to the same OSDs

2018-04-06 Thread Konstantin Danilov
first? > > On Fri, Apr 6, 2018 at 2:29 PM Konstantin Danilov <kdani...@mirantis.com> > wrote: > >> Hi all, we have a strange issue on one cluster. >> >> One PG is mapped to a particular set of OSDs, say X, Y, and Z, no matter >> how >> we cha

Re: [ceph-users] jewel ceph has PG always mapped to the same OSDs

2018-04-07 Thread Konstantin Danilov
Sat, Apr 7, 2018 at 12:42 AM, Konstantin Danilov <kdani...@mirantis.com> wrote: > David, > >> What happens when you deep-scrub this PG? > we haven't tried to deep-scrub it yet, will try. > >> What do the OSD logs show for any lines involving the problem PGs? > Nothing speci
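[For completeness, the deep-scrub check David asked about maps to a single command. A sketch in the same subprocess style as above; the pgid and log path are placeholders, not values from the thread.]

# Trigger the suggested deep scrub, then watch the primary OSD's log
# for lines mentioning the PG.
import subprocess

pgid = "1.2a"  # placeholder: substitute the stuck PG
subprocess.check_call(["ceph", "pg", "deep-scrub", pgid])

# Then, on the primary OSD's host (osd id X is an assumption), something like:
#   grep "1.2a" /var/log/ceph/ceph-osd.X.log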