Re: [ceph-users] PG size distribution

2015-06-02 Thread Daniel Maraio
Hello, Thank you for the feedback, Jan, much appreciated! I won't post the whole tree as it is rather long, but here is an example of one of our hosts. All of the OSDs and hosts are weighted the same, with the exception of a host that is missing an OSD due to a broken backplane. We are only
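
As an aside, one way to quantify how uneven the placement actually is, rather than eyeballing disk usage, is to tally PGs per OSD from a `ceph pg dump` capture. A rough Python sketch follows; it assumes a JSON dump saved to pg_dump.json and the Hammer-era field names ("pg_stats" list with an "up" set per PG), which may need adjusting for other releases.

    # Tally how many PGs land on each OSD from a saved
    # `ceph pg dump --format json` capture (pg_dump.json).
    # Field names ("pg_stats", "up") are assumed from the
    # Hammer-era output format.
    import json
    from collections import Counter

    with open("pg_dump.json") as f:
        dump = json.load(f)

    per_osd = Counter()
    for pg in dump["pg_stats"]:
        for osd in pg["up"]:          # OSDs currently holding this PG
            per_osd[osd] += 1

    counts = sorted(per_osd.values())
    print("min/median/max PGs per OSD:",
          counts[0], counts[len(counts) // 2], counts[-1])

A large spread between the min and max here usually tracks closely with the spread in OSD fullness.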

[ceph-users] PG size distribution

2015-06-02 Thread Daniel Maraio
Hello, I have some questions about the size of my placement groups and how I can get a more even distribution. We currently have 160 2TB OSDs across 20 chassis. We have 133TB used in our radosgw pool with a replica size of 2. We want to move to 3 replicas but are concerned we may fill up
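
For a rough sense of the headroom involved in the size 2 to size 3 move, here is a back-of-the-envelope sketch in Python. It assumes the quoted 133TB is raw usage at replica size 2 rather than logical data, which the post does not spell out; if it is logical data, the projected figure roughly doubles.

    # Back-of-the-envelope capacity check for the size 2 -> 3 move.
    # Assumption: the 133 TB figure is raw usage at replica size 2
    # (i.e. ~66.5 TB of logical data).
    total_raw_tb = 160 * 2.0          # 160 x 2 TB OSDs
    used_raw_tb  = 133.0              # reported usage at size 2
    logical_tb   = used_raw_tb / 2    # strip the 2x replication
    raw_at_size3 = logical_tb * 3     # ~199.5 TB raw at size 3

    print("avg fill at size 3: %.0f%%" % (100 * raw_at_size3 / total_raw_tb))
    # ~62% on average, so an uneven PG distribution can still push
    # individual OSDs toward the near-full / full ratios.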

Re: [ceph-users] PG size distribution

2015-06-02 Thread Jan Schermer
Post the output from your “ceph osd tree”. We were in a similar situation: some of the OSDs were quite full while others had 50% free. This is exactly why we increased the number of PGs, and it helped to some degree. Are all your hosts the same size? Does your CRUSH map select a host in the end?
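
For reference, the usual rule of thumb from the Ceph documentation of that period is roughly 100 PGs per OSD, divided by the pool's replica count and rounded up to the next power of two. A minimal sketch, using the 160-OSD figure from the original post:

    # Rough PG-count heuristic: ~100 PGs per OSD, divided by the
    # pool's replica count, rounded up to the next power of two.
    def suggested_pg_num(num_osds, pool_size, pgs_per_osd=100):
        raw = num_osds * pgs_per_osd / pool_size
        pg_num = 1
        while pg_num < raw:
            pg_num *= 2
        return pg_num

    # 160 OSDs at size 3 -> 160 * 100 / 3 ~= 5333 -> 8192
    print(suggested_pg_num(160, 3))

This only sets the average; even with a sensible pg_num, per-OSD variance can remain, which is why checking the actual PG-per-OSD spread is still worthwhile.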