Re: [ceph-users] ceph balancer: further optimizations?

2018-08-22 Thread Stefan Priebe - Profihost AG
On 21.08.2018 at 17:28, Gregory Farnum wrote: > You should be able to create issues now; we had a misconfiguration in > the tracker following the recent spam attack. > -Greg > > On Tue, Aug 21, 2018 at 3:07 AM, Stefan Priebe - Profihost AG > wrote: >> >> On 21.08.2018 at 12:03, Stefan

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Gregory Farnum
You should be able to create issues now; we had a misconfiguration in the tracker following the recent spam attack. -Greg On Tue, Aug 21, 2018 at 3:07 AM, Stefan Priebe - Profihost AG wrote: > > On 21.08.2018 at 12:03, Stefan Priebe - Profihost AG wrote: >> >> On 21.08.2018 at 11:56, Dan

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Stefan Priebe - Profihost AG
On 21.08.2018 at 12:03, Stefan Priebe - Profihost AG wrote: > > On 21.08.2018 at 11:56, Dan van der Ster wrote: >> On Tue, Aug 21, 2018 at 11:54 AM Stefan Priebe - Profihost AG >> wrote: >>> >>> On 21.08.2018 at 11:47, Dan van der Ster wrote: On Mon, Aug 20, 2018 at 10:45 PM Stefan

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Stefan Priebe - Profihost AG
On 21.08.2018 at 11:56, Dan van der Ster wrote: > On Tue, Aug 21, 2018 at 11:54 AM Stefan Priebe - Profihost AG > wrote: >> >> On 21.08.2018 at 11:47, Dan van der Ster wrote: >>> On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG >>> wrote: On 20.08.2018 at 22:38

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Dan van der Ster
On Tue, Aug 21, 2018 at 11:54 AM Stefan Priebe - Profihost AG wrote: > > On 21.08.2018 at 11:47, Dan van der Ster wrote: > > On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG > > wrote: > >> > >> > >> On 20.08.2018 at 22:38, Dan van der Ster wrote: > >>> On Mon, Aug 20, 2018 at

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Stefan Priebe - Profihost AG
On 21.08.2018 at 11:47, Dan van der Ster wrote: > On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG > wrote: >> >> >> On 20.08.2018 at 22:38, Dan van der Ster wrote: >>> On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG >>> wrote: On 20.08.2018 at 21:52

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-21 Thread Dan van der Ster
On Mon, Aug 20, 2018 at 10:45 PM Stefan Priebe - Profihost AG wrote: > > > On 20.08.2018 at 22:38, Dan van der Ster wrote: > > On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG > > wrote: > >> > >> > >> On 20.08.2018 at 21:52, Sage Weil wrote: > >>> On Mon, 20 Aug 2018, Stefan

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-20 Thread Stefan Priebe - Profihost AG
On 20.08.2018 at 22:38, Dan van der Ster wrote: > On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG > wrote: >> >> >> On 20.08.2018 at 21:52, Sage Weil wrote: >>> On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote: Hello, since Loic seems to have left ceph

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-20 Thread David Turner
I didn't ask how many PGs per OSD; I asked how large your PGs are in comparison to your OSDs. For instance, my primary data pool in my home cluster has 10914GB of data in it and 256 PGs. That means each PG accounts for roughly 42GB of data. I'm using 5TB disks in this cluster. Each PG on an
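
[A minimal Python sketch of the arithmetic described above (pool data divided by PG count, then one PG expressed as a fraction of one disk). The function name and the rounding of a "5TB" disk to 5000GB are assumptions for the example, not anything from the thread.]

def pg_stats(pool_data_gb, pg_num, osd_size_gb):
    # Average amount of data held by one PG of this pool.
    pg_size_gb = pool_data_gb / pg_num
    # Fraction of one OSD that a single PG of this pool occupies.
    pg_pct_of_osd = 100.0 * pg_size_gb / osd_size_gb
    print(f"average PG size: {pg_size_gb:.1f} GB")
    print(f"one PG per OSD:  {pg_pct_of_osd:.2f} % of the disk")

# Numbers from the mail: 10914 GB in the pool, 256 PGs, 5 TB (~5000 GB) disks.
pg_stats(10914, 256, 5000)
# -> average PG size ~42.6 GB, i.e. each PG is roughly 0.85 % of one OSD.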

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-20 Thread Stefan Priebe - Profihost AG
On 20.08.2018 at 22:13, David Turner wrote: > You might just have too much data per PG.  If a single PG can account > for 4% of your OSD, then a 9% difference in used space on your OSDs is > caused by an OSD having only 2 more PGs than another OSD.  If you do > have very large PGs, increasing your

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-20 Thread Stefan Priebe - Profihost AG
On 20.08.2018 at 21:52, Sage Weil wrote: > On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote: >> Hello, >> >> since Loic seems to have left Ceph development and his wonderful crush >> optimization tool isn't working anymore, I'm trying to get a good >> distribution with the ceph balancer.
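
[Not from the thread, but a small hedged sketch of one way to measure how even the distribution actually is before and after a balancer run. It assumes `ceph osd df -f json` returns a top-level "nodes" list whose entries carry "name", "utilization" and "pgs" fields; verify those field names against your release before relying on it.]

import json
import statistics
import subprocess

def osd_utilization_spread():
    # Ask the cluster for per-OSD usage; field names assumed as noted above.
    raw = subprocess.check_output(["ceph", "osd", "df", "-f", "json"])
    nodes = json.loads(raw)["nodes"]
    utils = [n["utilization"] for n in nodes]
    print(f"min/max utilization: {min(utils):.1f} % / {max(utils):.1f} %")
    print(f"spread:              {max(utils) - min(utils):.1f} %")
    print(f"stddev:              {statistics.pstdev(utils):.2f}")
    # Show the five fullest OSDs and their PG counts.
    for n in sorted(nodes, key=lambda n: n["utilization"], reverse=True)[:5]:
        print(f"  {n['name']:>8}  {n['utilization']:5.1f} %  {n['pgs']} PGs")

if __name__ == "__main__":
    osd_utilization_spread()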

Re: [ceph-users] ceph balancer: further optimizations?

2018-08-20 Thread David Turner
You might just have too much data per PG. If a single PG can account for 4% of your OSD, then a 9% difference in used space between your OSDs can be caused by one OSD having only 2 more PGs than another. If you do have very large PGs, increasing your PG count in those pools should improve your data
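
[A minimal sketch of that argument with illustrative numbers only: the imbalance between two OSDs comes in steps of one PG, so the step size is whatever fraction of the disk one PG occupies, and splitting PGs shrinks that step.]

def imbalance_pct(pg_pct_of_osd, extra_pgs):
    # Extra used space (in % of the disk) on an OSD that holds `extra_pgs`
    # more PGs than a peer, when each PG occupies `pg_pct_of_osd` % of a disk.
    return pg_pct_of_osd * extra_pgs

# Large PGs: each PG ~4 % of the OSD, 2 extra PGs -> roughly 8-9 % more used.
print(imbalance_pct(4.0, 2))   # 8.0

# After quadrupling pg_num, each PG is ~1 % of the OSD: the same 2-PG
# difference now costs only ~2 % of used space.
print(imbalance_pct(1.0, 2))   # 2.0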