[ceph-users] Re: Large size differences between pgs

2023-11-17 Thread Eugen Block

Hi,

if you could share some more info about your cluster, you might get a
better response. For example, 'ceph osd df tree' could be helpful to
get an impression of how many PGs you currently have per OSD. You can
also inspect the 'ceph pg dump' output and look at the "BYTES" column,
which tells you how large each PG is. It's not uncommon to have PG
sizes of 50 GB or more; one of our customers had 250 GB per PG because
they couldn't tell in advance how much data would be stored in their
pools, so the initial pg_num values didn't change for a while.
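
If it helps, here is one way to list the ten largest PGs (a rough
sketch: BYTES should be the seventh column of 'ceph pg dump' on recent
releases, but verify against the header on your version first):

  # print PG id and BYTES for each PG line, sort numerically by size
  ceph pg dump 2>/dev/null | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $1, $7}' | sort -k2 -n | tail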
Getting back to your cluster: assuming your PGs are around 25 GB each,
a balancer deviation of 1 PG would be expected to translate into a
difference of roughly 25 GB between OSDs. Depending on the current PG
distribution you could consider splitting the PGs (increasing pg_num),
but be aware of the risks of having too many PGs per OSD.
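
If you do decide to split, it would look something like this (the pool
name and target are placeholders; pick a sensible power of two, keep an
eye on the mon_max_pg_per_osd limit, and on releases before Nautilus
remember to raise pgp_num as well):

  ceph osd pool set <pool> pg_num 2048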


Regards,
Eugen

Quoting Miroslav Svoboda:

Namely, the problem I am trying to solve is that with such a large
cluster I will lose a lot of capacity as unused. I have the deviation
set to 1 in the balancer, that is, if I'm not mistaken, +-1 PG per
OSD, and on top of that there is the size dispersion between the
largest and smallest PGs on a single OSD host.

Svoboda Miroslav

 Original message 
From: Anthony D'Atri
Date: 15.11.23 21:54 (GMT+01:00)
To: Miroslav Svoboda
Subject: Re: [ceph-users] Large size differences between pgs

How are you determining PG size?

> On Nov 15, 2023, at 15:46, Miroslav Svoboda wrote:
>
> Hi, is it possible to decrease the large size differences between
> PGs? I have a 5 PB cluster and the difference between the smallest
> and biggest PGs is somewhere around 25 GB. Thanks, Svoboda Miroslav
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


