Indeed, scaling up the number of PGs per OSD may be needed for larger HDDs.

Increasing the number of PGs 5- or 10-fold would have an adverse impact on
OSD peering. What are the practical limits on the number of PGs per OSD with
the default settings, or should we tune some Ceph defaults to
accommodate 1000 or more PGs per OSD?
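
(For context: as far as I know, the settings that cap this today are
mon_max_pg_per_osd (default 250) and, I believe, osd_max_pg_per_osd_hard_ratio
(default 3), plus mon_target_pg_per_osd (default 100) used by the pg
autoscaler. A purely illustrative sketch of raising them -- the values below
are assumptions, not recommendations:

    ceph config set global mon_max_pg_per_osd 1000      # default 250
    ceph config set global mon_target_pg_per_osd 500    # autoscaler target, default 100

Whether peering, memory use and osdmap size stay manageable at such counts is
exactly the open question above.)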

best regards,

samuel



huxia...@horebdata.cn
 
From: Dan van der Ster
Date: 2021-03-13 14:28
To: Robert Sander
CC: Ceph Users
Subject: [ceph-users] Re: How big an OSD disk could be?
On Fri, Mar 12, 2021 at 6:35 PM Robert Sander
<r.san...@heinlein-support.de> wrote:
>
> On 12.03.21 at 18:30, huxia...@horebdata.cn wrote:
>
> > Any other aspects on the limits of bigger capacity hard disk drives?
>
> Recovery will take longer, increasing the risk of another failure during
> that time.
>
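 
(Purely illustrative arithmetic: refilling an 18 TB OSD at an assumed
aggregate backfill rate of ~100 MB/s takes about 180,000 s, i.e. roughly two
days, during which the affected PGs run with reduced redundancy.)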
 
Another limitation is that OSDs are expected to store roughly 100 PGs each
regardless of their size, so on larger drives those PGs will each need to
store many more objects, and therefore recovery, scrubbing, removal, listing,
etc. will all take longer and longer.
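 
(To put rough numbers on that: at ~100 PGs per OSD, a 4 TB OSD averages
~40 GB of data per PG, while an 18 TB OSD averages ~180 GB per PG, so each
backfill, deep scrub or PG listing touches several times more data. The drive
sizes here are just examples.)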
 
So perhaps we'll eventually need to change the OSD to allow for 500 or 1000
PGs per OSD (which also means that the number of PGs per cluster needs to
scale up too!)
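 
(Rule-of-thumb arithmetic, using the usual total PGs ~= OSDs x PGs-per-OSD /
replicas guideline: a 1000-OSD cluster at 100 PGs/OSD and 3x replication
needs roughly 33k PGs; at 500 PGs/OSD that becomes roughly 167k PGs, so
mon/mgr-side limits and per-pool pg_num choices would have to grow with it.)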
 
Cheers, Dan
 
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> http://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Mandatory disclosures per §35a GmbHG:
> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
> Managing Director: Peer Heinlein -- Registered office: Berlin
>
