I've managed an RBD cluster that had all of its RBDs configured with 1M
objects each, and we filled the cluster to 75% full on 4TB drives.  Other
than the collection splitting (subfolder splitting, as I've called it
before) we didn't have any problems with object counts.
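For reference, that 1M-objects-per-image figure follows directly from the
image size and the per-object size. A rough sketch (the helper name is mine,
and it assumes RBD's default 4 MiB object size; real images can use a
different size via the image order set at creation time):

```python
# Sketch (not from the thread): estimate how many RADOS objects back a
# fully provisioned RBD image. RBD stripes an image into fixed-size
# objects, 4 MiB by default.

def rbd_object_count(image_size_bytes, object_size_bytes=4 * 2**20):
    """Ceiling division: number of objects needed to cover the image."""
    return -(-image_size_bytes // object_size_bytes)

# A 4 TiB image at the default 4 MiB object size works out to ~1M objects.
print(rbd_object_count(4 * 2**40))  # 1048576
```

Thin-provisioned images only materialize objects for written extents, so
this is an upper bound until the image is fully written.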

On Wed, Oct 11, 2017 at 9:47 AM Gregory Farnum <[email protected]> wrote:

> These limits unfortunately aren’t very well understood or studied right
> now. The biggest slowdown I’m aware of is that when using FileStore you see
> an impact as it starts to create more folders internally (this is the
> “collection splitting”) and requires more cached metadata for fast lookups.
>
> But that doesn’t apply in the same ways to BlueStore, which shouldn’t have
> any of those cliff edges that I’m aware of. :)
> -Greg
> On Tue, Oct 10, 2017 at 3:45 AM Alexander Kushnirenko <
> [email protected]> wrote:
>
>> Hi,
>>
>> Are there any recommendations on what the limit is at which OSD
>> performance starts to decline because of a large number of objects? Or
>> perhaps a procedure for finding this number (on Luminous)?  My
>> understanding is that the recommended object size is 10-100 MB, but is
>> there any performance hit from a large number of objects?  I have run
>> across a figure of about 1M objects; is that right?  We do not have a
>> dedicated SSD for the journal, and we use librados for I/O.
>>
>> Alexander.
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> [email protected]
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>