These limits unfortunately aren't very well understood or studied right now. The biggest slowdown I'm aware of is with FileStore: as an OSD accumulates objects, it starts to create more folders internally (this is "collection splitting") and requires more cached metadata to do fast lookups.
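As a rough sketch of when collection splitting kicks in: FileStore's split point is commonly described as being derived from the `filestore_split_multiple` and `filestore_merge_threshold` tunables. The defaults and the formula below are assumptions to illustrate the idea; verify them against your own cluster's configuration (e.g. `ceph daemon osd.N config show`).

```python
# Sketch: estimate the FileStore subdirectory split point from the two
# tunables. Values shown are the commonly cited defaults, not read from
# any live cluster -- treat this as an illustration, not gospel.
filestore_split_multiple = 2       # assumed default
filestore_merge_threshold = 10     # assumed default

# A subdirectory is commonly described as splitting once it holds more
# than 16 * split_multiple * abs(merge_threshold) objects:
split_threshold = 16 * filestore_split_multiple * abs(filestore_merge_threshold)
print(split_threshold)  # 320 with these defaults
```

So with default settings, each leaf directory splits at a few hundred objects, which is why OSDs with many millions of small objects can hit these splits repeatedly as they fill.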
But that doesn't apply in the same way to BlueStore, which shouldn't have any of those cliff edges that I'm aware of. :)
-Greg

On Tue, Oct 10, 2017 at 3:45 AM Alexander Kushnirenko <[email protected]> wrote:
> Hi,
>
> Are there any recommendations on what the limit is at which OSD
> performance starts to decline because of a large number of objects? Or
> perhaps a procedure for finding this number (Luminous)? My understanding
> is that the recommended object size is 10-100 MB, but is there any
> performance hit due to a large number of objects? I ran across a figure
> of about 1M objects; is that right? We do not have a dedicated SSD for
> the journal, and we use librados for I/O.
>
> Alexander.
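One common way to stay inside the 10-100 MB per-object range mentioned above is to chunk large payloads client-side before writing each piece as its own RADOS object. The sketch below shows only the chunking arithmetic; the names (`chunk_names`, `CHUNK_SIZE`, `backup.img`) are made up for illustration, and the actual write would go through the librados bindings (e.g. an ioctx write per chunk), which is omitted here.

```python
# Sketch: split one logical file into ~64 MB pieces so each RADOS object
# stays inside the commonly recommended 10-100 MB range. Names here are
# illustrative; the librados write step itself is not shown.
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, chosen inside the 10-100 MB band

def chunk_names(base, total_size, chunk_size=CHUNK_SIZE):
    """Yield (object_name, offset, length) triples covering the file."""
    n = 0
    off = 0
    while off < total_size:
        length = min(chunk_size, total_size - off)
        yield (f"{base}.{n:08d}", off, length)
        n += 1
        off += length

# e.g. a 150 MB file becomes three objects: 64 MB + 64 MB + 22 MB
parts = list(chunk_names("backup.img", 150 * 1024 * 1024))
```

Keeping objects in this size band bounds the object count for a given amount of data, which is exactly the knob the question above is asking about.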
_______________________________________________ ceph-users mailing list [email protected] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
