I'm interested in this too. We should start testing next week at 1B+ objects,
and I would certainly like a recommendation for what config to start with.

We learned the hard way that not sharding the bucket index is very bad at scales like this.
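
For what it's worth, the knob we're planning to start from is
rgw_override_bucket_index_max_shards in ceph.conf. Below is a minimal
sketch of what we intend to test; the client section name and the value
of 64 are just our guesses, not a recommendation, and as far as I
understand the setting only affects buckets created after it is set:

    # ceph.conf on the radosgw hosts; adjust the client section name
    # to match your own gateway instance
    [client.radosgw.gateway]
    # 0 (the default) means an unsharded bucket index
    rgw_override_bucket_index_max_shards = 64
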
On Wed, Dec 16, 2015 at 2:06 PM Florian Haas <flor...@hastexo.com> wrote:

> Hi Ben & everyone,
>
> just following up on this one from July, as I don't think it ever got
> a reply here.
>
> On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines <bhi...@gmail.com> wrote:
> > Anyone have any data on optimal # of shards for a radosgw bucket index?
> >
> > We've had issues with bucket index contention with a few million+
> > objects in a single bucket, so I'm testing out the sharding.
> >
> > Perhaps at least one shard per OSD? Or, less? More?
>
> I'd like to make this more concrete: what about having several buckets
> each holding 2-4M objects, created on Hammer, with 64 index shards? Is
> that type of fill expected to bring radosgw performance down by a
> factor of 5, versus an unpopulated (empty) radosgw setup?
>
> Ben, you wrote elsewhere
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html)
> that you found approx. 900k objects to be the threshold where index
> sharding becomes necessary. Have you found that to be a reasonable
> rule of thumb, as in "try 1-2 shards per million objects in your most
> populous bucket"? Also, do you reckon that beyond that, more shards
> make things worse?
>
> > I noticed some discussion here regarding slow bucket listing with
> > ~200k objects --
> > http://cephnotes.ksperis.com/blog/2015/05/12/radosgw-big-index
> > -- bucket listing seems significantly impacted.
> >
> > But I'm more concerned about general object put (write) / object read
> > speed, since bucket listing is not something we need to do. I'm not
> > sure whether the index has to be completely read in order to write an
> > object into it?
>
> This is a question where I'm looking for an answer, too.
>
> Cheers,
> Florian
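
In case it's useful to anyone trying to apply the "shards per million
objects" rule of thumb above: per-bucket object counts can be pulled
from radosgw-admin bucket stats (the bucket name here is just a
placeholder):

    # object counts appear under usage -> rgw.main in the JSON output
    radosgw-admin bucket stats --bucket=my-big-bucket
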
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
