Hi Wido,

Do you mean the TCP connection overhead is on the OSD nodes or on the Ceph
clients? If it is the TCP connections on the Ceph clients, I think the
maximum number will be no more than the number of OSDs, and regardless of
the chunk/stripe size the number of connections will stay the same once the
image is spread across all OSDs. Is that right?
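
To make my reasoning concrete, here is a rough back-of-the-envelope sketch
(plain Python with made-up numbers, not anything from librbd, so treat the
names and figures as assumptions) of why I think the connection count on the
client saturates at the number of OSDs:

    # Hypothetical sketch: a client holds at most one connection per OSD it
    # talks to, so the ceiling is min(number of objects, number of OSDs).
    def max_client_connections(image_bytes, object_bytes, num_osds):
        num_objects = -(-image_bytes // object_bytes)  # ceiling division
        return min(num_objects, num_osds)

    # Made-up example: a 100 GiB image on a 48-OSD cluster.
    print(max_client_connections(100 << 30, 4 << 20, 48))   # 4MB objects  -> 48
    print(max_client_connections(100 << 30, 64 << 20, 48))  # 64MB objects -> 48

If that is right, both object sizes end up talking to every OSD anyway,
which is why I would expect the connection count to be the same.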

In your 64MB stripe case, how is the performance of random I/O access? Do
you have any suggestion for a chunk/stripe size for a database block image?
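
For reference, and as a sanity check on my own understanding (please correct
me if I have this wrong), the order number is just the power-of-two exponent
of the object size in bytes, i.e. object_size = 2^order. The small snippet
below only illustrates that arithmetic:

    # order -> object size: object_size = 2 ** order bytes
    for order in (22, 23, 25):
        print(order, "->", (1 << order) >> 20, "MB")  # 22 -> 4 MB, 23 -> 8 MB, 25 -> 32 MB

If I read the rbd CLI correctly, that is the value the --order option (or
--object-size on newer clients) of 'rbd create' controls, but again, correct
me if that has changed.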

Best regards,



> Date: Fri, 17 Jun 2016 12:45:22 +0200 (CEST)
> From: Wido den Hollander <w...@42on.com>
> To: ceph-users@lists.ceph.com, Lazuardi Nasution
>         <mrxlazuar...@gmail.com>
> Subject: Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros
>         Cons
>
>
> > On 17 June 2016 at 12:12, Lazuardi Nasution <mrxlazuar...@gmail.com>
> > wrote:
> >
> >
> > Hi Mark,
> >
> > What overhead do you mean? Can it be negligible if I use a 4KB
> > stripe/chunk size (extreme, the same as the I/O size) to make sure that
> > all random I/O is spread across all OSDs?
> >
>
> Keep in mind that this involves opening additional TCP connections to the
> OSDs. That will come with some overhead, especially when new connections
> have to go through the handshake process.
>
> I am using 64MB stripes in a case with a customer. They only need
> sequential writes and reads at high speed. Works great for them.
>
> Wido
>
> > Anyway, I love coffee too :)
> >
> > Best regards,
> >
> >
> > > Date: Thu, 16 Jun 2016 04:01:37 -0500
> > > From: Mark Nelson <mnel...@redhat.com>
> > > To: ceph-users@lists.ceph.com
> > > Subject: Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros
> > >         Cons
> > >
> > >
> > >
> > > On 06/16/2016 03:54 AM, Mark Nelson wrote:
> > > > Hi,
> > > >
> > > > larger stripe size (to an extent) will generally improve large
> > > > sequential read and write performance.
> > >
> > > Oops, I should have had my coffee. I missed a sentence here. A larger
> > > stripe size will generally improve large sequential read and write
> > > performance. A smaller stripe size can provide some of the advantages
> > > you mention below, but there is overhead. Ok, fixed; now back to
> > > finding coffee. :)
> > >
> > > > There's overhead, though. It means more objects, which can slow
> > > > things down at the filestore level when PG splits occur, and it also
> > > > potentially means more inodes falling out of cache, longer syncfs,
> > > > etc. On the other hand, if using cache tiering, smaller objects mean
> > > > less data to promote, which can be a big win for small IO.
> > > >
> > > > Basically the answer is that there are pluses and minuses, and the
> > > > exact behavior will depend on your kernel configuration, hardware,
> > > > and use case. I think 4MB has been a fairly good default thus far
> > > > (that might change with bluestore), but tuning for a specific use
> > > > case may mean a smaller or larger size is better.
> > > >
> > > > Mark
> > > >
> > > > On 06/16/2016 03:20 AM, Lazuardi Nasution wrote:
> > > >> Hi,
> > > >>
> > > >> I'm looking for some pros and cons related to the RBD stripe/chunk
> > > >> size indicated by the image order number. The default is 4MB (order
> > > >> 22), but OpenStack uses 8MB (order 23) as its default. If we use a
> > > >> smaller size (lower order number), isn't there a better chance that
> > > >> the image objects are spread across OSDs and cached in the OSD
> > > >> nodes' RAM? If we use a bigger size (higher order number), isn't
> > > >> there a better chance that the image objects are cached as
> > > >> contiguous blocks and may have a read-ahead advantage? Please give
> > > >> your opinion and reasons.
> > > >>
> > > >> Best regards,
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
