Aside from the 10GbE vs 40GbE question, if you're planning to export an RBD
image over SMB/NFS I think you are going to struggle to reach anywhere near
1 GB/s in a single-threaded read. This is because even with readahead
cranked right up you're still only going to be hitting a handful of disks at
a time. There are a few threads on this list about sequential reads with the
kernel RBD client. I think CephFS would be more appropriate for your use
case.
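The "handful of disks" point can be sketched numerically. This is only a rough model: the 4 MiB object size is RBD's default, but the readahead window and per-disk throughput below are illustrative assumptions, not figures from the thread.

```python
# Back-of-the-envelope sketch of why a single-threaded sequential RBD read
# only keeps a handful of disks busy. The 4 MiB default RBD object size is
# real; the readahead window and per-disk rate are assumed for illustration.

RBD_OBJECT_SIZE = 4 * 1024 * 1024   # default RBD object size: 4 MiB
READAHEAD = 16 * 1024 * 1024        # a generously raised readahead window
PER_DISK_MBPS = 150                 # assumed sustained rate of one HDD, MB/s

# Readahead can only cover this many objects at once, and each object is
# served by a single primary OSD, i.e. roughly one disk.
objects_in_flight = READAHEAD // RBD_OBJECT_SIZE

# Optimistic ceiling for one sequential stream; real throughput is lower
# still, because the per-object requests are not perfectly parallel.
peak_mbps = objects_in_flight * PER_DISK_MBPS

print(objects_in_flight)  # 4 objects -> ~4 disks busy
print(peak_mbps)          # 600 MB/s ceiling, well short of 1 GB/s
```

Many parallel readers would spread across far more OSDs; the bottleneck described here is specific to a single sequential stream.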

On Wed, Jul 13, 2016 at 1:52 PM, Götz Reinicke - IT Koordinator <
[email protected]> wrote:

> On 13.07.16 at 14:27, Wido den Hollander wrote:
> >> On 13 July 2016 at 12:00, Götz Reinicke - IT Koordinator <
> [email protected]> wrote:
> >>
> >>
> >> On 13.07.16 at 11:47, Wido den Hollander wrote:
> >>>> On 13 July 2016 at 8:19, Götz Reinicke - IT Koordinator <
> [email protected]> wrote:
> >>>>
> >>>>
> >>>> Hi,
> >>>>
> >>>> Can anybody give some real-world feedback on what hardware
> >>>> (CPU/Cores/NIC) you use for a 40Gb (file)server (smb and nfs)? The
> Ceph
> >>>> Cluster will be mostly rbd images. S3 in the future, CephFS we will
> see :)
> >>>>
> >>>> Thanks for some feedback and hints! Regards . Götz
> >>>>
> >>> Why do you think you need 40Gb? That's some serious traffic to the
> OSDs and I doubt it's really needed.
> >>>
> >>> Latency-wise 40Gb isn't much better than 10Gb, so why not stick with
> that?
> >>>
> >>> It's also better to have more smaller nodes than a few big nodes with
> Ceph.
> >>>
> >>> Wido
> >>>
> >> Hi Wido,
> >>
> >> Maybe my post was misleading. The OSD nodes do have 10G; the fileserver
> >> in front, facing the clients/desktops, should have 40G.
> >>
> > Ah, the fileserver will re-export RBD over Samba? Any Xeon E5 CPU will do
> just fine, I think.
> True @re-export
> > Still, 40GbE is a lot of bandwidth!
> :) I know, but we have users who like to transfer e.g. raw movie
> footage; a normal project can quickly reach 1 TB, and they don't
> want to wait for hours ;). Others like to screen/stream raw 4K video
> footage, which is +- 10 Gb/second ... That's the challenge :)
>
> And yes, our Ceph cluster is well designed ... on paper ;) SSDs are
> considered, with lots of helpful feedback from the list!!
>
> I'm just trying to find Linux/Ceph users with 40Gb experience :)
>
>     cheers . Götz
>
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
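As a rough sanity check on the figures in the quoted thread: the 1 TB project size and the ~10 Gb/s raw 4K stream come from the email; the ~90% usable link efficiency is an assumption to account for protocol overhead, not a measured value.

```python
# Back-of-the-envelope check on the bandwidth figures discussed in the thread.
# Link efficiency of ~90% is an assumption covering protocol overhead.

TB = 10**12  # bytes

def transfer_seconds(size_bytes, link_gbps, efficiency=0.9):
    """Time to move size_bytes over a link of link_gbps gigabits per second."""
    usable_bytes_per_sec = link_gbps * 1e9 * efficiency / 8
    return size_bytes / usable_bytes_per_sec

print(round(transfer_seconds(TB, 10) / 60))  # ~15 min for 1 TB on 10GbE
print(round(transfer_seconds(TB, 40) / 60))  # ~4 min for 1 TB on 40GbE
# A single ~10 Gb/s raw 4K stream alone saturates a 10GbE link, which is
# the argument for 40GbE on the client-facing fileserver.
```

Note this only bounds the client-facing link; whether the fileserver can actually pull that much out of the cluster is the separate single-stream problem discussed at the top of the thread.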