Hi List,
If you write to a pool with 3x replication over 10GE, then it will
need to ship data 3 times over 10GE to finalize the write, so 350MB/s
sounds like a theoretical maximum in terms of a single writer.
Sorry, but Janne is wrong: it is the primary OSD's responsibility to write
to the secondary and tertiary OSDs.
http://docs.ceph.com/docs/jewel/_images/ditaa-54719cc959473e68a317f6578f9a2f0f3a8345ee.png
So the theoretical bandwidth on a 10Gb network is roughly 1GB/s, not a
third of that.
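To make the arithmetic explicit, here is a minimal Python sketch of the two models (the 80% efficiency factor is my assumption for protocol overhead, not a measured value from this thread):

```python
# Theoretical single-writer bandwidth over 10GbE with 3x replication.
LINE_RATE_GBPS = 10   # 10GbE link, gigabits per second
EFFECTIVE = 0.80      # assumed protocol/framing overhead factor

def client_bandwidth_mbs(client_side_copies=1):
    """Client-visible write bandwidth in MB/s.

    client_side_copies = 1 models primary-copy replication (Ceph):
    the client ships each write once, and the primary OSD fans it
    out to the replicas over the cluster network.
    client_side_copies = 3 models the (incorrect) assumption that
    the client must ship every replica itself.
    """
    raw_mbs = LINE_RATE_GBPS * 1000 / 8   # Gb/s -> MB/s (decimal)
    return raw_mbs * EFFECTIVE / client_side_copies

# Primary-copy replication: the whole link serves the writer.
print(round(client_bandwidth_mbs()))                       # ~1000 MB/s
# If the client pushed all three copies, only a third would remain.
print(round(client_bandwidth_mbs(client_side_copies=3)))   # ~333 MB/s
```

This matches the ~1GB/s the NFS server actually achieves on its local RBD below.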
And 1GB/s is what his NFS server is writing on its local RBD:
NFS server write bandwidth on his RBD is 1196MB/s.
The problem is that a remote NFS client of this RBD share only gets
roughly 1/5th of this 1GB/s bandwidth:
NFS client write bandwidth on the RBD export is only 233MB/s.
But when the share relies on a non-RBD, local disk, the client gets
839MB/s:
NFS client write bandwidth on a "local-server-disk" export is 839MB/s.
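As a quick sanity check on the ratios, a small Python sketch using only the numbers quoted in this thread:

```python
# Write bandwidths reported in this thread, in MB/s.
local_rbd_write = 1196   # NFS server writing to its own RBD
nfs_client_rbd = 233     # NFS client writing to the RBD export
nfs_client_disk = 839    # NFS client writing to a local-disk export

# Slowdown of the NFS-over-RBD path versus the server's own RBD write.
print(round(local_rbd_write / nfs_client_rbd, 1))   # ~5.1x
# Slowdown versus the same NFS path backed by a plain local disk.
print(round(nfs_client_disk / nfs_client_rbd, 1))   # ~3.6x
```

So the penalty is specific to the NFS-on-RBD combination, not to NFS itself.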
So the question is: why can't an NFS server relying on RBD storage offer
all the bandwidth it has access to itself?
Any experience is appreciated: what performance do you get with your
RBD-NFS exports?
Frederic
Janne Johansson <[email protected]> wrote on 18/06/18 15:07:
On Mon, 18 June 2018 at 14:55, Marc Boisis
<[email protected]> wrote:
Hi,
I want to export rbd over nfs in a 10Gb network. Server and Client
are DELL R620 with 10Gb nics.
NFS client write bandwidth on the rbd export is only 233MB/s.
My conclusion:
- rbd write performance is good
- nfs write performance is good
- nfs write on rbd performance is bad
If you write to a pool with 3x replication over 10GE, then it will
need to ship data 3 times over 10GE to finalize the write, so 350MB/s
sounds like a theoretical maximum in terms of a single writer.
--
May the most significant bit of your life be positive.
------------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com