Hi Sage,
As Lars mentioned, at SUSE we use ganesha 2.5.2/luminous. We did a preliminary
performance comparison of the cephfs client and the nfs-ganesha client; I have
attached the results. The numbers are aggregate bandwidth over 10 clients.
1. Test Setup:
We use fio to read/write to a single 5GB file per thread for 300 seconds. A
single job (represented on the x-axis) is of
type {number_of_worker_threads}rw_{block_size}_{op}, where (a job sketch is
shown after this list):
number_of_worker_threads: 1, 4, 8, 16
block_size: 4K, 64K, 1M, 4M, 8M
op: rw
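For reference, here is a minimal fio job sketch of what one such run looks
like. The ioengine, job name and mount path below are illustrative assumptions,
not our exact job files:

  [global]
  # assumed engine, not stated above
  ioengine=libaio
  direct=1
  # mixed read/write ("op" above)
  rw=rw
  # block size, swept over 4K..8M
  bs=64k
  # single 5GB file per thread
  size=5g
  runtime=300
  time_based
  # number of worker threads, swept over 1/4/8/16
  numjobs=4
  # cephfs or nfs-ganesha mount point (illustrative path)
  directory=/mnt/test

  [4rw_64K_rw]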
2. NFS-Ganesha configuration:
Parameters set (other than default):
1. Graceless = True
2. MaxRPCSendBufferSize/MaxRPCRecvBufferSize set to the maximum value
(a config sketch follows this list).
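For completeness, a sketch of how these non-default settings look in
ganesha.conf (block names as we understand them for 2.5; the buffer sizes shown
are only placeholders for "the maximum value", not verified limits):

  NFSV4 {
      # skip the grace period on startup
      Graceless = true;
  }

  NFS_CORE_PARAM {
      # placeholder values; we set both to the maximum allowed
      MaxRPCSendBufferSize = 1048576;
      MaxRPCRecvBufferSize = 1048576;
  }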
3. Observations:
- For a single thread (on each client) and 4k block size, the bandwidth is
around 45% of cephfs.
- As the number of threads increases, the performance drops. This could be
related to the nfs-ganesha parameter
"Dispatch_Max_Reqs_Xprt", which defaults to 512 (see the tuning sketch after
this list). Note that this parameter is relevant only for v2.5.
- We ran with the nfs-ganesha mdcache both enabled and disabled, but there were
no significant improvements with caching.
We are not sure, but it could be related to this issue:
https://github.com/nfs-ganesha/nfs-ganesha/issues/223
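If the per-transport request limit turns out to be the bottleneck, raising it
would be a one-line change in the NFS_CORE_PARAM block; the value below is just
an example, not something we have validated:

  NFS_CORE_PARAM {
      # default is 512; example value only, untested
      Dispatch_Max_Reqs_Xprt = 2048;
  }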
The results are still preliminary, and I expect that with proper tuning of
nfs-ganesha parameters they could be better.
Thanks,
Supriti
------
Supriti Singh
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
>>> Lars Marowsky-Bree <[email protected]> 11/09/17 11:07 AM >>>
On 2017-11-08T21:41:41, Sage Weil <[email protected]> wrote:
> Who is running nfs-ganesha's FSAL to export CephFS? What has your
> experience been?
>
> (We are working on building proper testing and support for this into
> Mimic, but the ganesha FSAL has been around for years.)
We use it currently, and it works, but let's not discuss the performance
;-)
How else do you want to build this into Mimic?
Regards,
Lars
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde
Attachment: NFS_Ganesha_vs_CephFS.ods
