Re: [Nfs-ganesha-devel] rpcping

2018-03-12 Thread Matt Benjamin
That's certainly suggestive. I found it hard to believe the red-black tree performance could be that bad, at a loading of 10K items--even inserting, searching, and removing in-order. Then again, I never benchmarked the opr_rbtree code. If I understand correctly, we always insert records in xid
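
To put a rough number on that workload, here is a minimal, self-contained sketch that inserts, searches, and removes 10K monotonically increasing (xid-like) keys using POSIX tsearch()/tfind()/tdelete(), which glibc implements with a red-black tree. It is only a stand-in for a quick sanity check; it is not the opr_rbtree code or ntirpc's real call_replies workload.

/*
 * Rough stand-in benchmark: insert, search, and remove 10K keys in
 * ascending (xid-like) order using POSIX tsearch()/tfind()/tdelete().
 * glibc implements these with a red-black tree, so this only sanity-checks
 * the general cost of an in-order workload; it is NOT the opr_rbtree code.
 */
#include <search.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NKEYS 10000

static int cmp_u32(const void *a, const void *b)
{
	uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;
	return (x > y) - (x < y);
}

int main(void)
{
	static uint32_t keys[NKEYS];
	void *root = NULL;
	struct timespec t0, t1;
	int i;

	for (i = 0; i < NKEYS; i++)
		keys[i] = i;	/* monotonically increasing, like xids */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NKEYS; i++)
		(void)tsearch(&keys[i], &root, cmp_u32);	/* insert */
	for (i = 0; i < NKEYS; i++)
		(void)tfind(&keys[i], &root, cmp_u32);		/* search */
	for (i = 0; i < NKEYS; i++)
		(void)tdelete(&keys[i], &root, cmp_u32);	/* remove */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("insert+find+delete of %d in-order keys: %.3f ms\n", NKEYS,
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);
	return 0;
}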

[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: gtest: add rbt_test.cc

2018-03-12 Thread GerritHub
Matt Benjamin has uploaded this change for review. (https://review.gerrithub.io/403586 Change subject: gtest: add rbt_test.cc .. gtest: add rbt_test.cc Presently just provides an

[Nfs-ganesha-devel] WAVL tree

2018-03-12 Thread William Allen Simpson
New in 2015. https://en.wikipedia.org/wiki/WAVL_tree There's a C++ intrusive container implementation at: https://fuchsia.googlesource.com/zircon/+/master/system/ulib/fbl/include/fbl/intrusive_wavl_tree.h I've not found a standard C implementation yet.

[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: (nfs_dupreq) check CAN_BE_DUP before drc_type

2018-03-12 Thread GerritHub
william.allen.simp...@gmail.com has uploaded this change for review. (https://review.gerrithub.io/403575 Change subject: (nfs_dupreq) check CAN_BE_DUP before drc_type .. (nfs_dupreq)

Re: [Nfs-ganesha-devel] rpcping

2018-03-12 Thread William Allen Simpson
[These are with a Ganesha that doesn't dupreq cache the null operation.] Just how slow is this RB tree? Here's a comparison of 1000 entries versus 100 entries in ops per second: rpcping tcp localhost threads=5 count=1000 (port=2049 program=13 version=3 procedure=0): average 2963.2517,
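
For reference, here is a stripped-down sketch of what an rpcping-style probe does, written against the classic ONC RPC client API (clnt_create/clnt_call on NULLPROC) rather than ntirpc's actual rpcping. It is single-threaded and synchronous, so it measures NULL-call round-trip latency rather than the pipelined ops-per-second figures quoted above, and clnt_create still consults rpcbind for the port lookup; the program/version/host values are illustrative.

/*
 * Minimal rpcping-style probe (not the ntirpc rpcping tool): time a batch
 * of NULL-procedure calls to the NFSv3 program over TCP using the classic
 * ONC RPC client API.
 */
#include <rpc/rpc.h>
#include <stdio.h>
#include <time.h>

#define NFS_PROGRAM_NUM 100003UL
#define NFS_V3          3UL
#define NCALLS          1000

int main(int argc, char **argv)
{
	const char *host = (argc > 1) ? argv[1] : "localhost";
	struct timeval tout = { 25, 0 };
	struct timespec t0, t1;
	double secs;
	CLIENT *clnt;
	int i;

	clnt = clnt_create(host, NFS_PROGRAM_NUM, NFS_V3, "tcp");
	if (clnt == NULL) {
		clnt_pcreateerror(host);
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NCALLS; i++) {
		if (clnt_call(clnt, NULLPROC, (xdrproc_t)xdr_void, NULL,
			      (xdrproc_t)xdr_void, NULL, tout) != RPC_SUCCESS) {
			clnt_perror(clnt, "NULL call failed");
			break;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	if (i > 0 && secs > 0.0)
		printf("%d NULL calls in %.3f s (%.1f ops/s)\n",
		       i, secs, i / secs);

	clnt_destroy(clnt);
	return 0;
}

On Linux this builds against libtirpc (add its include path and link with -ltirpc).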

Re: [Nfs-ganesha-devel] rpcping

2018-03-12 Thread William Allen Simpson
One of the limiting factors in our Ganesha performance is that the NULL operation is going through the dupreq code. That can be easily fixed with a check that jumps to nocache. One of the limiting factors in our ntirpc performance seems to be the call_replies tree that stores the xid of calls
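
Here is a minimal sketch of that "check that jumps to nocache" idea, with stand-in types and flag names (this is not Ganesha's actual nfs_dupreq_start() code): the point is simply to test a cheap per-procedure cacheability flag before doing any DRC classification or tree work for the request.

/*
 * Sketch of the "jump to nocache" idea from the thread: test a cheap
 * CAN_BE_DUP-style dispatch flag before classifying the DRC and touching
 * any tree.  Types and names here are stand-ins, not Ganesha's code.
 */
#include <stdint.h>
#include <stdio.h>

#define CAN_BE_DUP 0x0001	/* stand-in for the dispatch-behaviour flag */

struct req_desc {
	uint32_t xid;
	uint32_t proc;		/* 0 == NULL procedure */
	uint32_t dispatch_behaviour;
};

enum dupreq_status { DUPREQ_NOCACHE, DUPREQ_CACHED };

static enum dupreq_status dupreq_start(const struct req_desc *req)
{
	/* Cheap test first: NULL and other never-cacheable ops skip DRC
	 * classification and tree insertion entirely. */
	if (!(req->dispatch_behaviour & CAN_BE_DUP))
		return DUPREQ_NOCACHE;

	/* ... otherwise: pick the DRC by transport, lock it, and insert
	 * the request keyed by xid/checksum (elided in this sketch). */
	return DUPREQ_CACHED;
}

int main(void)
{
	struct req_desc null_req = { .xid = 1, .proc = 0,
				     .dispatch_behaviour = 0 };
	struct req_desc write_req = { .xid = 2, .proc = 7,
				      .dispatch_behaviour = CAN_BE_DUP };

	printf("NULL req cached? %s\n",
	       dupreq_start(&null_req) == DUPREQ_CACHED ? "yes" : "no");
	printf("WRITE req cached? %s\n",
	       dupreq_start(&write_req) == DUPREQ_CACHED ? "yes" : "no");
	return 0;
}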

Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

2018-03-12 Thread sriram patil
The other way is that, instead of removing the fs_locations callback from object ops, we can add attrlist as one more parameter to the function, pull the attrs from mdcache, and pass them down to the FSAL. The FSAL can then parse the strings and fill in the fsroot and servers. On Mon 12 Mar, 2018, 8:02 PM Sriram
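
To make the shape of that concrete, here is a hypothetical sketch in which all struct and field names are invented for illustration (they are not the real FSAL/mdcache types): mdcache getattrs hands the attribute list down, and the FSAL callback parses the location string into fsroot and servers.

/*
 * Sketch of the proposal above: pass the attribute list into the
 * fs_locations callback so the FSAL can parse the referral string and
 * fill in fsroot/servers itself.  All names below are stand-ins.
 */
#include <stdio.h>
#include <string.h>

struct attrlist {			/* stand-in for the FSAL attrlist */
	const char *fs_locations_str;	/* e.g. "server1,server2:/export" */
};

struct fs_locations {			/* stand-in result structure */
	char servers[256];
	char fsroot[256];
};

/* Hypothetical callback shape: the attrs come from mdcache getattrs and
 * are handed down to the FSAL, which does the parsing. */
static int vfs_fs_locations(const struct attrlist *attrs,
			    struct fs_locations *out)
{
	const char *sep;

	if (attrs->fs_locations_str == NULL)
		return -1;

	sep = strchr(attrs->fs_locations_str, ':');
	if (sep == NULL)
		return -1;

	/* Split "servers:fsroot" into the two output fields. */
	snprintf(out->servers, sizeof(out->servers), "%.*s",
		 (int)(sep - attrs->fs_locations_str),
		 attrs->fs_locations_str);
	snprintf(out->fsroot, sizeof(out->fsroot), "%s", sep + 1);
	return 0;
}

int main(void)
{
	struct attrlist attrs = {
		.fs_locations_str = "server1,server2:/export/path" };
	struct fs_locations loc;

	if (vfs_fs_locations(&attrs, &loc) == 0)
		printf("servers=%s fsroot=%s\n", loc.servers, loc.fsroot);
	return 0;
}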

[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: FSAL_CEPH: Rename struct export and handle to ceph_export and ceph_ha...

2018-03-12 Thread GerritHub
supriti.si...@suse.com has uploaded this change for review. (https://review.gerrithub.io/403533 Change subject: FSAL_CEPH: Rename struct export and handle to ceph_export and ceph_handle. ..

Re: [Nfs-ganesha-devel] About referrals in FSAL_VFS

2018-03-12 Thread Sriram Patil
Some questions about this implementation. We should be filling fs_locations in attrs as part of getattrs now; this can be done by the subfsal getattrs callback so that subfsals have control over fs_locations. But, with this, we may not require the fs_locations callback in the obj_ops. Is

Re: [Nfs-ganesha-devel] rpcping

2018-03-12 Thread William Allen Simpson
[top post] Matt produced a new tcp-only variant that skipped rpcbind. I tried it, and immediately got crashes. So I've pushed out a few bug fixes. With my fixes, here are the results on my desktop. First and foremost, I compared with my prior results against rpcbind, and they were