Re: [Nfs-ganesha-devel] rpcping profile

2018-03-25 Thread Matt Benjamin
With N=10 and num_calls=100, on Lemon, test_rbt averages 2.8M
reqs/s.  That's about half the rate when N=1, which I think is
expected.  If this is really an achievable rbt in-order
search-remove-insert retire rate when N is 10, my intuition is that
it's sufficiently fast not to be the bottleneck your result claims,
and I think it's necessary to understand why.
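
(For context, a minimal sketch of the duty cycle test_rbt is timing:
an in-order search-remove-insert loop over N outstanding keys.  This
is not test_rbt itself; it stands POSIX tsearch()/tfind()/tdelete()
in for ntirpc's opr_rbtree, so absolute rates differ, but the access
pattern is the same.)

    #include <search.h>
    #include <stdio.h>
    #include <time.h>

    static int cmpf(const void *a, const void *b)
    {
            long x = *(const long *)a, y = *(const long *)b;
            return (x > y) - (x < y);
    }

    int main(void)
    {
            enum { N = 10, CYCLES = 1000000 };
            long keys[N], i;
            void *root = NULL;
            struct timespec t0, t1;

            for (i = 0; i < N; i++) {       /* preload N in-order keys */
                    keys[i] = i;
                    tsearch(&keys[i], &root, cmpf);
            }
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (i = 0; i < CYCLES; i++) {
                    long *k = &keys[i % N];
                    if (tfind(k, &root, cmpf))       /* search */
                            tdelete(k, &root, cmpf); /* remove */
                    *k += N;                 /* keys stay ascending, like xids */
                    tsearch(k, &root, cmpf); /* insert */
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            printf("%.0f cycles/s\n", CYCLES /
                   ((t1.tv_sec - t0.tv_sec) +
                    (t1.tv_nsec - t0.tv_nsec) / 1e9));
            return 0;
    }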

Matt

On Sun, Mar 25, 2018 at 6:17 PM, Matt Benjamin wrote:
> 1 What is the peak size of outstanding calls?
>
> 1.1 If it is e.g. > 100k, is that correct?  As last week: why would a
> sensible client issue more than e.g. 1000 calls without seeing replies?
>
> 1.3 If outstanding calls is <= 1, why can test_rbt retire millions of
> duty cycles / s in that scenario?
>
> 2 What does the search workload look like when replies are mixed with calls?
> I.e., the bidirectional RPC this is intended for?
>
> 2.2 Hint: the xid distribution is not generally sorted; a client defines
> only its own issue order, not reply order nor peer xids; why is it safe to
> base reply matching around xids being in sorted order?
>
> Matt
>
> On Sun, Mar 25, 2018, 1:40 PM William Allen Simpson wrote:
>>
>> On 3/24/18 7:50 AM, William Allen Simpson wrote:
>> > Noting that the top problem is exactly my prediction by knowledge of
>> > the code:
>> >    clnt_req_callback() opr_rbtree_insert()
>> >
>> > The second is also exactly as expected:
>> >
>> >    svc_rqst_expire_insert() opr_rbtree_insert() svc_rqst_expire_cmpf()
>> >
>> > These are both inserted in ascending order, sorted in ascending order,
>> > and removed in ascending order.
>> >
>> > QED: rb_tree is a poor data structure for this purpose.
>>
>> I've replaced those 2 rbtrees with TAILQ, so that we are not
>> spending 49% of the time there anymore, and am now seeing:
>>
>> rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049
>> program=13 version=3 procedure=0): mean 151800.6287, total 151800.6287
>> rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049
>> program=13 version=3 procedure=0): mean 167828.8817, total 167828.8817
>>
>> This is probably good enough for now.  Time to move on to
>> more interesting things.



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309



[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: gtest: adjust linkage for nfs_core

2018-03-25 Thread GerritHub
From Matt Benjamin:

Matt Benjamin has uploaded this change for review. ( https://review.gerrithub.io/405191 )


Change subject: gtest: adjust linkage for nfs_core
..

gtest: adjust linkage for nfs_core

Change-Id: I8d9159feabd3488480748ca5646b2a1e2ca57385
Signed-off-by: Matt Benjamin 
---
M src/gtest/CMakeLists.txt
1 file changed, 7 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/91/405191/1
--
To view, visit https://review.gerrithub.io/405191
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I8d9159feabd3488480748ca5646b2a1e2ca57385
Gerrit-Change-Number: 405191
Gerrit-PatchSet: 1
Gerrit-Owner: Matt Benjamin 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: 9p. gtest: qualify export identifier in struct _9p_fid

2018-03-25 Thread GerritHub
From Matt Benjamin:

Matt Benjamin has uploaded this change for review. ( https://review.gerrithub.io/405190 )


Change subject: 9p. gtest: qualify export identifier in struct _9p_fid
..

9p. gtest: qualify export identifier in struct _9p_fid

Permit C++ compilation.

Change-Id: I5a4c3f5370aa90c6ac5c9b73f798feedd545517e
Signed-off-by: Matt Benjamin 
---
M src/Protocols/9P/9p_attach.c
M src/Protocols/9P/9p_getattr.c
M src/Protocols/9P/9p_link.c
M src/Protocols/9P/9p_proto_tools.c
M src/Protocols/9P/9p_rename.c
M src/Protocols/9P/9p_renameat.c
M src/Protocols/9P/9p_walk.c
M src/Protocols/9P/9p_xattrwalk.c
M src/include/9p.h
9 files changed, 23 insertions(+), 23 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/90/405190/1
--
To view, visit https://review.gerrithub.io/405190
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I5a4c3f5370aa90c6ac5c9b73f798feedd545517e
Gerrit-Change-Number: 405190
Gerrit-PatchSet: 1
Gerrit-Owner: Matt Benjamin 


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: test_rbt: time explicitly, default to 1M cycles

2018-03-25 Thread GerritHub
From Matt Benjamin:

Matt Benjamin has uploaded this change for review. ( https://review.gerrithub.io/405189 )


Change subject: test_rbt: time explicitly, default to 1M cycles
..

test_rbt: time explicitly, default to 1M cycles

Change-Id: Ic7b8dec4acf956d48836678c3953807c034493be
Signed-off-by: Matt Benjamin 
---
M src/gtest/test_rbt.cc
1 file changed, 28 insertions(+), 1 deletion(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/89/405189/1
-- 
To view, visit https://review.gerrithub.io/405189
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ic7b8dec4acf956d48836678c3953807c034493be
Gerrit-Change-Number: 405189
Gerrit-PatchSet: 1
Gerrit-Owner: Matt Benjamin 


Re: [Nfs-ganesha-devel] rpcping profile

2018-03-25 Thread Matt Benjamin
1 What is the peak size of outstanding calls?

1.1 If it is e.g. > 100k, is that correct?  As last week: why would a
sensible client issue more than e.g. 1000 calls without seeing replies?

1.3 If outstanding calls is <= 1, why can test_rbt retire millions of
duty cycles / s in that scenario?

2 What does the search workload look like when replies are mixed with
calls?  I.e., the bidirectional RPC this is intended for?

2.2 Hint: the xid distribution is not generally sorted; a client defines
only its own issue order, not reply order nor peer xids; why is it safe
to base reply matching around xids being in sorted order?  (See the
sketch below.)
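
A minimal sketch of that concern, with hypothetical structs (not
ntirpc's clnt_req): FIFO retire only matches when replies come back in
issue order, while matching on the xid key is order-independent.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/queue.h>

    struct call {
            uint32_t xid;
            TAILQ_ENTRY(call) q;
    };
    TAILQ_HEAD(callq, call);

    /* FIFO retire: correct only if replies arrive in issue order. */
    static struct call *match_fifo(struct callq *cq, uint32_t xid)
    {
            struct call *c = TAILQ_FIRST(cq);
            if (c && c->xid == xid) {
                    TAILQ_REMOVE(cq, c, q);
                    return c;
            }
            return NULL;    /* out-of-order reply: head mismatch */
    }

    /* Order-independent retire: linear scan; a keyed structure
     * (tree, hash) is what makes this O(log n) or O(1). */
    static struct call *match_scan(struct callq *cq, uint32_t xid)
    {
            struct call *c;
            TAILQ_FOREACH(c, cq, q)
                    if (c->xid == xid) {
                            TAILQ_REMOVE(cq, c, q);
                            return c;
                    }
            return NULL;
    }

    int main(void)
    {
            struct callq cq = TAILQ_HEAD_INITIALIZER(cq);
            struct call a = { .xid = 1 }, b = { .xid = 2 };
            TAILQ_INSERT_TAIL(&cq, &a, q);
            TAILQ_INSERT_TAIL(&cq, &b, q);
            /* reply for xid 2 arrives first -- legal on the wire */
            printf("fifo: %s\n", match_fifo(&cq, 2) ? "hit" : "miss");
            printf("scan: %s\n", match_scan(&cq, 2) ? "hit" : "miss");
            return 0;
    }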

Matt

On Sun, Mar 25, 2018, 1:40 PM William Allen Simpson
<william.allen.simp...@gmail.com> wrote:

> On 3/24/18 7:50 AM, William Allen Simpson wrote:
> > Noting that the top problem is exactly my prediction by knowledge of
> > the code:
> >    clnt_req_callback() opr_rbtree_insert()
> >
> > The second is also exactly as expected:
> >
> >    svc_rqst_expire_insert() opr_rbtree_insert() svc_rqst_expire_cmpf()
> >
> > These are both inserted in ascending order, sorted in ascending order,
> > and removed in ascending order.
> >
> > QED: rb_tree is a poor data structure for this purpose.
>
> I've replaced those 2 rbtrees with TAILQ, so that we are not
> spending 49% of the time there anymore, and am now seeing:
>
> rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 151800.6287, total 151800.6287
> rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 167828.8817, total 167828.8817
>
> This is probably good enough for now.  Time to move on to
> more interesting things.
>
>
> --
> Check out the vibrant tech community on one of the world's most
> engaging tech sites, Slashdot.org! http://sdm.link/slashdot
> ___
> Nfs-ganesha-devel mailing list
> Nfs-ganesha-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel
>


Re: [Nfs-ganesha-devel] rpcping comparison nfs-server

2018-03-25 Thread William Allen Simpson

On 3/23/18 1:30 PM, William Allen Simpson wrote:
> Ran some apples-to-apples comparisons today, V2.7-dev.5:

Without the client-side rbtrees, rpcping works a lot better:



Ganesha (worst, best):

rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 33950.1556, total 33950.1556
rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 43668.3435, total 43668.3435



rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 151800.6287, total 151800.6287
rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 167828.8817, total 167828.8817



Kernel (worst, best):

rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 46826.6383, total 46826.6383
rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 52915.1652, total 52915.1652



rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 175773.3986, total 175773.3986
rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 189168.4778, total 189168.4778



Re: [Nfs-ganesha-devel] rpcping profile

2018-03-25 Thread William Allen Simpson

On 3/24/18 7:50 AM, William Allen Simpson wrote:
> Noting that the top problem is exactly my prediction by knowledge of
> the code:
>
>    clnt_req_callback() opr_rbtree_insert()
>
> The second is also exactly as expected:
>
>    svc_rqst_expire_insert() opr_rbtree_insert() svc_rqst_expire_cmpf()
>
> These are both inserted in ascending order, sorted in ascending order,
> and removed in ascending order.
>
> QED: rb_tree is a poor data structure for this purpose.


I've replaced those 2 rbtrees with TAILQ, so that we are not
spending 49% of the time there anymore, and am now seeing:

rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 151800.6287, total 151800.6287
rpcping tcp localhost count=1000 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 167828.8817, total 167828.8817

This is probably good enough for now.  Time to move on to
more interesting things.
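
A minimal sketch of the shape of that change (hypothetical names, not
the actual ntirpc structs), assuming the strictly in-order insert and
retire pattern described above -- the assumption Matt questions
upthread for bidirectional RPC:

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/queue.h>

    struct pending {
            uint32_t xid;
            TAILQ_ENTRY(pending) q;
    };
    TAILQ_HEAD(pendq, pending);

    int main(void)
    {
            struct pendq pq = TAILQ_HEAD_INITIALIZER(pq);
            struct pending calls[4];
            int i;

            /* Issue: xids ascend, so tail insert keeps the queue
             * sorted for free -- O(1), no rebalancing. */
            for (i = 0; i < 4; i++) {
                    calls[i].xid = i + 1;
                    TAILQ_INSERT_TAIL(&pq, &calls[i], q);
            }

            /* Retire: if replies (or expirations) come back in issue
             * order, the match is always the head -- O(1) remove. */
            while (!TAILQ_EMPTY(&pq)) {
                    struct pending *p = TAILQ_FIRST(&pq);
                    TAILQ_REMOVE(&pq, p, q);
                    printf("retired xid %" PRIu32 "\n", p->xid);
            }
            return 0;
    }

The constant-factor win over rbtree insert/remove is exactly this
in-order case; whether the ordering assumption holds for bidirectional
RPC is the open question in this thread.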
