Re: [Nfs-ganesha-devel] A question about rpc requests maybe for Bill

2018-03-15 Thread Frank Filz
> On 3/15/18 7:57 PM, Frank Filz wrote:
> > NFS v4.1 has a max request size option for the session; I’m wondering if
> > there’s a way to get the size of a given request easily.
> >
> Depends on how that's defined.  Bytes following the header?  And on what you
> need to do with it.
> 
> It might be simplest to add a data length field to the struct svc_req, and
> set it during decode.

I just need to compare against the max request length set for a 4.1 session. I 
don't think it needs to be exact. I'm not sure if it's total request length or 
just the NFS portion.

Frank
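
A minimal sketch of the comparison being discussed, assuming the data-length field suggested above is recorded during decode; the names rq_msglen, svc_req_sz, and session_sz below are illustrative assumptions, not the actual ntirpc or Ganesha structures:

    /* Sketch only: field and struct names are assumed, not real API. */
    #include <stdbool.h>
    #include <stdint.h>

    struct svc_req_sz {                  /* stand-in for struct svc_req */
            uint32_t rq_msglen;          /* bytes recorded during decode */
    };

    struct session_sz {                  /* stand-in for the v4.1 session */
            uint32_t ca_maxrequestsize;  /* negotiated at CREATE_SESSION */
    };

    /* Rough check: does this request fit the session's negotiated limit?
     * It does not need to be exact; if the decoded size already exceeds
     * the limit, the request can be failed (e.g. NFS4ERR_REQ_TOO_BIG)
     * instead of being processed. */
    static bool request_fits_session(const struct svc_req_sz *req,
                                     const struct session_sz *session)
    {
            return req->rq_msglen <= session->ca_maxrequestsize;
    }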


Re: [Nfs-ganesha-devel] A question about rpc requests maybe for Bill

2018-03-15 Thread William Allen Simpson

On 3/15/18 7:57 PM, Frank Filz wrote:

> NFS v4.1 has a max request size option for the session; I’m wondering if
> there’s a way to get the size of a given request easily.


Depends on how that's defined.  Bytes following the header?  And on what you
need to do with it.

It might be simplest to add a data length field to the struct svc_req,
and set it during decode.


[Nfs-ganesha-devel] DSESS9002 and DSESS9003 test cases result in NFS4ERR_GRACE

2018-03-15 Thread Frank Filz
Does anyone know why an open issued well after Ganesha comes up is returning
NFS4ERR_GRACE?

 

I think it has something to do with the cid_reclaim_complete in the NFS v4.1
clientid.

 

Thanks

 

Frank
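
For what it's worth, an illustrative sketch of the kind of per-clientid check that could keep answering NFS4ERR_GRACE if the reclaim-complete state is involved; the field and helper names are assumptions for illustration, not Ganesha's actual code:

    /* Illustrative only -- not Ganesha's implementation. */
    #include <stdbool.h>

    #define NFS4_OK        0
    #define NFS4ERR_GRACE  10013

    struct v41_clientid {                /* stand-in for the v4.1 clientid */
            bool cid_reclaim_complete;   /* set when RECLAIM_COMPLETE arrives */
    };

    static int check_open_vs_grace(const struct v41_clientid *cid,
                                   bool server_in_grace)
    {
            if (!server_in_grace)
                    return NFS4_OK;      /* grace is over: OPEN should proceed */

            /* During grace, a v4.1 client that has already sent
             * RECLAIM_COMPLETE may be allowed non-reclaim opens;
             * anything else is refused with NFS4ERR_GRACE. */
            return cid->cid_reclaim_complete ? NFS4_OK : NFS4ERR_GRACE;
    }

If either the global grace state or the per-clientid flag is consulted incorrectly, an open issued well after startup would still land in the NFS4ERR_GRACE branch.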


[Nfs-ganesha-devel] A question about rpc requests maybe for Bill

2018-03-15 Thread Frank Filz
NFS v4.1 has a max request size option for the session; I'm wondering if
there's a way to get the size of a given request easily.

 

Thanks

 

Frank


Re: [Nfs-ganesha-devel] rpcping

2018-03-15 Thread Daniel Gryniewicz
100k is a much more accurate measurement.  I haven't gotten any
crashes since the fixes from yesterday, but I can keep trying.


On Thu, Mar 15, 2018 at 12:10 PM, William Allen Simpson
 wrote:
> On 3/15/18 10:23 AM, Daniel Gryniewicz wrote:
>>
>> Can you try again with a larger count, like 100k?  500 is still quite
>> small for a loop benchmark like this.
>>
> In the code, I commented that 500 is minimal.  I've done a pile of runs at
> 100, 200, and 300, and they perform roughly the same as 500.
>
> rpcping tcp localhost count=100 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 46812.8194, total 46812.8194
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 41285.4267, total 41285.4267
>
> 100k is a lot less (when it works).
>
> tests/rpcping tcp localhost -c 10
> rpcping tcp localhost count=10 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 15901.7190, total 15901.7190
> tests/rpcping tcp localhost -c 10
> rpcping tcp localhost count=10 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 15894.9971, total 15894.9971
>
> tests/rpcping tcp localhost -c 10 -t 2
> double free or corruption (out)
> Aborted (core dumped)
>
> tests/rpcping tcp localhost -c 10 -t 2
> double free or corruption (out)
> corrupted double-linked list (not small)
> Aborted (core dumped)
>
> Looks like we have a nice dump test case! ;)


Re: [Nfs-ganesha-devel] rpcping

2018-03-15 Thread William Allen Simpson

On 3/15/18 10:23 AM, Daniel Gryniewicz wrote:

> Can you try again with a larger count, like 100k?  500 is still quite
> small for a loop benchmark like this.


In the code, I commented that 500 is minimal.  I've done a pile of runs at
100, 200, and 300, and they perform roughly the same as 500.

rpcping tcp localhost count=100 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 46812.8194, total 46812.8194
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 41285.4267, total 41285.4267

100k is a lot less (when it works).

tests/rpcping tcp localhost -c 10
rpcping tcp localhost count=10 threads=1 workers=5 (port=2049 
program=13 version=3 procedure=0): mean 15901.7190, total 15901.7190
tests/rpcping tcp localhost -c 10
rpcping tcp localhost count=10 threads=1 workers=5 (port=2049 
program=13 version=3 procedure=0): mean 15894.9971, total 15894.9971

tests/rpcping tcp localhost -c 10 -t 2
double free or corruption (out)
Aborted (core dumped)

tests/rpcping tcp localhost -c 10 -t 2
double free or corruption (out)
corrupted double-linked list (not small)
Aborted (core dumped)

Looks like we have a nice dump test case! ;)


Re: [Nfs-ganesha-devel] rpcping

2018-03-15 Thread Daniel Gryniewicz
Can you try again with a larger count, like 100k?  500 is still quite
small for a loop benchmark like this.

Daniel

On Thu, Mar 15, 2018 at 9:02 AM, William Allen Simpson
 wrote:
> On 3/14/18 3:33 AM, William Allen Simpson wrote:
>>
>> rpcping tcp localhost threads=1 count=500 (port=2049 program=13
>> version=3 procedure=0): mean 51285.7754, total 51285.7754
>
>
> DanG pushed the latest code onto ntirpc this morning, and I'll submit a
> pullup for Ganesha later today.
>
> I've changed the calculations to be in the final loop, holding onto
> the hope that the original design of averaging each thread result
> might introduce quantization errors.  But it didn't significantly
> change the results.
>
> I've improved the pretty print a bit, now including the worker pool.
> The default 5 worker threads are each handling the incoming replies
> concurrently, so they hopefully keep working without a thread switch.
>
> Another thing I've noted is that the best result is almost always the
> first result after an idle period.  That's opposite of my expectations.
>
> Could it be that the Ganesha worker pool size of 200 (default) or 500
> (configured) is much too large, causing thread scheduler thrashing?
>
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 50989.4139, total 50989.4139
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 32562.0173, total 32562.0173
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 34479.7577, total 34479.7577
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 34070.8189, total 34070.8189
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 33861.2689, total 33861.2689
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 35843.8433, total 35843.8433
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 35367.2721, total 35367.2721
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 31642.2972, total 31642.2972
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 34738.4166, total 34738.4166
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 33211.7319, total 33211.7319
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 35000.5520, total 35000.5520
> rpcping tcp localhost count=500 threads=1 workers=5 (port=2049
> program=13 version=3 procedure=0): mean 36557.6578, total 36557.6578


Re: [Nfs-ganesha-devel] rpcping

2018-03-15 Thread William Allen Simpson

On 3/14/18 3:33 AM, William Allen Simpson wrote:

> rpcping tcp localhost threads=1 count=500 (port=2049 program=13 version=3
> procedure=0): mean 51285.7754, total 51285.7754


DanG pushed the latest code onto ntirpc this morning, and I'll submit a
pullup for Ganesha later today.

I've changed the calculations to be in the final loop, holding onto
the hope that the original design of averaging each thread result
might introduce quantization errors.  But it didn't significantly
change the results.
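
As a toy illustration of the two aggregation orders (not rpcping's actual code): averaging each thread's own rate versus summing raw counts and elapsed time and dividing once in a final loop.

    /* Toy comparison of the two strategies; names are illustrative. */
    #include <stdio.h>

    struct thread_result {
            double calls;     /* requests completed by this thread */
            double elapsed;   /* seconds this thread spent */
    };

    int main(void)
    {
            struct thread_result r[2] = { { 500, 0.0123 }, { 500, 0.0098 } };
            const int nthreads = 2;

            /* A: mean of per-thread rates (original design). */
            double mean_of_rates = 0.0;
            for (int i = 0; i < nthreads; i++)
                    mean_of_rates += r[i].calls / r[i].elapsed;
            mean_of_rates /= nthreads;

            /* B: one calculation in the final loop (current design). */
            double calls = 0.0, elapsed = 0.0;
            for (int i = 0; i < nthreads; i++) {
                    calls += r[i].calls;
                    elapsed += r[i].elapsed;
            }
            double overall_rate = calls / elapsed;

            printf("mean of rates %.4f, overall %.4f\n",
                   mean_of_rates, overall_rate);
            return 0;
    }

With a single thread the two calculations are identical, which is consistent with the results not changing for the threads=1 runs below.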

I've improved the pretty print a bit, now including the worker pool.
The default 5 worker threads are each handling the incoming replies
concurrently, so they hopefully keep working without a thread switch.

Another thing I've noted is that the best result is almost always the
first result after an idle period.  That's opposite of my expectations.

Could it be that the Ganesha worker pool size of 200 (default) or 500
(configured) is much too large, causing thread scheduler thrashing?

rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 50989.4139, total 50989.4139
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 32562.0173, total 32562.0173
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 34479.7577, total 34479.7577
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 34070.8189, total 34070.8189
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 33861.2689, total 33861.2689
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 35843.8433, total 35843.8433
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 35367.2721, total 35367.2721
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 31642.2972, total 31642.2972
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 34738.4166, total 34738.4166
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 33211.7319, total 33211.7319
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 35000.5520, total 35000.5520
rpcping tcp localhost count=500 threads=1 workers=5 (port=2049 program=13 
version=3 procedure=0): mean 36557.6578, total 36557.6578
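
If the worker-pool suspicion above is worth testing, the knob should be the ioq worker pool size in the server configuration. A sketch, assuming the parameter is RPC_Ioq_ThrdMax in NFS_CORE_PARAM (please double-check the exact name against the current config docs):

    NFS_CORE_PARAM
    {
            # Shrink the worker pool from the 200 default (500 in the
            # configuration mentioned above) and re-run rpcping to see
            # whether the mean changes.
            RPC_Ioq_ThrdMax = 16;
    }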


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: cmake/efence: fix USE_EFENCE

2018-03-15 Thread GerritHub
From Dominique Martinet:

Dominique Martinet has uploaded this change for review. ( https://review.gerrithub.io/403963 )


Change subject: cmake/efence: fix USE_EFENCE
..

cmake/efence: fix USE_EFENCE

find_package no longer works for some reason; switching to
find_library is simpler and just as good for our purpose.

Note that ganesha relies on malloc(0) to work, so efence users need to
set EF_ALLOW_MALLOC_0=1 before running ganesha.

Change-Id: I9b072455669b05e2896c3d5a3db40638860ab834
Signed-off-by: Dominique Martinet 
---
M src/CMakeLists.txt
1 file changed, 2 insertions(+), 2 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/63/403963/1
--
To view, visit https://review.gerrithub.io/403963
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: I9b072455669b05e2896c3d5a3db40638860ab834
Gerrit-Change-Number: 403963
Gerrit-PatchSet: 1
Gerrit-Owner: Dominique Martinet 
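
For anyone reproducing this locally, a sketch of the find_library approach the commit message describes; the variable names and the link step are assumptions, not the contents of the actual patch to src/CMakeLists.txt:

    # Sketch only -- not the actual patch.
    if(USE_EFENCE)
      find_library(EFENCE_LIBRARY efence)
      if(NOT EFENCE_LIBRARY)
        message(FATAL_ERROR "USE_EFENCE requested but libefence was not found")
      endif()
      list(APPEND SYSTEM_LIBRARIES ${EFENCE_LIBRARY})
    endif()

And, per the malloc(0) note above, run the daemon with the override set, e.g.:

    EF_ALLOW_MALLOC_0=1 ganesha.nfsd -F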


Re: [Nfs-ganesha-devel] intermittent malloc list corruption on shutdown in -dev.3

2018-03-15 Thread Dominique Martinet
Hi all,

I tracked this crash down and submitted a couple of patches here:
https://github.com/nfs-ganesha/ntirpc/pull/120

These are NOT good as they are: I'm fixing the obvious problem, but
removing the element isn't safe all the time as I wrote in the PR.

Bill, do you know how to fix that properly?

Thanks,
-- 
Dominique


[Nfs-ganesha-devel] Change in ffilz/nfs-ganesha[next]: Fix minor typo sace -> space

2018-03-15 Thread GerritHub
From supriti.si...@suse.com:

supriti.si...@suse.com has uploaded this change for review. ( https://review.gerrithub.io/403932 )


Change subject: Fix minor typo sace -> space
..

Fix minor typo sace -> space

Change-Id: Id7ece081e28cc2c7a3158e44694f587f73e8279d
Signed-off-by: Supriti Singh 
---
M src/Protocols/NFS/nfs_proto_tools.c
1 file changed, 12 insertions(+), 12 deletions(-)



  git pull ssh://review.gerrithub.io:29418/ffilz/nfs-ganesha 
refs/changes/32/403932/1
--
To view, visit https://review.gerrithub.io/403932
To unsubscribe, visit https://review.gerrithub.io/settings

Gerrit-Project: ffilz/nfs-ganesha
Gerrit-Branch: next
Gerrit-MessageType: newchange
Gerrit-Change-Id: Id7ece081e28cc2c7a3158e44694f587f73e8279d
Gerrit-Change-Number: 403932
Gerrit-PatchSet: 1
Gerrit-Owner: supriti.si...@suse.com
--
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot___
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel