On 3/8/18 12:33 PM, William Allen Simpson wrote:
Still having no luck. Instead of relying on RPC itself, I checked
what Ganesha registers, and tried some of those programs.
Without running Ganesha, rpcinfo reports only the portmapper services by
default on my machine. rpcping can talk to it via localhost (but not via
the 127.0.0.1 loopback).
bill@simpson91:~/rdma/build_ganesha$ rpcinfo
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
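The address column in that listing is the RPC "universal address" (uaddr)
format, where the last two dot-separated octets encode the port as
hi * 256 + lo. A quick sketch to decode them (the helper name is mine,
not part of any rpc tooling):

```python
def uaddr_port(uaddr):
    """Decode the port from an RPC universal address,
    e.g. '0.0.0.0.0.111' -> 111.

    The last two dot-separated octets are the high and low
    bytes of the port number; everything before them is the
    host address (IPv4 or IPv6)."""
    parts = uaddr.rsplit(".", 2)
    hi, lo = int(parts[1]), int(parts[2])
    return hi * 256 + lo

print(uaddr_port("0.0.0.0.0.111"))  # portmapper on port 111
print(uaddr_port("0.0.0.0.8.1"))    # nfs on port 2049
```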
TCP works. UDP with the same parameters hangs forever.
tests/rpcping tcp localhost 1 1000 100000 4
rpcping tcp localhost threads=1 count=1000 (program=100000 version=4
procedure=0): 50000.0000, total 50000.0000
tests/rpcping tcp localhost 1 10000 100000 4
rpcping tcp localhost threads=1 count=10000 (program=100000 version=4
procedure=0): 17543.8596, total 17543.8596
tests/rpcping tcp localhost 1 100000 100000 4
^C
What's interesting to me is that 1,000 async calls have much
better throughput (calls per second) than 10,000. Hard to
say where the bottleneck is without profiling.
100,000 async calls bogs down for so long that I gave up. Same
with 2 threads at 10,000 -- and 3 threads until the count dropped to 100.
tests/rpcping tcp localhost 2 1000 100000 4
rpcping tcp localhost threads=2 count=1000 (program=100000 version=4
procedure=0): 8333.3333, total 16666.6667
tests/rpcping tcp localhost 2 10000 100000 4
^C
tests/rpcping tcp localhost 3 1000 100000 4
^C
tests/rpcping tcp localhost 3 500 100000 4
^C
tests/rpcping tcp localhost 3 100 100000 4
rpcping tcp localhost threads=3 count=100 (program=100000 version=4
procedure=0): 6666.6667, total 20000.0000
tests/rpcping tcp localhost 5 100 100000 4
rpcping tcp localhost threads=5 count=100 (program=100000 version=4
procedure=0): 10000.0000, total 50000.0000
tests/rpcping tcp localhost 7 100 100000 4
rpcping tcp localhost threads=7 count=100 (program=100000 version=4
procedure=0): 8571.4286, total 60000.0000
tests/rpcping tcp localhost 10 100 100000 4
rpcping tcp localhost threads=10 count=100 (program=100000 version=4
procedure=0): 7000.0000, total 70000.0000
tests/rpcping tcp localhost 15 100 100000 4
rpcping tcp localhost threads=15 count=100 (program=100000 version=4
procedure=0): 5666.6667, total 85000.0000
tests/rpcping tcp localhost 20 100 100000 4
rpcping tcp localhost threads=20 count=100 (program=100000 version=4
procedure=0): 3750.0000, total 75000.0000
tests/rpcping tcp localhost 25 100 100000 4
rpcping tcp localhost threads=25 count=100 (program=100000 version=4
procedure=0): 2420.0000, total 60500.0000
Note that 5 threads at 100 calls catches up to 1 thread at 1,000
(50,000 total calls per second in both cases). So the bottleneck is
probably in ntirpc. That seems validated by the 7- to 25-thread runs:
portmapper will handle more requests (with diminishing returns), but
ntirpc cannot handle more results (on the same thread).
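Tabulating the per-thread and total rates from the runs above makes the
saturation visible. A small sketch (the numbers are copied from the
rpcping output above, nothing is measured here):

```python
# (threads, per-thread calls/sec, total calls/sec) copied from the rpcping runs
runs = [
    (1, 50000.0000, 50000.0000),   # count=1000
    (3,  6666.6667, 20000.0000),   # count=100 from here on
    (5, 10000.0000, 50000.0000),
    (7,  8571.4286, 60000.0000),
    (10, 7000.0000, 70000.0000),
    (15, 5666.6667, 85000.0000),
    (20, 3750.0000, 75000.0000),
    (25, 2420.0000, 60500.0000),
]

for threads, per_thread, total in runs:
    # The "total" column is just threads * per-thread average rate,
    # so the interesting trend is that the per-thread rate falls as
    # threads rise, and the aggregate peaks around 15 threads before
    # declining outright.
    assert abs(per_thread * threads - total) < 1.0
    print(f"{threads:2d} threads: {total:9.1f} calls/s total")
```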
Oh well, running against nfs-ganesha still doesn't work.
tests/rpcping tcp localhost 1 10 100003 4
clnt_ncreate failed: RPC: Unknown protocol
tests/rpcping tcp localhost 1 10 100003 3
clnt_ncreate failed: RPC: Unknown protocol
But the program is registered in the rpcinfo output:
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
100003 3 udp 0.0.0.0.8.1 nfs superuser
100003 3 udp6 ::ffff:0.0.0.0.8.1 nfs superuser
100003 3 tcp 0.0.0.0.8.1 nfs superuser
100003 3 tcp6 ::ffff:0.0.0.0.8.1 nfs superuser
100003 4 udp 0.0.0.0.8.1 nfs superuser
100003 4 udp6 ::ffff:0.0.0.0.8.1 nfs superuser
100003 4 tcp 0.0.0.0.8.1 nfs superuser
100003 4 tcp6 ::ffff:0.0.0.0.8.1 nfs superuser
100005 1 udp 0.0.0.0.193.2 mountd superuser
100005 1 udp6 ::ffff:0.0.0.0.193.2 mountd superuser
100005 1 tcp 0.0.0.0.173.189 mountd superuser
100005 1 tcp6 ::ffff:0.0.0.0.173.189 mountd superuser
100005 3 udp 0.0.0.0.193.2 mountd superuser
100005 3 udp6 ::ffff:0.0.0.0.193.2 mountd superuser
100005 3 tcp 0.0.0.0.173.189 mountd superuser
100005 3 tcp6 ::ffff:0.0.0.0.173.189 mountd superuser
100021 4 udp 0.0.0.0.188.141 nlockmgr superuser
100021 4 udp6 ::ffff:0.0.0.0.188.141 nlockmgr superuser
100021 4 tcp 0.0.0.0.129.57 nlockmgr superuser
100021 4 tcp6 ::ffff:0.0.0.0.129.57 nlockmgr superuser
100011 1 udp 0.0.0.0.3.107 rquotad superuser
100011 1 udp6 ::ffff:0.0.0.0.3.107 rquotad superuser
100011 1 tcp 0.0.0.0.3.107 rquotad superuser
100011 1 tcp6 ::ffff:0.0.0.0.3.107 rquotad superuser
100011 2 udp 0.0.0.0.3.107 rquotad superuser
100011 2 udp6 ::ffff:0.0.0.0.3.107 rquotad superuser
100011 2 tcp 0.0.0.0.3.107 rquotad superuser
100011 2 tcp6 ::ffff:0.0.0.0.3.107 rquotad superuser
_______________________________________________
Nfs-ganesha-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel