Hi,
when the fio <https://linux.die.net/man/1/fio> utility is laying out the test files, which live on an NFS share exported from OmniOS or Oracle Solaris, I hit the following performance issue:
1) nfsd uses 100% CPU in %sys and the load average sits around 539 (2 x CPU E5-2640 v4).
*On OmniOS*
*mpstat*
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 2140 105 9032 1 5626 553 122 153 0 100 0 0
1 0 0 0 216 4 10739 0 5899 649 101 0 0 100 0 0
2 0 0 0 43 7 10123 0 5812 548 89 0 0 100 0 0
3 0 0 0 44 8 10914 0 6144 406 84 0 0 100 0 0
4 34 0 0 48 5 10296 11 5838 457 93 497 0 100 0 0
5 0 0 0 62 9 10962 0 6134 510 101 0 0 100 0 0
6 0 0 0 28 4 9795 0 5822 504 92 0 0 100 0 0
7 0 0 0 63 7 10253 0 5849 396 71 0 0 100 0 0
8 0 0 0 39 4 10802 0 6101 433 89 0 0 100 0 0
9 0 0 0 247 1 10827 0 6085 453 89 0 0 100 0 0
10 0 0 0 5519 5491 11996 7 7224 653 95 174 0 100 0 0
11 0 0 0 29 0 11751 0 7188 477 67 0 0 100 0 0
12 0 0 0 5506 5483 11358 0 6858 553 78 0 0 100 0 0
13 0 0 0 10359 10338 12360 2 7250 560 116 0 0 100 0 0
14 0 0 0 28 7 12111 1 7343 537 107 112 0 100 0 0
15 0 0 0 25 11 11631 0 6973 602 108 0 0 100 0 0
16 0 0 0 15 0 12593 0 7068 508 102 0 0 100 0 0
17 0 0 0 18 0 11793 0 6789 509 80 0 0 100 0 0
18 0 0 0 18 0 13015 0 7454 530 98 0 0 100 0 0
19 0 0 0 9 0 12665 0 7545 492 95 0 0 100 0 0
20 0 0 0 25 1 10194 0 5989 512 104 0 0 100 0 0
21 0 0 0 31 3 10586 0 6059 488 102 0 0 100 0 0
22 0 0 0 13 0 8859 0 5485 401 83 0 0 100 0 0
23 0 0 0 41 2 11082 0 6239 538 120 0 0 100 0 0
24 0 0 0 33 2 10489 0 6268 379 88 0 0 100 0 0
25 0 0 0 35 1 10679 0 6170 365 85 0 0 100 0 0
26 0 0 0 41 5 11206 0 6307 456 97 2 0 100 0 0
27 0 0 0 47 6 11504 0 6512 456 95 0 0 100 0 0
28 0 0 0 26 1 10720 0 6437 471 100 0 0 100 0 0
29 0 0 0 35 2 9323 0 5504 526 93 0 0 100 0 0
30 0 0 0 23 0 11625 0 7136 642 108 13 0 100 0 0
31 0 0 0 19 1 12473 0 7445 460 97 0 0 100 0 0
32 0 0 0 18 0 11247 0 7002 571 91 0 0 100 0 0
33 0 0 0 16 0 12166 0 7075 573 84 0 0 100 0 0
34 0 0 0 17 1 12565 0 7239 564 89 0 0 100 0 0
35 0 0 0 18 0 12968 0 7556 653 114 0 0 100 0 0
36 0 0 0 14 0 12600 0 7378 591 107 0 0 100 0 0
37 0 0 0 18 1 12207 0 7280 648 95 0 0 100 0 0
38 0 0 0 12 0 12191 0 7156 695 118 0 0 100 0 0
39 0 0 0 13 0 12089 0 7220 458 110 0 0 100 0 0
*prstat -mLc*
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
7563 daemon 0.0 6.8 0.0 0.0 0.0 0.0 38 55 432 0 0 0 nfsd/1511
7563 daemon 0.0 6.7 0.0 0.0 0.0 0.0 39 55 450 0 0 0 nfsd/577
7563 daemon 0.0 6.5 0.0 0.0 0.0 0.0 38 55 452 0 0 0 nfsd/1765
7563 daemon 0.0 6.4 0.0 0.0 0.0 0.0 38 56 437 0 0 0 nfsd/780
7563 daemon 0.0 6.4 0.0 0.0 0.0 0.0 38 56 439 0 0 0 nfsd/1633
7563 daemon 0.0 6.3 0.0 0.0 0.0 0.0 39 55 446 0 0 0 nfsd/1883
7563 daemon 0.0 6.1 0.0 0.0 0.0 0.0 37 57 433 0 0 0 nfsd/1737
7563 daemon 0.0 6.1 0.0 0.0 0.0 0.0 39 55 454 0 0 0 nfsd/2004
7563 daemon 0.0 6.1 0.0 0.0 0.0 0.0 38 56 448 0 0 0 nfsd/1994
7563 daemon 0.0 6.1 0.0 0.0 0.0 0.0 37 57 432 0 0 0 nfsd/1639
7563 daemon 0.0 6.0 0.0 0.0 0.0 0.0 38 56 431 0 0 0 nfsd/2104
7563 daemon 0.0 6.0 0.0 0.0 0.0 0.0 37 57 435 0 0 0 nfsd/1423
7563 daemon 0.0 6.0 0.0 0.0 0.0 0.0 39 55 439 1 0 0 nfsd/2088
7563 daemon 0.0 5.8 0.0 0.0 0.0 0.0 37 57 423 0 0 0 nfsd/1949
7563 daemon 0.0 5.7 0.0 0.0 0.0 0.0 37 57 428 0 0 0 nfsd/1313
Total: 47 processes, 1786 lwps, load averages: 602.25, 587.89, 507.80
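To see where those nfsd threads spend their system time, a kernel profiling one-liner like the following could help (a sketch; the 997 Hz rate and the 10 s window are arbitrary choices):

dtrace -n 'profile-997 /arg0 && execname == "nfsd"/ { @[stack()] = count(); } tick-10s { trunc(@, 20); exit(0); }'

This samples kernel stacks only while nfsd is on CPU in the kernel and prints the 20 hottest stacks after 10 seconds.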
root@zns2-n2:/usr/local/dtrace/zfs# fsstat zfs 1
new name name attr attr lookup rddir read read write write
file remov chng get set ops ops ops bytes ops bytes
1.27K 120 527 56.9M 252 1.08M 12.8K 140K 298M 28.3M 34.9G zfs
0 0 0 16.2K 0 0 0 0 0 8.11K 8.38K zfs
0 0 0 16.3K 0 1 0 0 0 8.16K 8.27K zfs
0 0 0 17.1K 0 0 0 0 0 8.52K 8.63K zfs
0 0 0 10.9K 0 0 0 0 0 5.46K 5.56K zfs
0 0 0 11.1K 0 0 0 0 0 5.55K 5.66K zfs
0 0 0 19.3K 0 0 0 0 0 9.7K 9.8K zfs
0 0 0 18.9K 0 0 0 0 0 9.43K 9.5K zfs
0 0 0 17.8K 0 4 0 0 0 8.89K 9.00K zfs
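The fsstat sample above shows roughly two attribute gets for every write (e.g. 16.2K attr get ops against 8.11K write ops), so it might be worth confirming the NFSv3 operation mix on the server while the layout is running. A simple before/after snapshot (a sketch; plain nfsstat -s should work on both OmniOS and Oracle Solaris):

nfsstat -s > /tmp/nfsstat.before
sleep 10
nfsstat -s > /tmp/nfsstat.after

Diffing the two snapshots shows which server-side operations (write, getattr, commit, ...) dominate during the file layout.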
root@zns2-n2:/usr/local/dtrace/zfs# zpool iostat pool5 1
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
pool5 72.0G 64.9T 0 3.68K 0 25.4M
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 140 0 561K
pool5 72.0G 64.9T 0 0 0 0
pool5 72.0G 64.9T 0 0 0 0
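The pool is nearly idle while the CPUs are pegged, and mpstat shows high csw and smtx per CPU, so kernel lock contention seems worth ruling out. Two standard lockstat invocations (a sketch; run as root for ~10 s while the problem is occurring):

lockstat sleep 10
lockstat -kIW -D 20 sleep 10

The first records lock contention events, the second profiles the kernel and reports the top 20 sampled functions.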
*On Client*
[root@localhost ~]# strace -f -c -p $(pidof fio)
Process 77296 attached with 2 threads
^CProcess 77296 detached
Process 77377 detached
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
84.06 66.789277 14472 4615 2307 futex
15.94 12.667985 1 8480084 pwrite
0.00 0.000149 149 1 1 restart_syscall
------ ----------- ----------- --------- --------- ----------------
100.00 79.457411 8484700 2308 total
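On the Linux client it may also help to see what RPC mix those pwrite calls turn into during the layout phase (a sketch; both tools ship with nfs-utils):

nfsstat -c -3
mountstats /mnt/a

Comparing write against getattr/access counts before and after a layout run shows how many extra metadata round trips each write is costing.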
*Steps to reproduce*
Export the filesystem from the OmniOS/Solaris box and mount it on the Linux host with the following options:
mount -t nfs -o rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,vers=3,timeo=600 $1 $2
then run the test:
fio --time_based --name=benchmark --size=800g --runtime=46820
--filename=/mnt/a/file_r1:/mnt/a/file_r2:/mnt/a/file_r3:/mnt/a/file_r4
--ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1
--verify=0 --verify_fatal=0 --numjobs=4 --rw=randread --rate_iops=10000
--blocksize=4k --group_reporting
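Laying the files out beforehand with a separate large-block sequential write job keeps the layout phase out of the timed run and makes it easier to observe in isolation, for example (a sketch; the 1 MiB block size is an arbitrary choice, file names as above):

fio --name=layout --rw=write --blocksize=1m --size=800g --end_fsync=1 --filename=/mnt/a/file_r1:/mnt/a/file_r2:/mnt/a/file_r3:/mnt/a/file_r4

If this large-block pre-write also pegs nfsd, the problem is independent of the 4k random-read job itself.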
--------------------------
Has anyone else faced this problem? What could be the reason?