On Oct 7, 2021, at 13:19, Md Hasanur Rashid via lustre-discuss <[email protected]> wrote:

Hello Everyone,

I am running the Filebench benchmark on my Lustre cluster. I set max_rpcs_in_flight 
to 1, and I verified both before and after the run that the value is indeed 1. 
However, when I check rpc_stats, it shows a much higher number of RPCs in flight.
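
(For reference, both the tunable and the statistics can be read from the client 
with lctl, e.g.:

    lctl get_param osc.*.max_rpcs_in_flight
    lctl get_param osc.*.rpc_stats

which is presumably how the output below was collected.)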

If by "much higher than 1" you mean "2", then yes, it appears that 2 RPCs were 
being processed concurrently on this OST most (95%) of the time.

That might happen if you have 2 clients/mountpoints writing to the same OST, it 
might be an off-by-one logic error allowing an extra RPC in flight, it might be 
intentional for some reason (e.g. to avoid deadlock, memory pressure, etc.), or 
it might be an accounting error in the statistics (e.g. counting the next RPC to 
be sent before the first one is marked finished).
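
To make the last possibility concrete, below is a minimal, self-contained sketch 
of that ordering (all names invented for the example, not taken from the Lustre 
code): the in-flight count for the next RPC is tallied into the histogram before 
the completed RPC has been subtracted, so the stats can record limit + 1 even 
though the limit itself is never exceeded on the wire.

/* Illustration only -- not the Lustre code; all names here are invented.
 * Shows how tallying the in-flight count for a new RPC before the
 * completed RPC has been subtracted records "limit + 1" in the stats. */
#include <stdio.h>

#define MAX_RPCS_IN_FLIGHT 1

static int rpcs_in_flight;   /* RPCs currently counted as active        */
static int queued = 3;       /* pretend three more RPCs are waiting     */
static int hist[8];          /* histogram: in-flight count at send time */

static void rpc_send(void)
{
        rpcs_in_flight++;
        hist[rpcs_in_flight]++;     /* tally at the moment of sending */
}

static void rpc_reply_received(void)
{
        /* Dispatch the next queued RPC while the completed one is still
         * counted as in flight ... */
        if (queued > 0 && rpcs_in_flight - 1 < MAX_RPCS_IN_FLIGHT) {
                queued--;
                rpc_send();         /* tallies rpcs_in_flight == 2 here */
        }
        /* ... and only afterwards account the completed RPC as done.  */
        rpcs_in_flight--;
}

int main(void)
{
        rpc_send();                         /* first RPC tallied as 1  */
        while (rpcs_in_flight > 0)
                rpc_reply_received();       /* later RPCs tallied as 2 */

        for (int i = 0; i < 8; i++)
                if (hist[i])
                        printf("%d rpcs in flight: %d\n", i, hist[i]);
        return 0;
}

With MAX_RPCS_IN_FLIGHT set to 1, this prints one sample in the "1" bucket and 
the remaining three in the "2" bucket, the same shape as the write column in the 
rpc_stats output below.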

The following is the output for one OSC, just for reference:

osc.hasanfs-OST0000-osc-ffff882fcf777000.rpc_stats=
snapshot_time:         1632483604.967291 (secs.usecs)
read RPCs in flight:  0
write RPCs in flight: 0
pending write pages:  0
pending read pages:   0

                        read                    write
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                       1 100 100   |          0   0   0
2:                       0   0 100   |          0   0   0
4:                       0   0 100   |          0   0   0
8:                       0   0 100   |          0   0   0
16:                      0   0 100   |          0   0   0
32:                      0   0 100   |          0   0   0
64:                      0   0 100   |          0   0   0
128:                     0   0 100   |          0   0   0
256:                     0   0 100   |       9508 100 100

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
0:                       0   0   0   |          0   0   0
1:                       1 100 100   |         10   0   0
2:                       0   0 100   |       9033  95  95
3:                       0   0 100   |        465   4 100

                        read                    write
offset                rpcs   % cum % |       rpcs   % cum %
0:                       1 100 100   |        725   7   7
1:                       0   0 100   |          0   0   7
2:                       0   0 100   |          0   0   7
4:                       0   0 100   |          0   0   7
8:                       0   0 100   |          0   0   7
16:                      0   0 100   |          0   0   7
32:                      0   0 100   |          0   0   7
64:                      0   0 100   |          0   0   7
128:                     0   0 100   |          0   0   7
256:                     0   0 100   |        718   7  15
512:                     0   0 100   |       1386  14  29
1024:                    0   0 100   |       2205  23  52
2048:                    0   0 100   |       1429  15  67
4096:                    0   0 100   |       1103  11  79
8192:                    0   0 100   |       1942  20 100

Can anyone please explain to me why the RPCs in flight shown in the rpc_stats 
could be higher than the max_rpcs_in_flight?

I do see similar behavior in the statistics on my home system, which has the 
default osc.*.max_rpcs_in_flight=8 but shows many cases of 9 RPCs in flight for 
both read and write, and a few cases of 10 or 11:

                        read                    write
rpcs in flight        rpcs   % cum % |       rpcs   % cum %
1:                     121   2   2   |      27831  93  93
2:                      23   0   3   |        108   0  93
3:                      22   0   3   |         19   0  93
4:                      24   0   4   |         15   0  93
5:                      19   0   5   |         10   0  93
6:                      26   0   5   |         13   0  93
7:                     176   4   9   |         39   0  93
8:                     933  22  32   |         75   0  94
9:                    2802  67  99   |       1207   4  98
10:                     10   0 100   |        543   1  99
11:                      0   0 100   |          1   0 100


The good news is that Lustre is open source, so you can look into the 
lustre/osc code to see why this is happening.  The limit is set by 
cli->cl_max_rpcs_in_flight, and the stats are accumulated in 
cli->cl_write_rpc_hist.
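
To help orient in that code, here is a reduced, self-contained sketch of the 
shape of that accounting, not the actual source: cl_max_rpcs_in_flight and 
cl_write_rpc_hist are the field names mentioned above, while the cut-down struct 
definitions, the tally helper, and the cl_w_in_flight counter are approximations 
from memory that should be checked against the current lustre/osc and 
lustre/include code.

/* Reduced sketch, NOT the Lustre source: the struct and helper below are
 * cut-down stand-ins so the shape of the accounting is visible.
 * cl_max_rpcs_in_flight and cl_write_rpc_hist are the field names
 * referenced above; everything else is approximate. */
#define OBD_HIST_MAX 32

struct obd_histogram {
        unsigned long oh_buckets[OBD_HIST_MAX];
};

struct client_obd {
        unsigned int         cl_max_rpcs_in_flight;
        unsigned int         cl_w_in_flight;       /* write RPCs active */
        struct obd_histogram cl_write_rpc_hist;
};

static void oh_tally(struct obd_histogram *oh, unsigned int value)
{
        if (value >= OBD_HIST_MAX)
                value = OBD_HIST_MAX - 1;
        oh->oh_buckets[value]++;
}

/* Called when a write RPC is handed to the network: the current in-flight
 * count (including this RPC) becomes the histogram bucket that shows up
 * as "rpcs in flight" in rpc_stats.  If the just-completed RPC has not
 * yet been subtracted from cl_w_in_flight at this point, the bucket can
 * read cl_max_rpcs_in_flight + 1 even though the limit is enforced. */
static void write_rpc_sent(struct client_obd *cli)
{
        cli->cl_w_in_flight++;
        oh_tally(&cli->cl_write_rpc_hist, cli->cl_w_in_flight);
}

The interesting question for the off-by-one is simply where, relative to this 
tally, the completed RPC is subtracted from the in-flight counter.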

Out of curiosity, you don't say _why_ this off-by-one behavior is of interest.  
It definitely seems like a bug that could be fixed, but it doesn't seem critical 
to correct functionality.

Cheers, Andreas
--
Andreas Dilger
Lustre Principal Architect
Whamcloud

_______________________________________________
lustre-discuss mailing list
[email protected]
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
