Hi --

I have been running some experiments involving processes sending large
volumes of data concurrently. The results show (on Linux 2.6.19.2)
that although the total throughput achieved by all the processes
remains constant, the jitter increases as the number of processes
increases. Beyond about 64 processes (on a 2.4 GHz Xeon with 4 MB of
cache), processes start getting starved and the streams become very
bursty.

What steps can one take to ensure that CPU allocation to processes
transmitting packets concurrently is equitable? I'm using the default
CPU scheduler, and the processes send the data on a best-effort
basis.

I suppose it is reasonable for the jitter to grow with added
contention in the TCP/IP stack - but what growth rate is acceptable?
Is the data I have below reasonable?

The jitter varies as follows, shown as mean +/- standard deviation
across 25 ten-second intervals.

Concurrency    Jitter (us)

 1              1.6 +/- 0.8
 2              1.5 +/- 1.1
 4              1.4 +/- 0.6
 8              0.8 +/- 0.5
12              2.3 +/- 1.2
16              3.3 +/- 2.1
20              4.6 +/- 2.5
24              6.2 +/- 1.4
28              7.8 +/- 3.2
32             10.0 +/- 3.4
64            100+
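(For clarity on methodology, the per-interval jitter figure is
computed roughly as sketched below - a minimal Python sketch, assuming
jitter is taken as the standard deviation of the gaps between
consecutive send timestamps within an interval; the function name and
example timestamps are illustrative.)

```python
# Sketch: jitter of one measurement interval, taken here as the
# population standard deviation of the gaps between consecutive
# send timestamps (all values in microseconds).
import statistics

def interval_jitter_us(timestamps_us):
    """Jitter (us) over one interval: pstdev of inter-send gaps."""
    gaps = [b - a for a, b in zip(timestamps_us, timestamps_us[1:])]
    return statistics.pstdev(gaps)

# Perfectly paced sends give zero jitter; uneven pacing gives positive jitter.
print(interval_jitter_us([0, 100, 200, 300]))  # 0.0
print(interval_jitter_us([0, 90, 210, 300]))   # ~14.14
```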

Sen
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html