I have added the possibility to configure the number
of buffers used to store the trace data for packet delays.
The complete command to start netem with a trace file is:
tc qdisc add dev eth1 root netem trace path/to/trace/file.bin buf 3 loops 1 0
where buf is the number of buffers to be used.
This patch applies to kernel 2.6.23.
It enhances the network emulator netem with the possibility
to read all delay/drop/duplicate etc. values from a trace file.
The trace file contains one value for each packet to be processed.
The values are read from the file by a user space process called flowseed.
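As a rough illustration of that format (a sketch only, not the actual flowseed.c; it assumes each trace entry is one 32-bit value and omits the hand-off to the kernel):

/* Sketch: read one 32-bit trace value per packet from the binary trace
 * file into a fixed-size buffer that a user space process would then
 * pass on to the netem qdisc. */
#include <stdio.h>
#include <stdint.h>

#define BUF_VALUES 1000                 /* illustrative buffer size */

int main(int argc, char **argv)
{
        uint32_t buf[BUF_VALUES];
        size_t n;
        FILE *f = fopen(argc > 1 ? argv[1] : "file.bin", "rb");

        if (!f) {
                perror("trace file");
                return 1;
        }
        while ((n = fread(buf, sizeof(uint32_t), BUF_VALUES, f)) > 0) {
                /* here the real tool would hand 'buf' (n values) to the
                 * kernel; we only report how much was read */
                printf("read %zu trace values\n", n);
        }
        fclose(f);
        return 0;
}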
The iproute patch is too big to send to the mailing list,
since the distribution data have changed directories.
For ease of discussion I include the important changes in this mail.
Signed-off-by: Ariane Keller [EMAIL PROTECTED]
---
diff -uprN iproute2-2.6.23/netem/trace/flowseed.c
On Mon, 10 Dec 2007 15:32:14 +0100
Ariane Keller [EMAIL PROTECTED] wrote:
I finally managed to rewrite the netem trace extension to use rtnetlink
communication for the data transfer for user space to kernel space.
The kernel patch is available here:
http://www.tcn.hypert.net/tcn_kernel_2_6_23_rtnetlink
and the iproute patch is here:
Ariane Keller wrote:
Thanks for your comments!
Patrick McHardy wrote:
That sounds like it would also be possible using rtnetlink. You could
send out a notification whenever you switch the active buffer and have
userspace listen to these and replace the inactive one.
I guess using rtnetlink is possible. However I'm not sure about how to
implement it: we would need the tcm_handle, tcm_parent arguments etc.,
which are not known in q_netem.c. Therefore we would have to change the
parse_qopt() function prototype in order to pass the whole req and not
only the nlmsghdr.
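To make the prototype question concrete, here is a sketch of what is meant; the struct name and the widened callback are illustrative guesses, not the actual iproute2 code or patch:

/* In tc, the qdisc request combines the netlink header with the tcmsg
 * that carries tcm_handle and tcm_parent, but the netem option parser
 * only ever receives the nlmsghdr part. */
#include <linux/rtnetlink.h>

struct tc_request {                     /* illustrative name only */
        struct nlmsghdr n;
        struct tcmsg    t;              /* tcm_handle, tcm_parent live here */
        char            buf[4096];
};

/* current shape: only the nlmsghdr is handed down to q_netem.c */
int netem_parse_qopt(int argc, char **argv, struct nlmsghdr *n);

/* hypothetical widened shape: hand down the whole request instead */
int netem_parse_qopt_req(int argc, char **argv, struct tc_request *req);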
Ariane Keller wrote:
Increasing the cache size to say 32k for each buffer would be no problem.
Is this enough?
Ben Greear wrote:
Maybe just a variable length list of 4k buffers chained together? It's
usually easier to get 4k chunks of memory than 32k chunks, especially
under high network load, and if you go
Ariane Keller wrote:
I thought about that as well, but in my opinion this does not help much.
It's the same as before: on average every 10ms a new buffer needs to be
filled.
Ben Greear wrote:
But, you can fill 50 or 100 at a time, so if user-space is delayed for a
few ms, the kernel still has plenty of buffers.
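A rough sketch of the chained 4k buffer idea Ben describes above (the names and the 32-bit value size are assumptions, not taken from the patch):

/* A variable length chain of roughly page-sized chunks of trace values,
 * so the kernel can be refilled one chunk at a time. */
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_VALUES 1024               /* 1024 * 4 bytes = 4k of values */

struct trace_chunk {
        struct trace_chunk *next;       /* next chunk in the chain, or NULL */
        unsigned int count;             /* number of valid values in vals[] */
        uint32_t vals[CHUNK_VALUES];    /* one trace value per packet */
};

static struct trace_chunk *chunk_append(struct trace_chunk *tail)
{
        struct trace_chunk *c = calloc(1, sizeof(*c));

        if (c && tail)
                tail->next = c;         /* grow the chain as needed */
        return c;
}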
Ariane Keller wrote:
Yes, for short-term starvation it certainly helps.
But I'm still not convinced that it is really necessary to add more
buffers, because I'm not sure whether the bottleneck is really the
loading of data from user space to kernel space.
Some basic tests have shown that the
If you actually run out of the trace buffers, do you just continue to
run with the last settings? If so, that would keep up throughput
even if you are out of trace buffers...
Upon configuring the qdisc you can specify a default value, which is
used when the buffers are empty. It is either
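The fallback behaviour described here could look roughly like this (a sketch; names are illustrative, not from the patch):

/* When no trace value is left, keep running with the default configured
 * at qdisc setup time instead of stalling. */
#include <stdint.h>

struct trace_state {
        const uint32_t *vals;           /* currently active buffer */
        unsigned int pos, count;        /* read cursor and fill level */
        uint32_t def;                   /* default value for empty buffers */
};

static uint32_t next_trace_value(struct trace_state *s)
{
        if (s->pos < s->count)
                return s->vals[s->pos++];
        return s->def;                  /* buffers empty: use the default */
}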
Patrick McHardy wrote:
Ariane Keller wrote:
Thanks for your comments!
I'd like to better understand your dislike of the current
implementation of the data transfer from user space to kernel space.
Is it the fact that we use configfs?
I think we had already a discussion about this (and we changed from
procfs to configfs).
Ariane Keller wrote:
Patrick McHardy wrote:
I dislike using anything besides rtnetlink for qdisc configuration.
The only way to transfer arbitrary amounts of data over netlink would
be to spread the data over multiple messages. But then again, you're
using kmalloc and only seem to allocate 4k,
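A sketch of the "spread the data over multiple messages" idea; send_trace_chunk() is a placeholder stub, not a real netem or libnetlink call:

/* Split a large block of trace values into 4k chunks and send each
 * chunk in its own (hypothetical) netlink message. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define CHUNK_BYTES 4096                /* stay within one netlink message */

static int send_trace_chunk(const uint32_t *vals, size_t n)
{
        /* a real implementation would wrap these n values into one
         * rtnetlink qdisc-change message here */
        printf("would send %zu values in one message\n", n);
        return 0;
}

static int send_trace(const uint32_t *vals, size_t total)
{
        size_t per_msg = CHUNK_BYTES / sizeof(*vals);

        for (size_t off = 0; off < total; off += per_msg) {
                size_t n = total - off < per_msg ? total - off : per_msg;

                if (send_trace_chunk(vals + off, n) < 0)
                        return -1;
        }
        return 0;
}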
Patrick McHardy wrote:
That sounds like it would also be possible using rtnetlink. You could
send out a notification whenever you switch the active buffer and have
userspace listen to these and replace the inactive one.
Also, I think you will need a larger cache than 4-8k if you are running
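For the notification side, a minimal sketch of a userspace listener; it assumes the buffer switch would surface as an ordinary qdisc event on the rtnetlink TC multicast group, and the attribute identifying the emptied buffer is not shown:

/* Listen for qdisc notifications over rtnetlink and treat each
 * RTM_NEWQDISC event as a hint to refill the inactive buffer. */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
        char buf[8192];
        struct sockaddr_nl sa = { .nl_family = AF_NETLINK,
                                  .nl_groups = RTMGRP_TC };
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

        if (fd < 0 || bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                perror("rtnetlink");
                return 1;
        }
        for (;;) {
                ssize_t len = recv(fd, buf, sizeof(buf), 0);
                struct nlmsghdr *nh;

                for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
                     nh = NLMSG_NEXT(nh, len)) {
                        if (nh->nlmsg_type == RTM_NEWQDISC)
                                printf("qdisc event: refill inactive buffer\n");
                }
        }
}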
Stephen Hemminger wrote:
Still interested in this. I got part way through integrating it but had
concerns about the API from the application to netem for getting the data.
It seemed like there ought to be a better way to do it that could handle large
data sets better, but never really got a good
On Tue, 27 Nov 2007 14:57:26 +0100
Ariane Keller [EMAIL PROTECTED] wrote:
I just wanted to ask whether there is a general interest in this patch.
If yes: great, how to proceed?
Otherwise: please let me know why.
Thanks!
Ariane Keller wrote:
Hi Stephen
Approximately a year