Re: implementing idle-time networking

2000-09-19 Thread Mike Smith

> Closer inspection revealed that both the ifnet ifqueues and the driver
> transmission chain are always empty at enqueue/dequeue time. Thus, even
> though my fancy queuing code is executed, it has no effect, since there
> are never any packets queued.
> 
> Can someone shed some light on whether this is expected behavior? Wouldn't
> that mean that as packets are generated by the socket layer, they are
> handed down through the kernel to the driver one by one, incurring an
> interrupt for each packet? Or am I missing the obvious?

Packets are pushed down as far as they can go, i.e. if the card has
resources available to take another packet, you'll go all the way into the
device driver.  It's not until you actually run the card out of resources
that the various queues start to fill up.
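
For reference, the stock transmit path looks roughly like this (simplified
from ether_output_frame() and friends; not verbatim code).  Note that the
enqueue is immediately followed by the start call, so unless the driver has
marked the ring full with IFF_OACTIVE, the packet never sits in if_snd:

	int s = splimp();

	if (IF_QFULL(&ifp->if_snd)) {
		/* only happens once the card itself is backed up */
		IF_DROP(&ifp->if_snd);
		splx(s);
		m_freem(m);
		return (ENOBUFS);
	}
	IF_ENQUEUE(&ifp->if_snd, m);
	if ((ifp->if_flags & IFF_OACTIVE) == 0)
		(*ifp->if_start)(ifp);	/* drains if_snd into the TX ring */
	splx(s);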

The actual interrupt rate depends on the specific card; many of the
better cards have interrupt-reduction features that e.g. only signal an
interrupt when they have completed a set of transmitted packets, or no
more than once every N ms, etc.  Otherwise, you're going to take one
interrupt per packet anyway.
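
To illustrate the batching (hypothetical driver; all the xx_ names are
made up, real drivers differ): a typical TX-completion handler retires
every descriptor the card has finished with on each interrupt, rather
than one per interrupt:

	static void
	xx_txeof(struct xx_softc *sc)
	{
		struct xx_txdesc *d;

		/* reap ALL completed descriptors in one pass */
		while ((d = sc->xx_tx_head) != NULL &&
		    (d->xx_status & XX_TXSTAT_DONE)) {
			m_freem(d->xx_mbuf);	/* packet is on the wire */
			d->xx_mbuf = NULL;
			sc->xx_tx_head = d->xx_next;
			sc->xx_tx_free++;
		}
		/* ring has room again; the start routine may refill it */
		sc->xx_ifp->if_flags &= ~IFF_OACTIVE;
	}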



-- 
... every activity meets with opposition, everyone who acts has his
rivals and unfortunately opponents also.  But not because people want
to be opponents, rather because the tasks and relationships force
people to take different points of view.  [Dr. Fritz Todt]







Re: implementing idle-time networking

2000-09-18 Thread Luigi Rizzo

Hi,

I believe there are two things here that you need to consider before
you can see any queue build up in ipq:

 1. you should generate packets (way) faster than the card is able
to handle them;
 2. the network card itself might be able to queue multiple packets in
the "transmit ring";

To check whether #2 is true, you should either look at the driver or
trace how fast ipq is drained (e.g., take timestamps) and see if it
happens faster than the packet transmission time.
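
A minimal way to take those timestamps (debugging sketch; xl_last_deq is
an ad-hoc global of mine, not an existing kernel variable) around the
IF_DEQUEUE in the driver's start routine:

	#include <sys/time.h>

	static struct timeval xl_last_deq;	/* ad-hoc debug state */

	/* right after a successful dequeue: */
	struct timeval now;
	long us;

	microtime(&now);
	us = (now.tv_sec - xl_last_deq.tv_sec) * 1000000 +
	    (now.tv_usec - xl_last_deq.tv_usec);
	printf("xl0: dequeue, %ld us since last\n", us);
	xl_last_deq = now;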

Re #1, remember that on a 100 Mbit/s net a full-sized packet goes out
in some 100 us, which is fast.  Maybe you have already done this, but
just in case: to see queues build up in ipq, you should preferably run
your tests with reasonably long bursts of full-sized UDP packets (where
"long" might mean some 50-100 packets if there is queueing in the card),
and on a 10 Mbit/s link.
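
To put numbers on that (ballpark, ignoring framing overhead and
inter-frame gaps):

	1500 bytes = 12,000 bits
	at 100 Mbit/s: 12,000 / 100,000,000 s ~= 120 us per packet
	at  10 Mbit/s: 12,000 /  10,000,000 s  = 1.2 ms per packet

so a 50-packet burst at 10 Mbit/s is about 60 ms of wire time, which gives
a queue plenty of opportunity to build up if you generate faster than that.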

cheers
luigi

> 
> as part of my thesis research, I'm implementing something similar to the
> POSIX idle-time CPU scheduler for other resource types, one being network
> I/O. The basic idea is to substitute two-level queues for the standard
> ones. I'm seeing some unexpected things (explained below), but let me first
> outline what I'm doing exactly:
> 
> 1. I extend the ifnet structure to contain a second ifqueue, for idle-time
> traffic; and also declare a new flag for mbufs, to indicate whether network
> idle-time processing should be done or not.
> 
> 2. In sosend(), I check if the sending process is running at a POSIX
> idle-time priority. If so, I set the idle-time flag in the mbuf.
> 
> 3. In ether_output_frame(), I check whether the idle-time flag is set on
> an mbuf and, if so, enqueue it in the interface's idle-time queue (default
> queue otherwise).
> 
> 4. In xl_start() (my onboard chip happens to use the xl driver), I first
> check the default queue for any mbufs ready to send. If there are none, I
> try the idle-time queue. If an mbuf can be dequeued from either queue, I
> continue with normal outbound processing (the mbuf is then picked up by
> the NIC).
> 
> Unfortunately, this scheme does not work: first experiments show that
> idle-time network performance is practically identical to regular-priority
> performance. I measured it going from a slower (10 Mb/s) to a faster
> (100 Mb/s) host through a private switch, so the NIC should be the
> bottleneck (the processors are both 800 MHz PIIIs). The new code is in
> fact executed; I have traced it heavily.
> 
> Closer inspection revealed that both the ifnet ifqueues and the driver
> transmission chain are always empty at enqueue/dequeue time. Thus, even
> though my fancy queuing code is executed, it has no effect, since there
> are never any packets queued.
> 
> Can someone shed some light on whether this is expected behavior? Wouldn't
> that mean that as packets are generated by the socket layer, they are
> handed down through the kernel to the driver one by one, incurring an
> interrupt for each packet? Or am I missing the obvious?
> 
> Thanks,
> Lars
> -- 
> Lars Eggert <[EMAIL PROTECTED]> Information Sciences Institute
> http://www.isi.edu/larse/  University of Southern California






implementing idle-time networking

2000-09-18 Thread Lars Eggert

Hi,

as part of my thesis research, I'm implementing something similar to the
POSIX idle-time CPU scheduler for other resource types, one being network
I/O. The basic idea is to substitute two-level queues for the standard
ones. I'm seeing some unexpected things (explained below), but let me first
outline what I'm doing exactly:

1. I extend the ifnet structure to contain a second ifqueue, for idle-time
traffic; and also declare a new flag for mbufs, to indicate whether network
idle-time processing should be done or not.
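
In code, the extension looks roughly like this (field and flag names are
mine; the flag value just needs an m_flags bit that sys/mbuf.h does not
already use):

	/* net/if_var.h: second output queue on the interface */
	struct ifnet {
		/* ... existing fields ... */
		struct	ifqueue if_snd;		/* existing output queue */
		struct	ifqueue if_idleq;	/* new: idle-time output queue */
		/* ... */
	};

	/* sys/mbuf.h: mark mbufs carrying idle-time traffic */
	#define M_IDLE	0x8000	/* assumes this m_flags bit is unused */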

2. In sosend(), I check if the sending process is running at a POSIX
idle-time priority. If so, I set the idle-time flag in the mbuf.
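
The check itself is small; a sketch, assuming the 4.x sosend() signature
(which receives the sending process as 'p') and the M_IDLE flag from above:

	#include <sys/rtprio.h>

	/* in sosend(), once the outgoing chain 'top' has a packet header: */
	if (p != NULL && p->p_rtprio.type == RTP_PRIO_IDLE)
		top->m_flags |= M_IDLE;		/* tag as idle-time traffic */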

3. In ether_output_frame(), I check whether the idle-time flag is set on
an mbuf and, if so, enqueue it in the interface's idle-time queue (default
queue otherwise).
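
A sketch of that hook, with the same hypothetical names as above and the
error handling of the stock ether_output_frame():

	struct ifqueue *q;
	int s;

	/* pick the queue based on the idle-time flag */
	q = (m->m_flags & M_IDLE) ? &ifp->if_idleq : &ifp->if_snd;

	s = splimp();
	if (IF_QFULL(q)) {
		IF_DROP(q);
		splx(s);
		m_freem(m);
		return (ENOBUFS);
	}
	IF_ENQUEUE(q, m);
	if ((ifp->if_flags & IFF_OACTIVE) == 0)
		(*ifp->if_start)(ifp);
	splx(s);
	return (0);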

4. In xl_start() (my onboard chip happens to use the xl driver), I first
check the default queue for any mbufs ready to send. If there are none, I
try the idle-time queue. If an mbuf can be dequeued from either queue, I
continue with normal outbound processing (the mbuf is then picked up by
the NIC).
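
The dequeue side, sketched against the loop structure of the stock
xl_start() (details trimmed):

	while (sc->xl_cdata.xl_tx_free != NULL) {
		/* regular traffic first, idle-time only when if_snd is empty */
		IF_DEQUEUE(&ifp->if_snd, m_head);
		if (m_head == NULL)
			IF_DEQUEUE(&ifp->if_idleq, m_head);
		if (m_head == NULL)
			break;
		/* ... encapsulate m_head and chain it to the TX ring ... */
	}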

Unfortunately, this scheme does not work: first experiments show that
idle-time network performance is practically identical to regular-priority
performance. I measured it going from a slower (10 Mb/s) to a faster
(100 Mb/s) host through a private switch, so the NIC should be the
bottleneck (the processors are both 800 MHz PIIIs). The new code is in
fact executed; I have traced it heavily.

Closer inspection revealed that both the ifnet ifqueues and the driver
transmission chain are always empty at enqueue/dequeue time. Thus, even
though my fancy queuing code is executed, it has no effect, since there
are never any packets queued.

Can someone shed some light on whether this is expected behavior? Wouldn't
that mean that as packets are generated by the socket layer, they are
handed down through the kernel to the driver one by one, incurring an
interrupt for each packet? Or am I missing the obvious?

Thanks,
Lars
-- 
Lars Eggert <[EMAIL PROTECTED]> Information Sciences Institute
http://www.isi.edu/larse/  University of Southern California