Your analysis is not correct.

The system-wide packet receive queue has a bounded length.  When packets
arrive while the queue is full, they are dropped immediately.  (All of
this assumes that you don't use NAPI.)
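
For illustration, the bounded-queue check in the 2.4-era netif_rx()
boils down to roughly this (heavily condensed; locking, congestion
accounting and statistics are omitted):

/* Condensed sketch of netif_rx(): the per-CPU backlog is bounded
 * by netdev_max_backlog; anything beyond that is freed on the spot. */
int netif_rx(struct sk_buff *skb)
{
        struct softnet_data *queue = &softnet_data[smp_processor_id()];

        if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
                __skb_queue_tail(&queue->input_pkt_queue, skb);
                __cpu_raise_softirq(smp_processor_id(), NET_RX_SOFTIRQ);
                return NET_RX_SUCCESS;
        }

        /* Backlog full: drop and free the packet right here. */
        kfree_skb(skb);
        return NET_RX_DROP;
}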

The operation of the bridge code is instantaneous.  It does not maintain
an internal queue of any kind; it merely hands the packet to the destination
interface's queue (dev_queue_xmit).  IPv4 routing does the same thing.
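
In fact the forwarding step in net/bridge/br_forward.c boils down to
roughly this (condensed; the should_deliver() port-state check is
omitted):

/* The bridge retargets the frame and hands it straight to the
 * outgoing device's transmit path.  There is no bridge-level queue. */
static void __br_forward(struct net_bridge_port *to, struct sk_buff *skb)
{
        skb->dev = to->dev;
        dev_queue_xmit(skb);
}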

The network layer and the destination interface are expected to do their
own queue control.  The driver notifies the networking layer of its
hardware queue status by means of the netif_start_queue/netif_stop_queue
operations.  If a packet is sent to a congested interface, it is either
put on a software queue (if queue space permits) or dropped and freed.
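
To make that concrete: the default pfifo_fast qdisc's enqueue is
roughly the following (condensed; the real code first selects one of
three priority bands, abbreviated here as band_for()):

/* The software queue is bounded by dev->tx_queue_len: a packet is
 * either enqueued or dropped and freed, never held indefinitely. */
static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc)
{
        struct sk_buff_head *list = band_for(skb);  /* band selection condensed */

        if (list->qlen <= skb->dev->tx_queue_len) {
                __skb_queue_tail(list, skb);
                qdisc->q.qlen++;
                return NET_XMIT_SUCCESS;
        }

        qdisc->stats.drops++;
        kfree_skb(skb);
        return NET_XMIT_DROP;
}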

[ What you are talking about are local IPv4 _socket_ queues.  Those are
  different again, and sit at a different (higher) level.  ]
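
For comparison, that per-socket bound lives in sock_queue_rcv_skb(),
roughly (condensed from the 2.4 include/net/sock.h):

/* A socket's receive queue is bounded by sk->rcvbuf; over the limit
 * the caller drops and frees the packet. */
static inline int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
        if (atomic_read(&sk->rmem_alloc) + skb->truesize >=
            (unsigned)sk->rcvbuf)
                return -ENOMEM;

        skb_set_owner_r(skb, sk);
        skb_queue_tail(&sk->receive_queue, skb);
        if (!sk->dead)
                sk->data_ready(sk, skb->len);
        return 0;
}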

What is most likely is that the driver for your Ethernet interface
does not properly free packets on transmit queue overflow, or does
not properly use the netif_start_queue/netif_stop_queue operations.
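
The expected pattern looks something like this (a sketch only: the
mydrv_*/tx_ring_*() names are made up to stand in for your driver's
ring bookkeeping):

/* hard_start_xmit: post the frame, and stop the queue while the
 * hardware ring is full.  A driver that decides to drop instead
 * must still free the skb, or memory leaks just as described. */
static int mydrv_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
        struct mydrv_priv *priv = dev->priv;

        if (tx_ring_full(priv)) {
                netif_stop_queue(dev);
                return 1;               /* tell the stack to requeue */
        }

        tx_ring_post(priv, skb);        /* hand the frame to hardware */
        dev->trans_start = jiffies;

        if (tx_ring_full(priv))
                netif_stop_queue(dev);  /* hold off further packets */

        return 0;
}

/* TX-completion interrupt: free transmitted skbs and reopen the queue. */
static void mydrv_tx_complete(struct net_device *dev)
{
        struct mydrv_priv *priv = dev->priv;
        struct sk_buff *skb;

        while ((skb = tx_ring_reap(priv)) != NULL)
                dev_kfree_skb_irq(skb);

        if (netif_queue_stopped(dev) && !tx_ring_full(priv))
                netif_wake_queue(dev);
}

If either half of this is missing, transmitted skbs are never freed (or
the stack keeps pushing packets at a full ring), which would match the
symptoms you describe.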

Can you get the same effect by blasting a lot of data out of one of your
interfaces?  Use a userspace UDP sending application, or something like
pktgen (the in-kernel packet generator).
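
For the userspace route, something as trivial as this will do (the
destination address and port are placeholders, adjust for your setup):

/* Minimal UDP blaster: sends fixed-size datagrams to a hardcoded
 * destination as fast as the socket will take them. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
        struct sockaddr_in dst;
        char payload[1400];
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);        /* discard port */
        inet_pton(AF_INET, "192.168.1.2", &dst.sin_addr);
        memset(payload, 0xa5, sizeof(payload));

        for (;;)                        /* blast flat out */
                sendto(fd, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst));
}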

Can you get the same effect with IPv4 routing?


On Thu, Feb 06, 2003 at 11:55:45AM +0100, Lutz Jaenicke wrote:

> > The fact that no one else is having this problem points in the direction
> > of your 'ARM based embedded system' or the Ethernet drivers used therein.
> 
> We are not yet done with our analysis.
> 
> As further investigation turned out, it does not seem to be a memory leak.
> sk_buffs are allocated and freed; the memory is just not given back
> (as seen from userspace) after use.
> It seems that the data are not sent out fast enough on the second interface.
> Our current analysis of the situation is as follows:
> When a frame enters the system, it is still in layer 2. If the destination
> is local (or the packet is to be _routed_), it will be received via
> netif_rx() and enter layer 3 (IP). At this point the packet is queued,
> and the queue has a maximum number of entries; further packets are
> dropped. (The same seems to be true for sending from layer 3.)
> Therefore, without bridging, there is an upper limit on the number of
> queued packets (spread over several queues).
> When a packet stays in the bridge (layer 2), no queueing is ever done,
> so there is no upper limit on the number of frames being received, and
> the system will eat up all memory if the frames are not processed or
> sent out as fast as they come in.
> 
> Could you (or somebody else) confirm this analysis? If it is correct,
> it means that we have to implement some limit on the number of frames
> in the bridge. If not, we have to re-analyze the behaviour...
> 
> (If somebody wants to test this himself: pump UDP data (no handshake)
> through the bridge and watch "free" and "/proc/slabinfo"...)
> 
> Best regards,
>       Lutz
> -- 
> [EMAIL PROTECTED]          Innominate Security Technologies AG
> Dr.-Ing. Lutz Jänicke                               networking people
> Engineer/Software Engineer                 http://www.innominate.com/
> Tel ++49 30 6392-3308                           Fax ++49 30 6392-3307
