On Wednesday 23 August 2006 21:30, Simon Barber wrote:
> One question - in most hardware implementations today the queues are DMA
> rings. In this case the right length of the queue is determined by the
> interrupt/tx_softirq latency required to keep the queue from becoming
> empty. With 802.11 there are large differences in the time it takes to
> transmit different frames - a full size 1Mbit frame vs. a short 54Mbit
> frame. Would it be worth optimizing the DMA queue length to be a
> function of the amount of time rather than number of frames?
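[For scale, the airtime disparity mentioned above can be checked with a
quick calculation. This is only a sketch of payload airtime, ignoring
preamble and MAC overhead; the function name is illustrative, not from
any driver.]

```c
#include <assert.h>

/* Payload airtime for a frame at a given PHY rate.
 * bytes * 8 bits divided by a rate in Mbit/s yields microseconds. */
static double airtime_us(int bytes, double rate_mbit)
{
    return bytes * 8 / rate_mbit;
}
```

A 1500-byte frame at 1 Mbit/s occupies the air for 12 ms, while a short
100-byte frame at 54 Mbit/s needs under 20 us: roughly three orders of
magnitude between the extremes Simon describes.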

I doubt that the added complexity would do any good.
Consider what a ring actually is: an allocated memory area
containing DMA descriptors. The ring size is the number of DMA
descriptors, so in theory the descriptor count determines how
much memory to allocate. In practice we have alignment
constraints (at least for bcm43xx): we always allocate one full
page per ring, regardless of how much of it is actually used by
descriptors.
One wireless-specific thing remains, though. We store meta
data for each descriptor (the ieee80211_tx_control at least).
So basically only the memory consumption of the meta data
could be optimized.
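To make the layout concrete, here is a minimal userspace sketch of such
a ring: one page for the hardware descriptors plus a separate meta-data
array. All struct and function names are illustrative (this is not the
real bcm43xx layout), and calloc() stands in for dma_alloc_coherent().

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define NR_SLOTS  128   /* descriptors per ring */

/* Hypothetical hardware DMA descriptor. */
struct dma_desc {
	unsigned int control;
	unsigned int address;
};

/* Driver-side per-descriptor meta data. */
struct meta {
	void *skb;          /* stand-in for the queued frame */
	void *tx_control;   /* stand-in for ieee80211_tx_control */
};

struct dma_ring {
	struct dma_desc *descs;  /* one full page, however many slots */
	struct meta *meta;       /* NR_SLOTS meta entries */
	int nr_slots;
};

static int ring_alloc(struct dma_ring *ring)
{
	/* The descriptors must fit in the one page we always allocate. */
	assert(NR_SLOTS * sizeof(struct dma_desc) <= PAGE_SIZE);
	ring->descs = calloc(1, PAGE_SIZE);
	ring->meta = calloc(NR_SLOTS, sizeof(struct meta));
	ring->nr_slots = NR_SLOTS;
	return (ring->descs && ring->meta) ? 0 : -1;
}
```

Note that shrinking NR_SLOTS only shrinks the meta array; the descriptor
page stays a page either way.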

We used to have 512 TX descriptors per ring until I recently
submitted a patch lowering that to 128. I did this because, in
stress tests, the number of descriptors in use never exceeded
about 80 on my machines.
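The measurement behind that number is just high-water-mark bookkeeping
on the ring. A sketch (names are made up for illustration):

```c
/* Track in-flight descriptors and remember the peak, so a stress
 * test can report the largest ring occupancy ever reached. */
struct ring_stats {
	int used;       /* descriptors currently in flight */
	int max_used;   /* high-water mark over the test run */
};

static void tx_queued(struct ring_stats *s)
{
	s->used++;
	if (s->used > s->max_used)
		s->max_used = s->used;
}

static void tx_completed(struct ring_stats *s)
{
	s->used--;
}
```

If max_used never gets near the ring size under load, the ring is
oversized for that workload.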

I think it basically comes down to the question:
Do we want to save 1 kb of memory[*] but pay the price of
additional code complexity?

[*] 1kb is a random value invented by me. I did not calculate
    it, but it should be somewhere around that value.

-- 
Greetings Michael.
-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
