On Fri, 2007-06-08 at 12:38 +0400, Evgeniy Polyakov wrote:
> On Thu, Jun 07, 2007 at 06:23:16PM -0400, jamal ([EMAIL PROTECTED]) wrote:

> > I believe both are called with no lock. The idea is to avoid the lock
> > entirely when unneeded. That code may end up finding that the packet
[..]
> +     netif_tx_lock_bh(odev);
> +     if (!netif_queue_stopped(odev)) {
> +
> +             idle_start = getCurUs();
> +             pkt_dev->tx_entered++;
> +             ret = odev->hard_batch_xmit(&odev->blist, odev);

[..]
> The same applies to *_gso case.
> 

You missed an important piece, which is the grabbing of the
__LINK_STATE_QDISC_RUNNING bit.


> Without lock that would be wrong - it accesses hardware.

We are achieving the goal of only a single CPU entering that path. Are
you saying that is not good enough?
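
For clarity, here is a minimal sketch of how that single-CPU guarantee
works, modelled from memory on qdisc_run() in net/sched/sch_generic.c
of that era; treat it as illustrative, not as the batching patch
itself:

static inline void qdisc_run(struct net_device *dev)
{
	/* Only the CPU that wins test_and_set_bit() enters the xmit
	 * path; everyone else just queues the skb and returns. The
	 * bit is cleared again when __qdisc_run() finishes draining.
	 */
	if (!netif_queue_stopped(dev) &&
	    !test_and_set_bit(__LINK_STATE_QDISC_RUNNING, &dev->state))
		__qdisc_run(dev);
}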

> I only saw results Krishna posted, 

Ok, sorry - I thought you had seen the git log or the earlier results
where other details are captured.

> and i also do not know, what service demand is :)

From the explanation it seems to be how much CPU was used while
sending. Do you have any suggestions for computing CPU use?
In pktgen I added code to count how many microseconds were spent
transmitting.
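
For reference, a minimal sketch of the kind of accounting I mean, in
the spirit of the quoted pktgen hunk above; the timed_xmit() wrapper
and the tx_us accumulator are illustrative names, not the actual
patch:

/* Sketch only: getCurUs() is pktgen's existing microsecond clock;
 * tx_us is a hypothetical per-device counter for time spent inside
 * the driver's transmit routine.
 */
static int timed_xmit(struct pktgen_dev *pkt_dev, struct net_device *odev,
		      struct sk_buff *skb)
{
	__u64 start = getCurUs();
	int ret = odev->hard_start_xmit(skb, odev);

	pkt_dev->tx_us += getCurUs() - start;
	return ret;
}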

> Result looks good, but I still do not understand how it appeared, that
> is why I'm not that excited about idea - I just do not know it in
> details.

To add to KK's explanation in the other email:
Essentially the value is in amortizing the cost of barriers and IO
across packets. For example, the queue lock is acquired/released only
once per X packets. The DMA kick, which includes both a PCI IO write
and memory barriers, is likewise done only once per X packets. There
is still a lot of room for improvement in such IO.
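
In rough C, the shape of that amortization is sketched below;
ring_post() and ring_kick() are hypothetical driver hooks standing in
for descriptor setup and the doorbell write, so read this as an
illustration of where the per-batch savings come from, not as the
patch itself:

/* Sketch of the amortization idea: drain up to a batch of packets
 * under one lock acquisition and issue a single DMA kick (write
 * barrier + tail-register write) for the whole batch.
 */
static void batch_xmit(struct net_device *dev, struct sk_buff_head *blist)
{
	struct sk_buff *skb;
	int posted = 0;

	netif_tx_lock_bh(dev);		/* one lock for the whole batch */
	while ((skb = __skb_dequeue(blist)) != NULL) {
		ring_post(dev, skb);	/* fill a descriptor, no doorbell yet */
		posted++;
	}
	if (posted) {
		wmb();			/* one barrier ... */
		ring_kick(dev);		/* ... and one PCI write per batch */
	}
	netif_tx_unlock_bh(dev);
}

With X packets per batch, the lock round trip, the barrier and the PCI
write are each paid once instead of X times.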

cheers,
jamal
