> "Werner" == Werner Almesberger <[EMAIL PROTECTED]> writes:
Werner> Jeff Garzik wrote:
>> 3) Slabbier packet allocation.
Werner> Hmm, this may actually be worse during bursts: if your burst
Werner> exceeds the preallocated size, you have to perform more
Werner> expensive/slower operations (e.g. running
> "Noah" == Noah Romer <[EMAIL PROTECTED]> writes:
Noah> In my experience, Tx interrupt mitigation is of little
Noah> benefit. I actually saw a performance increase of ~20% when I
Noah> turned off Tx interrupt mitigation in my driver (could have been
Noah> poor implementation on my part).
You need to
> "Jeff" == Jeff Garzik <[EMAIL PROTECTED]> writes:
Jeff> 1) Rx Skb recycling. It would be nice to have skbs returned to
Jeff> the driver after the net core is done with them, rather than
Jeff> have netif_rx free the skb. Many drivers pre-allocate a number
Jeff> of maximum-sized skbs into
Hi!
> > an alloc of a PKT_BUF_SZ'd skb immediately follows a free of a
> > same-sized skb. 100% of the time.
>
> Free/Alloc gives the mm the chance to throttle it by failing, and also to
> recover from fragmentation by packing the slabs. If you don't do it you need
> to add a hook somewhere that gets
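The recycle-with-fallback scheme being debated here can be sketched in plain userspace C. Everything below (`recycle_pool`, `pool_get`, `pool_put`, `POOL_MAX`) is an invented stand-in for driver-private state, not kernel API; the point is only the fast path on a pool hit and the fallback to the ordinary allocator when a burst drains the pool:

```c
/* Userspace sketch of skb recycling with a bounded free list: a driver
 * keeps a small pool of fixed-size buffers (standing in for PKT_BUF_SZ
 * skbs) and falls back to the normal allocator when the pool is empty,
 * e.g. during a burst that exceeds the preallocation.  All names are
 * hypothetical; this is not the kernel interface. */
#include <stdlib.h>
#include <stddef.h>

#define PKT_BUF_SZ 1536
#define POOL_MAX   8            /* cap so the pool cannot hoard memory */

struct recycle_pool {
    void  *slots[POOL_MAX];
    size_t count;
};

/* Take a buffer from the pool, or allocate a fresh one on a miss. */
static void *pool_get(struct recycle_pool *p)
{
    if (p->count > 0)
        return p->slots[--p->count];
    return malloc(PKT_BUF_SZ);  /* slow path: burst exceeded the pool */
}

/* Return a buffer to the pool; free it when the pool is full.  The
 * "full" branch is also where a memory-pressure hook could force a
 * real free, which is Andi's throttling concern above. */
static void pool_put(struct recycle_pool *p, void *buf)
{
    if (p->count < POOL_MAX)
        p->slots[p->count++] = buf;
    else
        free(buf);
}
```

The bounded pool size is what keeps the mm's objections partially answered: memory can still be reclaimed, just not on every packet.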
Hello!
> > 3) Enforce correct usage of it in all the networking :-)
>
> ,) -- the tricky part.
No tricks, IP[v6] is already enforced to be clever; all the rest are free
to do this, if they desire. And btw, the driver need not parse anything
but its internal stuff, and even aligning the eth II header
"David S. Miller" wrote:
> Andi Kleen writes:
> > Or did I misunderstand you?
>
> What is wrong with making methods, keyed off of the ethernet protocol
> ID, that can do the "I know where/how-long headers are" stuff for that
> protocol? Only cards with the problem call into this function vector
> or however we arrange it, and then
Jeff Garzik writes:
> I only want to know if more are coming, not actually pass multiples..
Ok, then my only concern is that the path from "I know more is coming"
down to hard_start_xmit invocation is long. It would mean passing a
new piece of state a long distance inside the stack from SKB
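The Tx-grouping hint under discussion, "I only want to know if more are coming", can be simulated in a few lines of userspace C. The names here (`sim_nic`, the doorbell counter, the `more_coming` flag) are invented for illustration and are not the 2.4 netdevice API; the sketch only shows why the flag saves work, since the hardware kick happens once per burst instead of once per packet:

```c
/* Userspace simulation of a "more packets coming" hint passed down to
 * the driver's xmit routine.  The driver queues descriptors and rings
 * the hardware doorbell (here, just a counter) only when the stack
 * says the burst is over.  All names are hypothetical. */
#include <stdbool.h>
#include <stddef.h>

struct sim_nic {
    size_t queued;              /* descriptors written but not kicked */
    size_t doorbells;           /* how often we prodded the hardware  */
};

static void sim_start_xmit(struct sim_nic *nic, bool more_coming)
{
    nic->queued++;
    if (!more_coming) {         /* last packet of the burst: kick once */
        nic->doorbells++;
        nic->queued = 0;
    }
}
```

DaveM's objection above is exactly the cost hidden by this sketch: in the real stack, the `more_coming` bit has to travel a long way from where the queue state is known down to the driver entry point.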
"David S. Miller" wrote:
> Jeff Garzik writes:
> > 2) Tx packet grouping.
> ...
> > Disadvantages?
>
> See Torvalds vs. world discussion on this list about API entry points
> which pass multiple pages at a time versus simpler ones which pass
> only a single page at a time. :-)
I only want to know if
On Mon, Feb 26, 2001 at 03:48:31PM -0800, David S. Miller wrote:
>
> Andi Kleen writes:
> > 4) Better support for aligned RX by only copying the header
>
> Andi you can make this now:
>
> 1) Add new "post-header data pointer" field in SKB.
That would imply to let the drivers parse all headers to
Andi Kleen writes:
> 4) Better support for aligned RX by only copying the header
Andi you can make this now:
1) Add new "post-header data pointer" field in SKB.
2) Change drivers to copy into aligned headroom as
you mention, and they set this new post-header
pointer as appropriate. For
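The two steps above, copy only the headers into aligned headroom and record a "post-header data pointer" to the untouched payload, can be sketched in userspace C. The struct and field names (`rx_view`, `post_hdr`, `HDR_COPY`) are invented for illustration, and the worst-case header length is an assumption:

```c
/* Sketch of "copy only the header" aligned RX: the NIC has DMAed a
 * frame whose 14-byte Ethernet header leaves the IP header 2-byte
 * misaligned.  Instead of copying the whole packet, copy just the
 * first HDR_COPY bytes into an aligned scratch area and keep a
 * pointer to the remaining payload in place.  Names are hypothetical. */
#include <string.h>
#include <stdint.h>

#define ETH_HLEN  14
#define HDR_COPY  (ETH_HLEN + 60)   /* eth + assumed worst-case IP/TCP headers */

struct rx_view {
    uint8_t  hdr[HDR_COPY] __attribute__((aligned(4)));
    size_t   hdr_len;
    const uint8_t *post_hdr;        /* payload left where the NIC put it */
    size_t   post_len;
};

static void split_rx(const uint8_t *frame, size_t len, struct rx_view *v)
{
    size_t copy = len < HDR_COPY ? len : HDR_COPY;

    memcpy(v->hdr, frame, copy);    /* headers now 4-byte aligned */
    v->hdr_len  = copy;
    v->post_hdr = frame + copy;     /* bulk data untouched: zero copies */
    v->post_len = len - copy;
}
```

Andi's objection above still applies: something has to know how long the headers actually are, which is why the fixed `HDR_COPY` guess is the weak point of this sketch.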
Jeff Garzik writes:
> 1) Rx Skb recycling.
...
> Advantages: A de-allocation immediately followed by a reallocation is
> eliminated, less L1 cache pollution during interrupt handling.
> Potentially less DMA traffic between card and host.
...
> Disadvantages?
It simply cannot work, as Alexey
In message <[EMAIL PROTECTED]> you write:
> Jeff Garzik <[EMAIL PROTECTED]> writes:
>
> > Advantages: A de-allocation immediately followed by a reallocation is
> > eliminated, less L1 cache pollution during interrupt handling.
> > Potentially less DMA traffic between card and host.
Andrew Morton wrote:
(kernel profile of TCP tx/rx)
> So, naively, the most which can be saved here by optimising
> the skb and memory usage is 5% of networking load. (1% of
> system load @100 mbps)
For a local tx/rx. (open question) What happens with
a router box with netfilter and queueing?
At 2:32 am + 25/2/2001, Jeremy Jackson wrote:
>Jeff Garzik wrote:
>
>(about optimizing kernel network code for busmastering NIC's)
>
>> Disclaimer: This is 2.5, repeat, 2.5 material.
>
>Related question: are there any 100Mbit NICs with cpu's onboard?
>Something mainstream/affordable? (i.e.
Chris Wedgwood wrote:
> That said, it would be an extremely neat thing to do from a technical
> perspective, but I don't know if you would ever get really good
> performance from it.
Well, you'd have to re-design the networking code to support NUMA
architectures, with a fairly fine granularity.
Jeff Garzik wrote:
> 1) Rx Skb recycling.
Sounds like a potentially useful idea. To solve the most immediate memory
pressure problems, maybe VM could provide some function that does a kfree
in cases of memory shortage, and that does nothing otherwise, so the
driver could offer to free the skb
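Werner's suggestion, a VM-provided function that frees only under memory shortage and otherwise does nothing, can be sketched like so. `mem_pressure()` is a stand-in for whatever shortage signal the mm would export; every name here is hypothetical:

```c
/* Sketch of a conditional free: the driver offers a buffer back to
 * the mm, which takes (frees) it only when memory is short.  The
 * return value tells the driver whether it may keep recycling the
 * buffer.  All names are invented for illustration. */
#include <stdlib.h>
#include <stdbool.h>

static bool mem_pressure_flag;          /* toggled by the (simulated) mm */

static bool mem_pressure(void)
{
    return mem_pressure_flag;
}

/* Returns true if the mm took the buffer (it is now freed), false if
 * the caller keeps ownership and may recycle it. */
static bool free_if_pressure(void *buf)
{
    if (mem_pressure()) {
        free(buf);
        return true;
    }
    return false;
}
```

This is the "hook somewhere" Andi asks for earlier in the thread: recycling stays cheap in the common case, but the mm retains a way to reclaim the memory.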
Jeff Garzik wrote:
>
>...
> 1) Rx Skb recycling.
>...
> 2) Tx packet grouping.
>...
> 3) Slabbier packet allocation.
Let's see what the profiler says. 10 seconds of TCP xmit
followed by 10 seconds of TCP receive. 100 mbits/sec.
Kernel 2.4.2+ZC.
c0119470 do_softirq
On Sat, 24 Feb 2001, Jeff Garzik wrote:
> Disclaimer: This is 2.5, repeat, 2.5 material.
[snip]
> 1) Rx Skb recycling. It would be nice to have skbs returned to the
> driver after the net core is done with them, rather than have netif_rx
> free the skb. Many drivers pre-allocate a number of
Jeff Garzik wrote:
(about optimizing kernel network code for busmastering NIC's)
> Disclaimer: This is 2.5, repeat, 2.5 material.
Related question: are there any 100Mbit NICs with cpu's onboard?
Something mainstream/affordable? (i.e. not 1G ethernet)
Just recently someone posted asking some
> "Jeff" == Jeff Garzik <[EMAIL PROTECTED]> writes:
Jeff> 1) Rx Skb recycling. It would be nice to have skbs returned to the
Jeff> driver after the net core is done with them, rather than have netif_rx
Jeff> free the skb. Many drivers pre-allocate a number of maximum-sized skbs
On Sat, Feb 24, 2001 at 07:13:14PM -0500, Jeff Garzik wrote:
> Sorry... I should also point out that I was thinking of tulip
> architecture and similar architectures, where you have a fixed number of
> Skbs allocated at all times, and that number doesn't change for the
> lifetime of the driver.
>
Jeff Garzik wrote:
>
> Andi Kleen wrote:
> >
> > Jeff Garzik <[EMAIL PROTECTED]> writes:
> >
> > > Advantages: A de-allocation immediately followed by a reallocation is
> > > eliminated, less L1 cache pollution during interrupt handling.
> > > Potentially less DMA traffic between card and host.
On Sat, Feb 24, 2001 at 07:03:38PM -0500, Jeff Garzik wrote:
> Andi Kleen wrote:
> >
> > Jeff Garzik <[EMAIL PROTECTED]> writes:
> >
> > > Advantages: A de-allocation immediately followed by a reallocation is
> > > eliminated, less L1 cache pollution during interrupt handling.
> > >
Andi Kleen wrote:
>
> Jeff Garzik <[EMAIL PROTECTED]> writes:
>
> > Advantages: A de-allocation immediately followed by a reallocation is
> > eliminated, less L1 cache pollution during interrupt handling.
> > Potentially less DMA traffic between card and host.
> >
> > Disadvantages?
>
> You
Jeff Garzik <[EMAIL PROTECTED]> writes:
> Advantages: A de-allocation immediately followed by a reallocation is
> eliminated, less L1 cache pollution during interrupt handling.
> Potentially less DMA traffic between card and host.
>
> Disadvantages?
You need a new mechanism to cope with low memory
Disclaimer: This is 2.5, repeat, 2.5 material.
I've talked about the following items with a couple people on this list
in private. I wanted to bring these up again, to see if anyone has
comments on the following suggested netdevice changes for the upcoming
2.5 development series of kernels.