Petri Helenius wrote:
This also has the desirable side effect that stack processing will
occur on the same CPU on which the interrupt processing occurred.
This avoids inter-CPU memory bus arbitration cycles, and ensures
that you won't engage in a lot of unnecessary L1 cache busting.
Hence I ...
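To make the cache-busting point concrete, here is a minimal sketch
(mine, not from the thread) of the per-CPU counter pattern that keeps
hot-path writes on the local CPU's cache line; the CPU count and the
64-byte line size are assumptions for illustration:

#include <stdint.h>

#define NCPU            4       /* assumed CPU count, illustration only */
#define CACHE_LINE      64      /* assumed L1 line size */

/*
 * One counter per CPU, each padded out to its own cache line, so an
 * increment on one CPU never invalidates another CPU's line.
 */
struct pcpu_counter {
        uint64_t        count;
        char            pad[CACHE_LINE - sizeof(uint64_t)];
} __attribute__((aligned(CACHE_LINE)));

static struct pcpu_counter stats[NCPU];

/* Called with the current CPU id; touches only that CPU's line. */
static inline void
count_packet(int cpu)
{
        stats[cpu].count++;
}

/*
 * A reader sums the per-CPU values; this is the only cross-CPU cache
 * traffic, and it happens rarely instead of on every packet.
 */
static uint64_t
count_total(void)
{
        uint64_t total = 0;
        int i;

        for (i = 0; i < NCPU; i++)
                total += stats[i].count;
        return (total);
}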
If you are asking for paper references, then I can at least tell
you where to start; go to: http://citeseer.nj.nec.com/cs and look
for Jeff Mogul, DEC Western Research Laboratories, Mohit
Aron, Peter Druschel, Sally Floyd, Van Jacobson, SCALA,
TCP Rate halving, Receiver Livelock, RICE
Petri Helenius wrote:
[ ... Citeseer search terms for professional-strength networking ... ]
These seem quite network-heavy; I was more interested in references
on the SMP side: how cache coherency is maintained, what the overhead
of maintaining coherency is for read/write operations, and ...
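That read/write coherency overhead is easy to see directly. Here is a
small userland microbenchmark (my own sketch, not from the thread)
timing two threads that increment counters sharing one cache line
versus counters padded onto separate lines; the 64-byte line size and
iteration count are assumptions:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000UL       /* assumed iteration count */

/* Case 1: two counters packed into the same cache line. */
static struct { volatile uint64_t a, b; } shared;

/* Case 2: the same two counters, each padded to its own 64-byte line. */
static struct { volatile uint64_t v; char pad[56]; } padded[2];

static void *inc_shared_a(void *p) { uint64_t i; (void)p;
        for (i = 0; i < ITERS; i++) shared.a++; return (NULL); }
static void *inc_shared_b(void *p) { uint64_t i; (void)p;
        for (i = 0; i < ITERS; i++) shared.b++; return (NULL); }
static void *inc_padded_0(void *p) { uint64_t i; (void)p;
        for (i = 0; i < ITERS; i++) padded[0].v++; return (NULL); }
static void *inc_padded_1(void *p) { uint64_t i; (void)p;
        for (i = 0; i < ITERS; i++) padded[1].v++; return (NULL); }

/* Run two threads to completion; return elapsed wall-clock seconds. */
static double
run_pair(void *(*f1)(void *), void *(*f2)(void *))
{
        pthread_t t1, t2;
        struct timespec ts, te;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        pthread_create(&t1, NULL, f1, NULL);
        pthread_create(&t2, NULL, f2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        clock_gettime(CLOCK_MONOTONIC, &te);
        return ((te.tv_sec - ts.tv_sec) + (te.tv_nsec - ts.tv_nsec) / 1e9);
}

int
main(void)
{
        /* The first pair fights over one line; the second does not. */
        printf("same line:    %.2fs\n", run_pair(inc_shared_a, inc_shared_b));
        printf("padded lines: %.2fs\n", run_pair(inc_padded_0, inc_padded_1));
        return (0);
}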
Petri Helenius wrote:
Terry Lambert wrote:
Ah. You are receiver livelocked. Try enabling polling; it will
help up to the first stall barrier (NETISR not getting a chance
to run protocol processing to completion because of interrupt
overhead); there are two other stall barriers after that, and
another in user space.
You can get to this same point in -CURRENT, if you are using
up-to-date sources, by enabling direct dispatch, which disables
NETISR. This will help somewhat more than polling, since it will
remove the normal timer latency between receipt of a packet, and
processing of the packet through the ...
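For reference, both knobs were runtime sysctls in that era; here is a
minimal userland sketch of flipping them, assuming the 4.x/5.x-era
names kern.polling.enable (requires DEVICE_POLLING in the kernel
config) and net.isr.enable for direct dispatch; check your own
sources, since these names have moved around:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Set an integer sysctl by name; returns 0 on success (needs root). */
static int
set_int_sysctl(const char *name, int value)
{
        if (sysctlbyname(name, NULL, NULL, &value, sizeof(value)) == -1) {
                perror(name);
                return (-1);
        }
        return (0);
}

int
main(void)
{
        /* Assumed-era knobs; verify against your sys/ sources. */
        set_int_sysctl("kern.polling.enable", 1);   /* device polling */
        set_int_sysctl("net.isr.enable", 1);        /* direct dispatch */
        return (0);
}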
On Wed, Mar 05, 2003 at 10:07:35AM +0200, Petri Helenius wrote:
I think there is nothing really special about the driver there? The
mbufs are allocated in the driver and then freed when other parts in
the kernel are done with the packet? The issue I'm having is that
mb_free takes almost four ...
There's probably a tightloop of frees going on somewhere. It's tough
for me to analyze this as I cannot reproduce it. Have you tried
running your tests over loopback to see if the same thing happens?
What is the definition of tightloop? The received packet mbufs are
freed when the ...
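To illustrate what a "tight loop of frees" means here, this is a
sketch of a hypothetical rx-completion path (not the actual driver
under discussion; the ring structure and names are made up), where
every received mbuf chain is handed to m_freem() back-to-back:

#include <sys/param.h>
#include <sys/mbuf.h>

#define RX_RING_SIZE 256                /* assumed ring size */

struct rx_ring {                        /* hypothetical */
        struct mbuf *slot[RX_RING_SIZE];
        int          head;              /* next descriptor to reap */
};

static void
rx_reap(struct rx_ring *r, int ndone)
{
        int i;

        /*
         * The "tight loop": ndone consecutive m_freem() calls, each
         * taking and dropping the allocator locks in turn, with no
         * other work interleaved.
         */
        for (i = 0; i < ndone; i++) {
                m_freem(r->slot[r->head]);
                r->slot[r->head] = NULL;
                r->head = (r->head + 1) % RX_RING_SIZE;
        }
}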
In article local.mail.freebsd-current/[EMAIL PROTECTED] you write:
Petri Helenius wrote:
[ ... same suggestion about a tight loop of frees and testing over loopback ... ]
Yeah, it kinda sucks but I'm not sure how it works... when are the
mbufs freed? If they're all freed in a continuous for loop that kinda
sucks.
[ ... the mbufs are allocated in the driver and freed when the kernel is done with the packet ... ]
While you are there debugging mbuf issues, you might also want to try
this patch:
Didn't run profiling yet, but judging from the CPU utilization, this
did not change the whole picture a lot (dunno why it should, since CPU
is mostly spent freeing the mbufs, not allocating them).
Pete
This does look odd... maybe there's a leak somewhere... does "in use"
go back down to a much lower number eventually? What kind of test are
you running? "in pool" means the number sitting in the cache, while
"in use" means the number out of the cache and currently being used.
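In other words, a free normally just moves a buffer from "in use" back
to "in pool", so "in pool" staying high is not by itself a leak. A
minimal sketch of that bookkeeping (my illustration, not the actual
mb_alloc code):

#include <stdlib.h>

struct obj {
        struct obj *next;
};

struct cache {
        struct obj *free_list;  /* "in pool": cached, ready to hand out */
        int         in_pool;
        int         in_use;     /* handed out, not yet freed */
};

static struct obj *
cache_alloc(struct cache *c)
{
        struct obj *o = c->free_list;

        if (o != NULL) {
                c->free_list = o->next;
                c->in_pool--;
        } else {
                o = malloc(sizeof(*o)); /* cache empty: go to the system */
        }
        if (o != NULL)
                c->in_use++;
        return (o);
}

static void
cache_free(struct cache *c, struct obj *o)
{
        /*
         * A free does not return memory to the system; it just moves
         * the object back onto the pool, so "in pool" stays high
         * while "in use" drops.
         */
        o->next = c->free_list;
        c->free_list = o;
        c->in_use--;
        c->in_pool++;
}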
Petri Helenius (Wed, Mar 05, 2003 at 01:42:05AM +0200) wrote:
This does look odd... maybe there's a leak somewhere... does in use
go back down to a much lower number eventually? What kind of test are
you running? in pool means that that's the number in the cache
while in use
On Wed, Mar 05, 2003 at 01:42:05AM +0200, Petri Helenius wrote:
This does look odd... maybe there's a leak somewhere... does in use
go back down to a much lower number eventually? What kind of test are
you running? in pool means that that's the number in the cache
while in use
Hiten Pandya (Tue, Mar 04, 2003 at 07:01:15PM -0500) wrote:
Petri Helenius (Wed, Mar 05, 2003 at 01:42:05AM +0200) wrote:
This does look odd... maybe there's a leak somewhere... does in use
go back down to a much lower number eventually? What kind of test are
you running? in
Any comments on the high CPU consumption of mb_free? Or any other
places where I should look to improve performance?
What do you mean, high CPU consumption? The common case of mb_free()
is this:
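(The code pasted at this point was cut from the digest. As a stand-in
only, and purely my assumption rather than the committed mb_alloc
source, the common case of a free into a per-CPU cache is roughly
"lock the local bucket, push the buffer, unlock"; all names below are
hypothetical:)

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>

struct mb_bucket {                      /* hypothetical */
        struct mtx   lock;
        struct mbuf *free_list;
        int          in_use;
};

static void
mb_free_sketch(struct mb_bucket *b, struct mbuf *m)
{
        mtx_lock(&b->lock);
        m->m_next = b->free_list;       /* push onto the local free list */
        b->free_list = m;
        b->in_use--;
        mtx_unlock(&b->lock);           /* the lock/unlock pair is the
                                           main cost in this common case */
}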
According to profiling, mb_free takes 18.9% of all time consumed in
the kernel, and is almost ...
On Tue, Mar 04, 2003 at 11:34:11PM +0200, Petri Helenius wrote:
I did some profiling on -CURRENT from a few days back, and I think I
haven't figured the new tunables out, or the code is not doing what
it's supposed to, or I'm asking more than it is supposed to do, but it
seems that mb_free is ...
Yes, it's normal. The commit log clearly states that the new
watermarks do nothing for now. I have a patch that changes that, but I
haven't committed it yet because I left for vacation last Sunday and I
only returned early this Monday. Since then, I've been too busy to
clean up ...
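For context, the idea behind such watermarks (as I understand it; the
sketch below is my illustration, not the pending patch) is to bound
how many buffers a per-CPU cache may hoard before frees start draining
back to the general pool:

struct obj {
        struct obj *next;
};

/* Hypothetical drain path back to the general allocator. */
extern void global_pool_return(struct obj *);

struct wm_cache {
        struct obj *free_list;
        int         in_pool;
        int         hiwat;      /* assumed meaning of the new tunable */
};

static void
wm_free(struct wm_cache *c, struct obj *o)
{
        if (c->in_pool < c->hiwat) {
                /* Common case: keep the buffer cached locally. */
                o->next = c->free_list;
                c->free_list = o;
                c->in_pool++;
        } else {
                /* Over the high watermark: drain to the global pool,
                   so one CPU's frees don't strand memory that other
                   CPUs need. */
                global_pool_return(o);
        }
}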