Re: mbuf cache

2003-03-17 Thread Terry Lambert
Petri Helenius wrote:
[ ... Citeseer search terms for professional strength networking ... ]
> These seem quite network-heavy, I was more interested in references
> of SMP stuff and how the coherency is maintained and what is
> the overhead of maintaining the coherency in read/write operations
> an

Re: mbuf cache

2003-03-17 Thread Petri Helenius
> If you are asking for paper references, then I can at least tell
> you where to start; go to: http://citeseer.nj.nec.com/cs and look
> for "Jeff Mogul", "DEC Western Research Laboratories", "Mohit
> Aron", "Peter Druschel", "Sally Floyd", "Van Jacobson", "SCALA",
> "TCP Rate halving", "Receiver Li

Re: mbuf cache

2003-03-17 Thread Terry Lambert
Petri Helenius wrote:
> > This also has the desirable side effect that stack processing will
> > occur on the same CPU as the interrupt processing occurred. This
> > avoids inter-CPU memory bus arbitration cycles, and ensures that
> > you won't engage in a lot of unnecessary L1 cache busting. Hen

Re: mbuf cache

2003-03-16 Thread Petri Helenius
> You can get to this same point in -CURRENT, if you are using up to
> date sources, by enabling direct dispatch, which disables NETISR.
> This will help somewhat more than polling, since it will remove the
> normal timer latency between receipt of a packet, and processing of
> the packet through t
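The latency argument above (direct dispatch removes the queue-until-next-tick delay that deferred NETISR processing adds) can be sketched with a toy model. This is not FreeBSD code; the 10 ms tick (HZ=100) and the tick-boundary assumption are illustrative only:

```python
import random

TICK_MS = 10.0  # hypothetical softclock period (HZ=100), for illustration

def netisr_latency(arrival_ms):
    """Deferred dispatch: the packet sits on the NETISR queue until the
    next tick boundary before protocol processing runs."""
    next_tick = ((arrival_ms // TICK_MS) + 1) * TICK_MS
    return next_tick - arrival_ms

def direct_dispatch_latency(arrival_ms):
    """Direct dispatch: the interrupt path runs the stack to completion
    immediately, so no queueing delay is added."""
    return 0.0

random.seed(1)
arrivals = [random.uniform(0, 1000) for _ in range(10000)]
avg_netisr = sum(netisr_latency(t) for t in arrivals) / len(arrivals)
avg_direct = sum(direct_dispatch_latency(t) for t in arrivals) / len(arrivals)
print(f"avg added latency: netisr={avg_netisr:.2f} ms, direct={avg_direct:.2f} ms")
```

With uniformly random arrivals, the deferred path averages about half a tick of added latency per packet, while direct dispatch adds none.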

Re: mbuf cache

2003-03-16 Thread Terry Lambert
Petri Helenius wrote:
> Terry Lambert wrote:
> >Ah. You are receiver livelocked. Try enabling polling; it will
> >help up to the first stall barrier (NETISR not getting a chance
> >to run protocol processing to completion because of interrupt
> >overhead); there are two other stall barriers after

Re: mbuf cache

2003-03-13 Thread Petri Helenius
Terry Lambert wrote:
Ah. You are receiver livelocked. Try enabling polling; it will help up to the first stall barrier (NETISR not getting a chance to run protocol processing to completion because of interrupt overhead); there are two other stall barriers after that, and another in user space is
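The receiver-livelock mechanism Terry describes (interrupt overhead starving protocol processing) can be modeled in a few lines. The per-packet costs below are made-up numbers, chosen only to show the shape of the curve: because interrupt work preempts everything else, goodput rises with load, peaks, then collapses to zero once interrupts alone consume the whole CPU:

```python
CPU_BUDGET = 1.0    # CPU-seconds available per wall-clock second
IRQ_COST = 20e-6    # assumed per-packet interrupt cost (seconds)
PROTO_COST = 30e-6  # assumed per-packet protocol-processing cost

def goodput(offered_pps):
    """Interrupt work is paid first (it preempts the stack); whatever
    CPU remains runs protocol processing to completion."""
    irq_time = min(offered_pps * IRQ_COST, CPU_BUDGET)
    left = CPU_BUDGET - irq_time
    return min(offered_pps, left / PROTO_COST)

for pps in (10_000, 20_000, 40_000, 60_000):
    print(pps, int(goodput(pps)))
```

Past the peak, every additional offered packet costs interrupt time that would otherwise have finished a packet already received, which is exactly the stall Terry attributes to NETISR never getting to run.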

Re: mbuf cache

2003-03-07 Thread Jonathan Lemon
In article you write:
>Petri Helenius wrote:
>> > There's probably a tightloop of frees going on somewhere. It's tough
>> > for me to analyze this as I cannot reproduce it. Have you tried
>> > running your tests over loopback to see if the same thing happens?
>>
>> What is the definition

Re: mbuf cache

2003-03-07 Thread Terry Lambert
Petri Helenius wrote:
> > There's probably a tightloop of frees going on somewhere. It's tough
> > for me to analyze this as I cannot reproduce it. Have you tried
> > running your tests over loopback to see if the same thing happens?
>
> What is the definition of "tightloop"? The received

Re: mbuf cache

2003-03-07 Thread Bosko Milekic
On Fri, Mar 07, 2003 at 05:00:42PM +0200, Petri Helenius wrote:
> > There's probably a tightloop of frees going on somewhere. It's tough
> > for me to analyze this as I cannot reproduce it. Have you tried
> > running your tests over loopback to see if the same thing happens?
>
> What is t

Re: mbuf cache

2003-03-07 Thread Petri Helenius
> There's probably a tightloop of frees going on somewhere. It's tough
> for me to analyze this as I cannot reproduce it. Have you tried
> running your tests over loopback to see if the same thing happens?

What is the definition of "tightloop"? The received packet mbufs are freed when the

Re: mbuf cache

2003-03-07 Thread Bosko Milekic
On Wed, Mar 05, 2003 at 10:07:35AM +0200, Petri Helenius wrote:
> I think there is nothing really special about the driver there? The mbufs
> are allocated in the driver and then freed when other parts in the kernel
> are done with the packet? The issue I'm having is that mb_free takes
> almost fo

Re: mbuf cache

2003-03-05 Thread Petri Helenius
> > While you are there debugging mbuf issues, you might also want to try
> > this patch:
> >
>
Didn't run profiling yet, but judging from the CPU utilization, this did not change the whole picture a lot (dunno why it should since CPU is mostly spent freeing the mbufs, not allocating them)

Pete

Re: mbuf cache

2003-03-05 Thread Petri Helenius
> Yeah, it kinda sucks but I'm not sure how it works... when are the
> mbufs freed? If they're all freed in a continous for loop that kinda
> sucks.

I think there is nothing really special about the driver there? The mbufs are allocated in the driver and then freed when other parts in the k
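The allocate-in-driver, free-elsewhere pattern Petri describes interacts with a per-CPU caching allocator like mb_alloc. A toy model of that structure (not the real kernel code; class and field names are invented) shows why a burst of frees in a loop just refills the freeing CPU's own list rather than touching the general (GEN) cache:

```python
from collections import deque

class PerCpuMbufCache:
    """Toy model of an mb_alloc-style allocator: each CPU has its own
    free list, and a free returns the buffer to the freeing CPU's list,
    avoiding lock contention with the other CPUs."""
    def __init__(self, ncpus):
        self.cpu = [deque() for _ in range(ncpus)]
        self.in_use = 0

    def alloc(self, cpu):
        self.in_use += 1
        # Reuse a cached buffer if available, else "allocate" a new one.
        return self.cpu[cpu].pop() if self.cpu[cpu] else object()

    def free(self, cpu, mbuf):
        self.in_use -= 1
        self.cpu[cpu].append(mbuf)  # back onto this CPU's own list

cache = PerCpuMbufCache(ncpus=2)
bufs = [cache.alloc(0) for _ in range(1000)]  # driver fills its RX ring on CPU 0
for m in bufs:                                # stack frees them in a tight loop
    cache.free(0, m)
print(cache.in_use, len(cache.cpu[0]), len(cache.cpu[1]))
```

After the loop, nothing is in use and all 1000 buffers sit in CPU 0's cache; the cost is per-free bookkeeping, which is where the thread's profiling points at mb_free.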

Re: mbuf cache

2003-03-04 Thread Hiten Pandya
Hiten Pandya (Tue, Mar 04, 2003 at 07:01:15PM -0500) wrote:
> Petri Helenius (Wed, Mar 05, 2003 at 01:42:05AM +0200) wrote:
> >
> > > This does look odd... maybe there's a leak somewhere... does "in use"
> > > go back down to a much lower number eventually? What kind of test are
> > > you

Re: mbuf cache

2003-03-04 Thread Bosko Milekic
On Wed, Mar 05, 2003 at 01:42:05AM +0200, Petri Helenius wrote:
> >
> > This does look odd... maybe there's a leak somewhere... does "in use"
> > go back down to a much lower number eventually? What kind of test are
> > you running? "in pool" means that that's the number in the cache
> >

Re: mbuf cache

2003-03-04 Thread Hiten Pandya
Petri Helenius (Wed, Mar 05, 2003 at 01:42:05AM +0200) wrote:
> >
> > This does look odd... maybe there's a leak somewhere... does "in use"
> > go back down to a much lower number eventually? What kind of test are
> > you running? "in pool" means that that's the number in the cache
> > wh

Re: mbuf cache

2003-03-04 Thread Petri Helenius
>
> This does look odd... maybe there's a leak somewhere... does "in use"
> go back down to a much lower number eventually? What kind of test are
> you running? "in pool" means that that's the number in the cache
> while "in use" means that that's the number out of the cache
> currently

Re: mbuf cache

2003-03-04 Thread Bosko Milekic
w: There is no way there is enough
> traffic on the system to allocate 7075 mbufs when this netstat -m was taken.
>
> mbuf usage:
> GEN cache: 0/0 (in use/in pool)
> CPU #0 cache: 7075/8896 (in use/in pool)
> CPU #1 cache: 1119/4864 (in use/

Re: mbuf cache

2003-03-04 Thread Petri Helenius
he: 0/0 (in use/in pool)
CPU #0 cache: 7075/8896 (in use/in pool)
CPU #1 cache: 1119/4864 (in use/in pool)
Total: 8194/13760 (in use/in pool)
Mbuf cache high watermark: 8192
Mbuf cache low watermark: 128

Pete

> > bucket = mb_list->

Re: mbuf cache

2003-03-04 Thread Bosko Milekic
On Wed, Mar 05, 2003 at 12:24:27AM +0200, Petri Helenius wrote:
> >
> > Yes, it's normal. The commit log clearly states that the new
> > watermarks do nothing for now. I have a patch that changes that but I
> > haven't committed it yet because I left for vacation last Sunday and I
> >

Re: mbuf cache

2003-03-04 Thread Petri Helenius
>
> Yes, it's normal. The commit log clearly states that the new
> watermarks do nothing for now. I have a patch that changes that but I
> haven't committed it yet because I left for vacation last Sunday and I
> only returned early this Monday. Since then, I've been too busy to
> cle
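Bosko's point is that the committed watermarks were not yet enforced. For readers unfamiliar with the idea, here is a sketch of what a high watermark is generally meant to do in a caching allocator (this is an illustration of the concept, not Bosko's unpublished patch): once a per-CPU pool reaches the high watermark, further frees hand buffers back to the general pool instead of caching them.

```python
class WatermarkedCache:
    """Concept sketch of high-watermark enforcement in a free-list
    cache (invented names; the thread notes the real code did not yet
    do this): frees beyond `high` overflow to the general pool."""
    def __init__(self, high=4096, low=128):
        self.high, self.low = high, low
        self.pool = []      # locally cached free buffers
        self.gen_pool = 0   # buffers handed back to the global allocator

    def free(self, mbuf):
        if len(self.pool) >= self.high:
            self.gen_pool += 1       # overflow: return it globally
        else:
            self.pool.append(mbuf)   # cache it for cheap reuse

c = WatermarkedCache(high=4096)
for i in range(5000):
    c.free(i)
print(len(c.pool), c.gen_pool)
```

With enforcement, 5000 frees leave exactly 4096 buffers cached and 904 returned; without it, all 5000 would pile up in the cache, which matches the over-watermark numbers reported earlier in the thread.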

Re: mbuf cache

2003-03-04 Thread Bosko Milekic
On Tue, Mar 04, 2003 at 11:34:11PM +0200, Petri Helenius wrote:
>
> I did some profiling on -CURRENT from a few days back, and I think I haven't
> figured the new tunables out or the code is not doing what it's supposed to
> or I'm asking more than it is supposed to do but it seems that mb_free
>

mbuf cache

2003-03-04 Thread Petri Helenius
mbuf usage:
GEN cache: 56/256 (in use/in pool)
CPU #0 cache: 8138/12064 (in use/in pool)
Total: 8194/12320 (in use/in pool)
Mbuf cache high watermark: 4096
Mbuf cache low watermark: 128
Maximum possible: 51200
Allocated mbuf types:
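The arithmetic behind this listing is simple to check: the Total line is the sum of the per-cache counters, and the in-use total sits well above the stated high watermark of 4096, which is what starts the whole thread. A quick check using the numbers above:

```python
# Per-cache (in use, in pool) pairs copied from the netstat -m output above.
caches = {"GEN": (56, 256), "CPU #0": (8138, 12064)}

total_in_use = sum(u for u, _ in caches.values())
total_in_pool = sum(p for _, p in caches.values())
print(f"Total: {total_in_use}/{total_in_pool} (in use/in pool)")

high_watermark = 4096
print("in use exceeds high watermark:", total_in_use > high_watermark)
```

The computed total, 8194/12320, matches the Total line, and the in-use count is roughly double the watermark, consistent with Bosko's later explanation that the watermark was not yet enforced.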