On Sun, 27 May 2001, Andrea Arcangeli wrote:
> Yes the stock kernel.
yep you are right.
i had this fixed too at a certain point, there is one subtle issue: under
certain circumstances tasklets re-activate the tasklet softirq(s) from
within the softirq handler, which leads to infinite loops if
On Sun, May 27, 2001 at 12:09:29PM -0700, David S. Miller wrote:
> "live lock". What do you hope to avoid by pushing softirq processing
> into a scheduled task? I think doing that is a stupid idea.
NOTE: I'm not pushing anything out of the atomic context, I'm using
ksoftirqd only to cure the ca
On Sun, May 27, 2001 at 09:05:50PM +0200, Ingo Molnar wrote:
>
> On Sun, 27 May 2001, Andrea Arcangeli wrote:
>
> > I mean everything is fine until the same softirq is marked active
> > again under do_softirq, in such case neither the do_softirq in do_IRQ
> > will run it (because we are in the c
On Sun, 27 May 2001, Andrea Arcangeli wrote:
> I mean everything is fine until the same softirq is marked active
> again under do_softirq, in such case neither the do_softirq in do_IRQ
> will run it (because we are in the critical section and we hold the
> per-cpu locks), nor we will run it agai
On Sat, May 26, 2001 at 07:59:28PM +0200, Ingo Molnar wrote:
> the two error cases are:
>
> #1 hard-IRQ interrupts user-space code, activates softirq, and returns to
> user-space code
Before returning to userspace do_IRQ just runs do_softirq by hand from C
code.
> #2 hard-IRQ interrupts t
David S. Miller
> Ingo Molnar writes:
>> (unlike bottom halves, soft-IRQs do not preempt kernel code.)
> ...
>
> Since when do we have this rule? :-)
...
> You should check Softirqs on return from every single IRQ.
> In do_softirq() it will make sure that we won't run softirqs
> while already doi
Ingo Molnar writes:
> (unlike bottom halves, soft-IRQs do not preempt kernel code.)
...
Since when do we have this rule? :-)
> the two error cases are:
>
> #1 hard-IRQ interrupts user-space code, activates softirq, and returns to
> user-space code
>
> #2 hard-IRQ interrupts the
i've been seeing really bad average TCP latencies on certain gigabit cards
(~300-400 microseconds instead of the expected 100-200 microseconds), ever
since softnet went into the main kernel, and never found a real
explanation for it, until today.
the problem always went away when i tried to use