On Tue, Sep 12, 2000 at 11:37:46AM +0100, Alan Cox wrote:
> That code example can in theory deadlock without any patches if the CPUs
> end up locked in sync with each other and the same one always wins the test.
> It isn't likely on current x86 but other processors are a different story.
If seen
On Tue, 12 Sep 2000, Alan Cox wrote:
>That code example can in theory deadlock without any patches if the CPUs
Whoops, I really meant:

	while (test_and_set_bit(0, lock));
	/* critical section */
	mb();
	clear_bit(0, lock);

Andrea
> > while (test_and_set_bit(0, lock)) {
> > /* critical section */
> > mb();
> > clear_bit(0, lock);
> > }
> 
> > The above construct is of course discouraged when you can do
> > the same thing with a spinlock, but some places are doing that.
> 
> Hmmm, maybe the Montavista people can volunteer to clean
> up all those places in the kernel code? ;)
Rik van Riel wrote:
>
> On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
> > On Wed, 6 Sep 2000, George Anzinger wrote:
> >
> > >The times a kernel is not preemptable under this patch are:
> > >
> > >While handling interrupts.
> > >While doing "bottom half" processing.
> > >While holding a spinlock, writelock or readlock.
On Mon, 11 Sep 2000, Rik van Riel wrote:
>Hmmm, maybe the Montavista people can volunteer to clean
>up all those places in the kernel code? ;)
That would be nice and welcome independently of the preemptible kernel
indeed. The right construct to convert that stuff is
spin_is_locked/spin_trylock.
On Wed, 6 Sep 2000, George Anzinger wrote:
>The times a kernel is not preemptable under this patch are:
>
>While handling interrupts.
>While doing "bottom half" processing.
>While holding a spinlock, writelock or readlock.
>
>At all other times the algorithm allows preemption.
So it can deadlock if
George Anzinger wrote:
>
> This patch, for 2.4.0-test6, allows the kernel to be built with full
> preemption.
Neat. Congratulations.
> ...
> The measured context switch latencies with this patch
> have been as high as 12 ms, however, we are actively working to
> isolate and fix the areas of