As far as I remember, it is possible (although not advised) to call
schedule() while holding a spinlock on the same core:

spin_lock_irqsave(&lock, flags);
schedule();                      /* wrong: sleeping in atomic context */
spin_unlock_irqrestore(&lock, flags);

However, if you have debugging options like CONFIG_DEBUG_SPINLOCK turned
on, you will most likely get the kernel warning 'BUG: scheduling while
atomic'.

Then what can happen if this core is allowed to switch to a new process?
Consider the case where the new process also tries to acquire the same
spinlock: it can never get it and will spin for the lock forever :).
Likewise, any other cores contending for that lock will also lock up.

However, you can still detect the resulting soft lockup through the NMI
watchdog.
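To check whether the NMI watchdog is armed on a given box (sysctl path as
on mainline x86 kernels; treat this as a sketch, the exact knob has varied
across versions):

```shell
# 1 = NMI watchdog enabled, 0 = disabled
cat /proc/sys/kernel/nmi_watchdog

# Enable at runtime (needs root); it can also be set with nmi_watchdog=1
# on the kernel command line.
sysctl -w kernel.nmi_watchdog=1
```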

Rajat

On Fri, Jan 7, 2011 at 12:32 PM, Tirtha Ghosh <[email protected]> wrote:

> NMI has higher priority, since it is non-maskable, and the NMI watchdog
> can be used for debugging spinlock deadlocks (together with
> CONFIG_DEBUG_SPINLOCK).
>
> So we will hit the NMI watchdog even while the spinlock is held.
>
>
>
> On Fri, Jan 7, 2011 at 11:57 AM, Tayade, Nilesh <
> [email protected]> wrote:
>
>> Hi,
>>
>> > -----Original Message-----
>> > From: [email protected] [mailto:kernelnewbies-
>> > [email protected]] On Behalf Of Dave Hylands
>> > Sent: Friday, January 07, 2011 10:59 AM
>> > To: Viral Mehta
>> > Cc: [email protected]
>> > Subject: Re: spin_lock and scheduler confusion
>> >
>> > Hi Viral,
>> >
>> > On Wed, Jan 5, 2011 at 2:23 PM, Viral Mehta
>> > <[email protected]> wrote:
>> > >
>> > > Hi ,
>> > >
>> > > I need your help to solve below confusion.
>> > >
>> [...]
>> >
>> > Note that you can't sleep while you hold a spinlock. You're not
>> > allowed to perform any type of blocking operations. If you're holding
>> > the spinlock for any significant length of time, then you're using the
>> > wrong design.
>> >
>> > >     spin_lock_irqrestore();
>> > > 3. One of the CPU core tries to execute this code and so acquires the
>> > lock.
>> > > 4. Now, the second core also goes to execute the same piece of code and so
>> > will
>> [...]
>> >
>> > Not while it's holding the spinlock or waiting for the spinlock.
>> >
>> > > Ever if timeslice is over for the current task ?
>> >
>> > The time tick interrupt is what determines when the timeslice is over.
>> > Since you have interrupts disabled, the timer interrupt can't happen.
>> >
>> > > What if scheduler code is running on CPU core-3 and sees that
>> > > timeslice for task running on CPU core-2 has expired ?
>> >
>> > Each core only considers the timeslices for its own core.
>> >
>> > > I guess timeslice expire case is not as same as preemption. Or may be
>> > I am
>> > > terribly wrong.
>> >
>> > You shouldn't be holding a spinlock for periods of time approaching
>> > the length of a timeslice. The timer interrupt is what determines the
>> > end of a timeslice. No timer interrupt, no end of a timeslice.
>> > Preemption is also triggered by the timer interrupt, or by releasing a
>> > resource that a higher priority task is waiting for.
>>
>> Maybe my understanding is incorrect, but wouldn't we hit the NMI watchdog
>> here (assuming we are running on x86/x86_64)? The system would be locked
>> up for a long time:
>> http://lxr.linux.no/#linux+v2.6.37/Documentation/nmi_watchdog.txt
>>
>> Could someone please clarify?
>> >
>> > Dave Hylands
>>
>> --
>> Thanks,
>> Nilesh
>>
>> _______________________________________________
>> Kernelnewbies mailing list
>> [email protected]
>> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
>>
>
>