Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>  > Gilles Chanteperdrix wrote:
>  > > On Jan 23, 2008 7:34 PM, Philippe Gerum <[EMAIL PROTECTED]> wrote:
>  > >> Gilles Chanteperdrix wrote:
>  > >>> On Jan 23, 2008 6:48 PM, Philippe Gerum <[EMAIL PROTECTED]> wrote:
>  > >>>> Gilles Chanteperdrix wrote:
>  > >>>>> Gilles Chanteperdrix wrote:
>  > >>>>>  > Please find attached a patch implementing these ideas. This
>  > >>>>>  > adds some clutter, which I would be happy to reduce. Better
>  > >>>>>  > ideas are welcome.
>  > >>>>>  >
>  > >>>>>
>  > >>>>> Ok. New version of the patch, this time split in two parts, should
>  > >>>>> hopefully make it more readable.
>  > >>>>>
>  > >>>> Ack. I'd suggest the following:
>  > >>>>
>  > >>>> - let's have a rate limiter when walking the zombie queue in
>  > >>>> __xnpod_finalize_zombies. We hold the superlock here, and what the
>  > >>>> patch also introduces is the potential for flushing more than a
>  > >>>> single TCB at a time, which might not always be a cheap operation,
>  > >>>> depending on which cra^H^Hode runs on behalf of the deletion hooks
>  > >>>> for instance. We may take for granted that no sane code would
>  > >>>> continuously create more threads than we would be able to finalize
>  > >>>> in a given time frame anyway.
>  > >>> The maximum number of zombies in the queue is
>  > >>> 1 + XNARCH_WANT_UNLOCKED_CTXSW, since a zombie is added to the queue
>  > >>> only if a deleted thread is xnpod_current_thread(), or if the XNLOCKSW
>  > >>> bit is armed.
>  > >> Ack. rate_limit = 1? I'm really reluctant to increase the WCET here,
>  > >> thread deletion isn't cheap already.
>  > > 
>  > > I am not sure that holding the nklock while we run the thread deletion
>  > > hooks is really needed.
>  > > 
>  > 
>  > Deletion hooks may currently rely on the following assumptions when
>  > running:
>  > 
>  > #1 - rescheduling is locked
>  > #2 - nklock is held, interrupts are off
>  > #3 - they run on behalf of the deletor context
>  > 
>  > The self-delete refactoring currently kills #3 because we now run the
>  > hooks after the context switch, and would also kill #2 if we did not
>  > hold the nklock (btw, enabling the nucleus debug while running with this
>  > patch should raise an abort, from xnshadow_unmap, due to the second
>  > assertion).
>  > 

Forget about this; shadows are always exited in secondary mode, so
that's fine, i.e. xnpod_current_thread() != deleted thread, hence we
should always run the deletion hooks immediately on behalf of the caller.
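
To make sure we mean the same thing, here is an untested pseudo-C sketch
of the logic I have in mind, including the rate-limited flush; the
helpers below (append_zombie, pop_zombie, run_deletion_hooks,
release_tcb, thread_lockswitch_armed) are made-up names for the sake of
the example, not the actual nucleus API:

/* Untested sketch -- all helper names are placeholders. */
static void delete_thread(xnthread_t *thread)
{
    if (thread == xnpod_current_thread() ||
        thread_lockswitch_armed(thread)) {
        /*
         * We cannot finalize here; defer the TCB to the zombie
         * queue, to be flushed after the next context switch.
         */
        append_zombie(thread);
        return;
    }

    /*
     * Common case: run the hooks immediately, on behalf of the
     * deletor, with the nklock held and irqs off.
     */
    run_deletion_hooks(thread);
    release_tcb(thread);
}

static void finalize_zombies(void)
{
    int budget = 1; /* rate limit, as agreed above */

    while (budget-- > 0 && !zombie_queue_empty()) {
        xnthread_t *zombie = pop_zombie();

        run_deletion_hooks(zombie);
        release_tcb(zombie);
    }
}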

>  > It should be possible to get rid of #3 for xnshadow_unmap (serious
>  > testing needed here), but we would have to grab the nklock from this
>  > routine anyway.
> 
> Since the unmapped task is no longer running on the current CPU, isn't
> there a chance that it gets to run on another CPU by the time we reach
> xnshadow_unmap?
> 

The unmapped task is actually still running, and do_exit() may reschedule
quite late, until kernel preemption is eventually disabled, which happens
long after the I-pipe notifier has fired. We would need the nklock to
protect the RPI management too.
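
IOW, xnshadow_unmap would basically have to do something along these
lines (rough sketch only; rpi_pop() merely stands here for whatever RPI
management needs protecting):

static void unmap_shadow(xnthread_t *thread)
{
    spl_t s;

    xnlock_get_irqsave(&nklock, s);
    /*
     * The underlying Linux task may still be running inside
     * do_exit() on another CPU at this point, so the RPI state
     * must only be touched under the nklock.
     */
    rpi_pop(thread);
    xnlock_put_irqrestore(&nklock, s);
}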

-- 
Philippe.
