Philippe Gerum wrote:
> On Sun, 2006-12-03 at 19:01 +0100, Jan Kiszka wrote:
>> Philippe Gerum wrote:
>>> On Sun, 2006-12-03 at 16:05 +0100, Jan Kiszka wrote:
>>>> Philippe Gerum wrote:
>>>>> On Sun, 2006-12-03 at 15:13 +0100, Jan Kiszka wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I came across a few things in the latest 2.6.19-i386-1.6-01 patch:
>>>>>>
>>>>>> The usage of __ipipe_pipelock in __ipipe_common_info_proc is broken
>>>>>> (a raw lock is used as a Linux lock here), and I do not see any
>>>>>> volatile data it could protect anyway. So let's remove it.
>>>>> The interrupt status word, and whether any virtual interrupt is
>>>>> allocated or not, are the volatile data protected by this lock on an
>>>>> SMP system. Since this is a common spinlock with no interrupt control
>>>>> required, which is only used over the Linux domain (/proc handler),
>>>>> you don't need to go for the _hw() form of it.
>>>> As far as I can see, nothing prevents the other users of
>>>> __ipipe_pipelock from being executed over a non-root domain (IRQ
>>>> registration in Xenomai context is allowed, no?).
>>>>
>>> Indeed, this is also the sense of my second reply:
>>> https://mail.gna.org/public/adeos-main/2006-12/msg00004.html
>>>
>>> Which means that our problem is more an issue regarding preemption by
>>> interrupts.
>>>
>>>> But I have to re-check what data used by __ipipe_common_info_proc
>>>> could actually be released concurrently (I'm not considering
>>>> inconsistency a problem here). I didn't see anything on first review.
>>>>
>>>> Still, this kind of merging of _hw with non-_hw spinlock operations is
>>>> fishy in my eyes.
>>>>
>>> It's not, as long as no interrupt control is involved, provided you
>>> accept the possible side-effects regarding kernel preemption, which
>>> /proc handlers do. What would have been really problematic is a
>>> mismatch between the spin_lock_irqsave_hw and spin_lock_irqsave forms,
>>> and what is really a bug is the current lack of protection wrt
>>> interrupts.
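
For illustration, a minimal sketch of the _hw vs. non-_hw mismatch
described above. The _hw lock forms come from the I-pipe patch; the lock
name and the two call sites are hypothetical, purely for illustration:

#include <linux/spinlock.h>

extern spinlock_t some_lock;    /* hypothetical lock shared by both sides */

static void linux_domain_side(void)
{
        unsigned long flags;

        /* Linux form: masks interrupts only virtually, for the root
         * domain; a head domain (e.g. Xenomai) can still preempt the
         * holder on this very CPU. */
        spin_lock_irqsave(&some_lock, flags);
        /* ... touch the shared state ... */
        spin_unlock_irqrestore(&some_lock, flags);
}

static void head_domain_side(void)
{
        unsigned long flags;

        /* _hw form: masks interrupts in the hardware. If this runs from
         * a head-domain IRQ that preempted linux_domain_side() while the
         * lock was held, it spins forever on the same CPU - deadlock.
         * This is the mismatch to avoid. */
        spin_lock_irqsave_hw(&some_lock, flags);
        /* ... */
        spin_unlock_irqrestore_hw(&some_lock, flags);
}
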
>> I re-checked the code, and I can only repeat that I see ZERO need for
>> this lock here. All we are reading is the control mask from static IRQ
>> arrays + some bits from the related I-pipe domain.
> 
> We do want the IRQ descriptors to be in a stable state while dumping
> their contents, wrt ipipe_virtualize_irq() and ipipe_control_irq(). It
> could be seen as a conservative measure, but it's saner than assuming
> that fields from the IRQ arrays will never relate to each other wrt what
> is dumped by the /proc handler. However, this global spinlock should
> become a per-domain lock; there is no need for global contention here.

/me still not convinced. Neither ipipe_virtualize_irq() nor
ipipe_control_irq() is called at a rate where users would actually see
inconsistent dumps here. But if you really see a need, let us please use
a sequence-lock-like mechanism, as sketched below. It's not worth
stalling anyone just for the sake of debug /proc output.
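
For concreteness, a minimal sketch of such a scheme, using a seqcount:
writers keep their existing serialization and only bump a counter, while
the /proc reader retries instead of stalling anyone. The counter could
equally live per domain, as suggested above. The ipipe_irq_seq name and
the descriptor layout are assumptions for illustration, not the actual
I-pipe code:

#include <linux/seqlock.h>

/* Hypothetical sequence counter guarding the IRQ descriptors. */
static seqcount_t ipipe_irq_seq = SEQCNT_ZERO;

/* Writer side, e.g. in ipipe_virtualize_irq(), which already holds
 * its own hard serialization -- only the counter updates are added. */
static void update_irq_control(struct ipipe_domain *ipd,
                               unsigned irq, unsigned long control)
{
        write_seqcount_begin(&ipipe_irq_seq);
        ipd->irqs[irq].control = control;       /* assumed field layout */
        write_seqcount_end(&ipipe_irq_seq);
}

/* Reader side, in the /proc handler: loop until a consistent snapshot
 * has been read; no writer is ever delayed by the dump. */
static unsigned long read_irq_control(struct ipipe_domain *ipd,
                                      unsigned irq)
{
        unsigned long control;
        unsigned seq;

        do {
                seq = read_seqcount_begin(&ipipe_irq_seq);
                control = ipd->irqs[irq].control;
        } while (read_seqcount_retry(&ipipe_irq_seq, seq));

        return control;
}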

> 
>> If there is actually something to protect, then it would be calling
>> this proc handler vs. unregistering the domain (and its proc entry) -
>> but that is only feasible with something like synchronize_sched, i.e.
>> waiting a grace period after unregistering so that all handlers are
>> through. A rather uncritical race which exists in a lot of /proc code.
>>
> 
> This race could crash the box, should the descriptor of the
> unregistered domain belong to a module being unloaded. Since
> ipipe_register_domain() grabs the critical lock, masking IRQs in
> the /proc handler would do the trick, but this is a fairly high price to
> pay for running such a routine. 
> 

Better no IRQ lock for this.

Well, my synchronize_sched idea is not truly safe either (unless the
proc handler were called under rcu_read_lock - I don't think this is
the case). I really can't tell how to safely clean up proc entries that
have volatile data structures attached. I once asked on LKML for a
correct pattern but got no reply. A sketch of what could work follows
below.
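
Assuming one controls the handler itself, one pattern that could close
the race is to publish the domain through an RCU-protected pointer,
take rcu_read_lock() inside the /proc handler, and wait one grace
period on unregistration before the descriptor may go away. All names
below are hypothetical, a sketch only, not actual I-pipe code:

#include <linux/proc_fs.h>
#include <linux/rcupdate.h>

static struct ipipe_domain *live_domain;        /* RCU-protected */

static int domain_read_proc(char *page, char **start, off_t off,
                            int count, int *eof, void *data)
{
        struct ipipe_domain *ipd;
        int len = 0;

        rcu_read_lock();
        ipd = rcu_dereference(live_domain);
        if (ipd)        /* NULL if unregistration won the race */
                len = sprintf(page, "%s\n", ipd->name); /* assumed field */
        rcu_read_unlock();

        *eof = 1;
        return len;
}

static void domain_unregister(struct ipipe_domain *ipd)
{
        rcu_assign_pointer(live_domain, NULL);
        remove_proc_entry(ipd->name, NULL);     /* parent dir simplified */
        synchronize_rcu();      /* all in-flight handlers are through */
        /* ipd - and the module owning it - may now safely go away */
}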

Jan
