Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-30 Thread Dmitry Adamushko
Hello Jan,

I apologize for the huge reply latency.


 Yeah, that might explain why my attempt to parse it manually already
 failed: What is xnintr_sync_stat_references? :)

yeah.. it was supposed to be xnintr_sync_stat_refs()


  the 'prev = xnstat_get_current()' reference is also tracked, as
  reference accounting becomes a part of the xnstat interface (not sure
  we need it, though).

 Care to elaborate on _why_ you think we need this, specifically if it
 adds new atomic counters?

Forget about it, it was a wrong approach. We do reschedule in
xnintr_*_handler(), and if 'prev->refs' is non-zero and a newly
scheduled thread calls xnstat_runtime_synch() (well, as it could be
in theory with this interface) before the first thread is deleted..
oops. So this 'referencing' scheme is bad anyway.

Note that if a real reschedule took place in xnpod_schedule(), we
actually don't need to _restore_ 'prev' when we get control back.. it
must already have been restored by xnpod_schedule() when the preempted
thread ('prev' is normally the thread in whose context the interrupt
occurred) gets the CPU back. If I'm not missing something.. hm?

...
if (--sched->inesting == 0 && xnsched_resched_p())
	xnpod_schedule();

(*) 'sched->current_account' should already be == 'prev' in case
xnpod_schedule() took place

xnltt_log_event(xeno_ev_iexit, irq);
xnstat_runtime_switch(sched, prev);
...

The simpler scheme with xnstat_* accounting would be to account only the
time spent in intr->isr() to the corresponding intr->stat[cpu].account...
This way, all accesses to the latter would be inside
xnlock_{get,put}(&xnirqs[irq].lock) sections [*].

It's precision (although that's arguable to some extent) vs.
simplicity (e.g. no need for any xnintr_sync_stat_references()). I
would still prefer this approach :-)
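
To illustrate: a minimal sketch of that simpler scheme, assuming the
xnirqs[] array and rthal_irq_cookie() from the patches quoted below;
xnstat_runtime_add() is an assumed helper that charges a tsc delta to
an account, not an existing interface.

static void xnintr_irq_handler_sketch(unsigned irq, void *cookie)
{
	xnintr_t *intr;
	xnticks_t start;
	int s;

	xnlock_get(&xnirqs[irq].lock);

	/* Reload the cookie under the per-IRQ lock (SMP case). */
	intr = rthal_irq_cookie(&rthal_domain, irq);

	start = xnarch_get_cpu_tsc();
	s = intr->isr(intr);

	/* Charge only the ISR execution time; no reference to
	   intr->stat[] survives past the unlock below. */
	xnstat_runtime_add(&intr->stat[xnarch_current_cpu()].account,
			   xnarch_get_cpu_tsc() - start);

	xnlock_put(&xnirqs[irq].lock);

	/* Propagation and rescheduling would follow here, outside
	   the lock, exactly as in the patches below; 's' feeds the
	   XN_ISR_* checks. */
}

The propagate/reschedule path would then run with no outstanding
pointer into 'intr', which is what makes xnintr_sync_stat_references()
unnecessary in this variant.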

Otherwise, so far I don't see any solution much nicer than the one
illustrated by your first patch.


 Uhh, be careful, I burned my fingers with similar things recently as
 well. You have to make sure that all types are resolvable for _all_
 includers of that header. Otherwise, I'm fine with cleanups like this.
 But I think there was once a reason for #define.

yeah.. now I recall it as well :-)



 Thanks,
 Jan


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-25 Thread Jan Kiszka
Hi Dmitry,

Dmitry Adamushko wrote:
 Hello Jan,
 
 well, take it with 2 huge grains of salt.. it's not even compile-tested :-)

Yeah, that might explain why my attempt to parse it manually already
failed: What is xnintr_sync_stat_references? :)

 
 I don't think I've got it really nicer than your first patch.. the only
 additional point is that the 'prev = xnstat_get_current()' reference is
 also tracked, as reference accounting becomes a part of the xnstat
 interface (not sure we need it, though).

Care to elaborate on _why_ you think we need this, specifically if it
adds new atomic counters? I'm too lazy to look for the probably valid
reason on my own. Apropos atomic use counters: have you already
considered all memory barrier issues for your locking scheme?
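
To make the question concrete: a use-counter scheme would need barrier
pairings roughly like the following. This is a sketch only; the 'refs'
field and the helper names are made up, and plain Linux atomics are
used for brevity.

/* Keep an xnintr object alive while some CPU uses it. */
static inline void xnintr_get_ref(xnintr_t *intr)
{
	atomic_inc(&intr->refs);
	smp_mb();	/* make the reference visible before using the object */
}

static inline void xnintr_put_ref(xnintr_t *intr)
{
	smp_mb();	/* complete all uses before the drop becomes visible */
	atomic_dec(&intr->refs);
}

/* Teardown side: wait until no CPU holds a reference anymore. */
static inline void xnintr_sync_refs(xnintr_t *intr)
{
	while (atomic_read(&intr->refs) > 0)
		cpu_relax();
	smp_mb();
}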

 
 Well (#2), I have to say that the whole intr subsystem doesn't look
 that nice (maybe partly due to being overloaded with xnstat details)..
 so I think it'd require a closer look.

Let's get it right first, then think about aesthetic or even concrete
optimisations again.

 
 Anyway, here is what I've got out of your patch so far.. the main issue
 is whether we go this way or whether it just makes things even uglier.

Again, correctness and race-avoidance worry me first right now.

 
 (a side effect: I transformed the xnstat_* macros into 'static inline'
 functions.. I guess this way it's a bit nicer)

Uhh, be careful, I burned my fingers with similar things recently as
well. You have to make sure that all types are resolvable for _all_
includers of that header. Otherwise, I'm fine with cleanups like this.
But I think there was once a reason for #define.

Thanks,
Jan


PS: Please note my latest refactoring check-in to SVN, touching stat.h.





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-22 Thread Dmitry Adamushko
On 22/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ ... ]

 Only compile-tested under various .configs. Any comment welcome.


 @@ -76,7 +102,7 @@ static inline void xnintr_stat_counter_d
  static void xnintr_irq_handler(unsigned irq, void *cookie)
  {
 xnsched_t *sched = xnpod_current_sched();
 -   xnintr_t *intr = (xnintr_t *)cookie;
 +   xnintr_t *intr;
 xnstat_runtime_t *prev;
 xnticks_t start;
 int s;
 @@ -86,6 +112,16 @@ static void xnintr_irq_handler(unsigned
 xnltt_log_event(xeno_ev_ienter, irq);

 ++sched->inesting;
 +
 +   xnlock_get(&xnirqs[irq].lock);
 +
 +#ifdef CONFIG_SMP
 +   /* In SMP case, we have to reload the cookie under the per-IRQ lock
 +  to avoid racing with xnintr_detach. */
 +   intr = rthal_irq_cookie(&rthal_domain, irq);
 +#else
 +   intr = cookie;
 +#endif
 s = intr->isr(intr);

I guess, 'intr' can be NULL here.

Could you please send me, attached (non-inlined), a combo patch on top
of the trunk version (as I see it, this one seems to be on top of your
previous one)? I'll try to come up with some solution during this
weekend.


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-22 Thread Jan Kiszka
Jan Kiszka wrote:
 Dmitry Adamushko wrote:
 On 21/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ .. ]
 surprise-surprise.. sure, one way or another it must be fixed. heh..
 too many bugs (and I don't even want to say who's responsible) :-/
 I wouldn't accept all the responsibility if I were you.
  I have no problems in this respect. I was just being a bit sarcastic
  as a way of saying it's my fault.


 It's a sign that the design might be
 too complex now
 frankly speaking, I don't think it's really complex :)


 Things get worse, at least with XENO_OPT_STATS: Due to the runtime
 statistics collection, we may end up with dangling references to our
 xnintr object even after xnintr_shirq_unlock().

 We would actually need some kind of IRQ_INPROGRESS + synchronize_irq()
 at i-pipe level. But we have the problem, in contrast to the kernel,
 that we reschedule from inside the handler (more precisely: at nucleus
 level), thus synchronize_irq() would not just wait on some simple
 handler to exit...
  Yeah.. we already had conversations on this topic (I think with
  Philippe) and, if I recall correctly, that was one of the reasons.
  That's why synchronize_irq() is in the nucleus layer.
 
 Hmm, then we may be forced to get the cookie out of ipipe's hands again
 and put it back into a nucleus IRQ array, i.e. move the cookie
 dereferencing under a nucleus-managed per-IRQ lock. But there is still
 the issue with xnsched_t::current_account...
 

Hell, what a mess...

Here is a first attempt to address the remaining issues. Take it with a
huge grain of salt; I haven't yet made up my mind whether it really
solves our races and whether it isn't too ugly.

Please direct special attention to

 - xnintr_sync_stat_references (the comment says it all)

 - xnintr_edge_shirq_handler (unrelated change, but I somehow felt like
   this is nicer)

Only compile-tested under various .configs. Any comment welcome.

Thanks,
Jan
Index: xenomai/ksrc/nucleus/intr.c
===================================================================
--- xenomai.orig/ksrc/nucleus/intr.c
+++ xenomai/ksrc/nucleus/intr.c
@@ -41,6 +41,18 @@
 
 DEFINE_PRIVATE_XNLOCK(intrlock);
 
+typedef struct xnintr_irq {
+
+   DECLARE_XNLOCK(lock);
+
+#if defined(CONFIG_XENO_OPT_SHIRQ_LEVEL) || defined(CONFIG_XENO_OPT_SHIRQ_EDGE)
+   xnintr_t *handlers;
+   int unhandled;
+#endif
+} ____cacheline_aligned_in_smp xnintr_irq_t;
+
+static xnintr_irq_t xnirqs[RTHAL_NR_IRQS];
+
 #ifdef CONFIG_XENO_OPT_STATS
 xnintr_t nkclock;  /* Only for statistics */
 int xnintr_count = 1;  /* Number of attached xnintr objects + nkclock */
@@ -63,9 +75,23 @@ static inline void xnintr_stat_counter_d
xnarch_memory_barrier();
xnintr_list_rev++;
 }
+
+static inline void xnintr_sync_stat_references(xnintr_t *intr)
+{
+   int cpu;
+
+   for_each_online_cpu(cpu) {
+   xnsched_t *sched = xnpod_sched_slot(cpu);
+
+   /* I don't feel very well... hacking this. */
+   while (sched->current_account == &intr->stat[cpu].account)
+   cpu_relax();
+   }
+}
 #else
 static inline void xnintr_stat_counter_inc(void) {}
 static inline void xnintr_stat_counter_dec(void) {}
+static inline void xnintr_sync_stat_references(xnintr_t *intr) {}
 #endif /* CONFIG_XENO_OPT_STATS */
 
 /*
@@ -76,7 +102,7 @@ static inline void xnintr_stat_counter_d
 static void xnintr_irq_handler(unsigned irq, void *cookie)
 {
xnsched_t *sched = xnpod_current_sched();
-   xnintr_t *intr = (xnintr_t *)cookie;
+   xnintr_t *intr;
xnstat_runtime_t *prev;
xnticks_t start;
int s;
@@ -86,6 +112,16 @@ static void xnintr_irq_handler(unsigned 
xnltt_log_event(xeno_ev_ienter, irq);
 
++sched->inesting;
+
+   xnlock_get(&xnirqs[irq].lock);
+
+#ifdef CONFIG_SMP
+   /* In SMP case, we have to reload the cookie under the per-IRQ lock
+  to avoid racing with xnintr_detach. */
+   intr = rthal_irq_cookie(&rthal_domain, irq);
+#else
+   intr = cookie;
+#endif
s = intr->isr(intr);
 
if (unlikely(s == XN_ISR_NONE)) {
@@ -102,6 +138,8 @@ static void xnintr_irq_handler(unsigned 
intr->unhandled = 0;
}
 
+   xnlock_put(&xnirqs[irq].lock);
+
if (s & XN_ISR_PROPAGATE)
xnarch_chain_irq(irq);
else if (!(s & XN_ISR_NOENABLE))
@@ -161,18 +199,6 @@ void xnintr_clock_handler(void)
 
 #if defined(CONFIG_XENO_OPT_SHIRQ_LEVEL) || defined(CONFIG_XENO_OPT_SHIRQ_EDGE)
 
-typedef struct xnintr_shirq {
-
-   xnintr_t *handlers;
-   int unhandled;
-#ifdef CONFIG_SMP
-   xnlock_t lock;
-#endif
-
-} xnintr_shirq_t;
-
-static xnintr_shirq_t xnshirqs[RTHAL_NR_IRQS];
-
 #if defined(CONFIG_XENO_OPT_SHIRQ_LEVEL)
 /*
  * Low-level interrupt handler dispatching the user-defined ISRs for
@@ -184,7 +210,7 @@ static void xnintr_shirq_handler(unsigne
xnsched_t *sched = xnpod_current_sched();
xnstat_runtime_t *prev;
xnticks_t start;
-   


Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Dmitry Adamushko
On 20/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ ... ]
  xnintr_attach/detach()).. your approach does increase a worst case
  length of the lock(intrlock) -> unlock(intrlock) section... but
  that's unlikely to be critical.
 
  I think, the patch I sent before addresses a reported earlier case
  with xnintr_edge_shirq_handler().. but your approach does make code
  shorter (and likely simpler), right? I don't see any problems it would
  possibly cause (maybe I'm missing something yet :)

 I must confess I didn't get your idea immediately. Later on (cough,
 after hacking my own patch, cough) I had a closer look, and it first
 appeared fairly nice, despite the additional ifs. But then I realised
 that end == old_end may produce false positives in case we get the
 same (and only the same) IRQ several times in a row. So, I'm afraid
 we have to live with only one candidate. :-(

It's not clear; can you elaborate on your (non-working) scenario in more
detail? :-)

Note: I sent the second patch with the following correction:

...
} else if (code == XN_ISR_NONE && end == NULL) {
end = intr;
+old_end = NULL;
}
...

I don't see why this idea can't work (even if I made yet another error).
Under some circumstances the following code will be triggered to end the
loop even when 'end' is still valid:

if (end && old_end == end)
intr = NULL;

_but_ that would be exactly the case when intr == end and hence, the
loop would be finished anyway by the while (intr && intr != end)
condition.. So sometimes it acts as yet _another_ check to exit the
loop; IOW, while (intr && intr != end) may be converted to just
while (intr).
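
For reference, the loop could be reassembled from the fragments above
roughly as follows. This is a sketch: the exact placement of the
old_end update is an assumption (here: on every wrap-around), and the
field names follow the structs quoted elsewhere in the thread.

	xnintr_t *intr = shirq->handlers, *end = NULL, *old_end = NULL;

	while (intr && intr != end) {
		int code = intr->isr(intr);

		if (code & XN_ISR_HANDLED) {
			end = NULL;		/* progress: drop the candidate */
		} else if (code == XN_ISR_NONE && end == NULL) {
			end = intr;		/* termination candidate */
			old_end = NULL;		/* the one-line correction above */
		}

		if (end && old_end == end) {
			intr = NULL;	/* a complete pass brought no progress */
		} else {
			intr = intr->next;
			if (intr == NULL) {	/* wrap for another pass */
				old_end = end;
				intr = shirq->handlers;
			}
		}
	}

If I read the in-tree edge handler right, it additionally bounds the
number of passes with a counter, which would also cap this loop in
case of a stuck interrupt line.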



 Jan


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Jan Kiszka
Dmitry Adamushko wrote:
 On 20/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ ... ]
  xnintr_attach/detach()).. your approach does increase a worst case
   length of the lock(intrlock) -> unlock(intrlock) section... but
  that's unlikely to be critical.
 
  I think, the patch I sent before addresses a reported earlier case
  with xnintr_edge_shirq_handler().. but your approach does make code
  shorter (and likely simpler), right? I don't see any problems it would
   possibly cause (maybe I'm missing something yet :)

 I must confess I didn't get your idea immediately. Later on (cough,
 after hacking my own patch, cough) I had a closer look, and it first
 appeared fairly nice, despite the additional ifs. But then I realised
  that end == old_end may produce false positives in case we get the
  same (and only the same) IRQ several times in a row. So, I'm afraid
  we have to live with only one candidate. :-(
 
 It's not clear; can you elaborate on your (non-working) scenario in
 more detail? :-)
 
 Note: I sent the second patch with the following correction :
 
 ...
 } else if (code == XN_ISR_NONE && end == NULL) {
end = intr;
 +old_end = NULL;
}
 ...
 
 I don't see why this idea can't work (even if I made yet another error).
 Under some circumstances the following code will be triggered to end the
 loop even when 'end' is still valid:
 
 if (end && old_end == end)
intr = NULL;
 
 _but_ that would be exactly the case when intr == end and hence, the
 loop would be finished anyway by the while (intr && intr != end)
 condition.. So sometimes it acts as yet _another_ check to exit the
 loop; IOW, while (intr && intr != end) may be converted to just
 while (intr).

Yeah, looks like you are right again; it should work as well.


Unfortunately, that whole thing makes me meditate about the whole issue
again. And now I wonder why we make such a fuss about locking for shared
IRQs (where it should be correct with any of the new patches) while we
do not care about the non-shared, standard scenario. I currently see no
reason why we wouldn't race when some non-shared IRQ pops up on one CPU
while deregistration takes place on another. In this case as well, the
xnintr object must remain valid for the whole handler execution time. Do
we need

struct xnintr_irq {
xnintr_t *handler;
whatever-lock;
} xnirqs[RTHAL_NR_IRQS];

unconditionally? Or should we rather move the whole locking thing into
i-pipe somehow? Argh.
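
For illustration, with that struct the handler and detach paths could
pair up as follows; a sketch only, the function bodies and the 'lock'
member name are made up, and the rescheduling/current_account problem
discussed elsewhere in the thread is untouched by it.

static void xnintr_irq_handler_sketch(unsigned irq, void *cookie)
{
	xnintr_t *intr;

	xnlock_get(&xnirqs[irq].lock);

	/* Fetch the handler under the lock, so a concurrent
	   xnintr_detach() on another CPU cannot free it while
	   it executes here. */
	intr = xnirqs[irq].handler;
	if (intr)
		intr->isr(intr);

	xnlock_put(&xnirqs[irq].lock);
}

static void xnintr_detach_sketch(unsigned irq)
{
	spl_t s;

	/* Taking the same per-IRQ lock serializes against any
	   in-flight invocation of the handler on another CPU. */
	xnlock_get_irqsave(&xnirqs[irq].lock, s);
	xnirqs[irq].handler = NULL;
	xnlock_put_irqrestore(&xnirqs[irq].lock, s);
}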

Well, and I wonder what this xnarch_memory_barrier() at each handler
entry is for. It seems to have been there for ages; I don't see why
right now. (The kernel has a golden rule for this: no barrier without
comments!)

Sigh, the never ending IRQ story...

Jan





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Dmitry Adamushko
On 21/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ ... ]

 Unfortunately, that whole thing makes me meditate about the whole issue
 again. And now I wonder why we make such a fuss about locking for shared
 IRQs (where it should be correct with any of the new patches) while we
 do not care about the non-shared, standard scenario.

surprise-surprise.. sure, one way or another it must be fixed. heh..
too many bugs (and I don't even want to say who's responsible) :-/

 Sigh, the never ending IRQ story...

should be reviewed.



 Jan


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Jan Kiszka
Dmitry Adamushko wrote:
 On 21/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
 [ ... ]

  Unfortunately, that whole thing makes me meditate about the whole issue
 again. And now I wonder why we make such a fuss about locking for shared
 IRQs (where it should be correct with any of the new patches) while we
 do not care about the non-shared, standard scenario.
 
 surprise-surprise.. sure, one way or another it must be fixed. heh..
 too many bugs (and I don't even want to say who's responsible) :-/

I wouldn't accept all the responsibility if I were you. We all went
through this code several times. It's a sign that the design might be
too complex now, and I feel I have some share in this.

 
 Sigh, the never ending IRQ story...
 
 should be reviewed.
 

Things get worse, at least with XENO_OPT_STATS: Due to the runtime
statistics collection, we may end up with dangling references to our
xnintr object even after xnintr_shirq_unlock().

We would actually need some kind of IRQ_INPROGRESS + synchronize_irq()
at i-pipe level. But we have the problem, in contrast to the kernel,
that we reschedule from inside the handler (more precisely: at nucleus
level), thus synchronize_irq() would not just wait on some simple
handler to exit...

Jan





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Dmitry Adamushko
On 21/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
  [ .. ]
  surprise-surprise.. sure, one way or another it must be fixed. heh..
  too many bugs (and I don't even want to say who's responsible) :-/

 I wouldn't accept all the responsibility if I were you.

I have no problems in this respect. I was just being a bit sarcastic
as a way of saying it's my fault.


 It's a sign that the design might be
 too complex now

frankly speaking, I don't think it's really complex :)


 Things get worse, at least with XENO_OPT_STATS: Due to the runtime
 statistics collection, we may end up with dangling references to our
 xnintr object even after xnintr_shirq_unlock().

 We would actually need some kind of IRQ_INPROGRESS + synchronize_irq()
 at i-pipe level. But we have the problem, in contrast to the kernel,
 that we reschedule from inside the handler (more precisely: at nucleus
 level), thus synchronize_irq() would not just wait on some simple
 handler to exit...

Yeah.. we already had conversations on this topic (I think with
Philippe) and, if I recall correctly, that was one of the reasons.
That's why synchronize_irq() is in the nucleus layer.
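
For comparison, the kernel-style scheme would look roughly like this;
all names here are made up, and the wait loop at the end is exactly
what breaks down once the handler may reschedule instead of simply
exiting.

typedef struct xnintr_sync {
	atomic_t active;	/* CPUs currently inside the handler */
} xnintr_sync_t;

static inline void xnintr_sync_enter(xnintr_sync_t *sync)
{
	atomic_inc(&sync->active);
	smp_mb();
}

static inline void xnintr_sync_exit(xnintr_sync_t *sync)
{
	smp_mb();
	atomic_dec(&sync->active);
}

/* The detach side spins until no handler instance runs anymore,
   like synchronize_irq(). With in-handler rescheduling, 'active'
   may stay raised indefinitely: hence the whole discussion. */
static inline void xnintr_sync_wait(xnintr_sync_t *sync)
{
	while (atomic_read(&sync->active) > 0)
		cpu_relax();
}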



 Jan


-- 
Best regards,
Dmitry Adamushko



Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Jan Kiszka
Dmitry Adamushko wrote:
 On 21/06/07, Jan Kiszka [EMAIL PROTECTED] wrote:
  [ .. ]
  surprise-surprise.. sure, one way or another it must be fixed. heh..
  too many bugs (and I don't even want to say who's responsible) :-/

 I wouldn't accept all the responsibility if I were you.
 
  I have no problems in this respect. I was just being a bit sarcastic
  as a way of saying it's my fault.
 
 
 It's a sign that the design might be
 too complex now
 
 frankly speaking, I don't think it's really complex :)
 
 
 Things get worse, at least with XENO_OPT_STATS: Due to the runtime
 statistics collection, we may end up with dangling references to our
 xnintr object even after xnintr_shirq_unlock().

 We would actually need some kind of IRQ_INPROGRESS + synchronize_irq()
 at i-pipe level. But we have the problem, in contrast to the kernel,
 that we reschedule from inside the handler (more precisely: at nucleus
 level), thus synchronize_irq() would not just wait on some simple
 handler to exit...
 
  Yeah.. we already had conversations on this topic (I think with
  Philippe) and, if I recall correctly, that was one of the reasons.
  That's why synchronize_irq() is in the nucleus layer.

Hmm, then we may be forced to get the cookie out of ipipe's hands again
and put it back into a nucleus IRQ array, i.e. move the cookie
dereferencing under a nucleus-managed per-IRQ lock. But there is still
the issue with xnsched_t::current_account...





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Philippe Gerum
On Thu, 2007-06-21 at 13:05 +0200, Jan Kiszka wrote:
 Jan Kiszka wrote:
  Well, and I wonder what this xnarch_memory_barrier() at each handler
  entry is for. It seems to have been there for ages; I don't see why
  right now.

AFAICT, this probably dates back to Xenomai 1.x, when we used to have a
threaded interrupt model. The actual code looked as follows, and the
barrier was likely there to make sure that any change to the interrupt
hit counter was visible from any other CPU which would run the
interrupt service thread. The funny thing is that we did not have SMP
support at that time, anyway...

static void xnintr_irq_handler (unsigned irq, void *cookie)

{
xnintr_t *intr = (xnintr_t *)cookie;
int s = XN_ISR_SCHED_T;

intr->hits++;

xnarch_memory_barrier();


In short, I don't see any reason to keep this membar.

  (The
  kernel has a golden rule for this: no barrier without comments!)

Yeah, right. It looks like the kernel has a slew of very official golden
rules it basically does not care one dime about enforcing in practice.
Looking at the code, commenting membars is surely one of them...

-- 
Philippe.





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-21 Thread Jan Kiszka
Philippe Gerum wrote:
 On Thu, 2007-06-21 at 13:05 +0200, Jan Kiszka wrote:
 Jan Kiszka wrote:
  Well, and I wonder what this xnarch_memory_barrier() at each handler
  entry is for. It seems to have been there for ages; I don't see why
  right now.
 
 AFAICT, this probably dates back to Xenomai 1.x, when we used to have a
 threaded interrupt model. The actual code looked as follows, and the
 barrier was likely there to make sure that any change to the interrupt
 hit counter was visible from any other CPU which would run the
 interrupt service thread. The funny thing is that we did not have SMP
 support at that time, anyway...
 
 static void xnintr_irq_handler (unsigned irq, void *cookie)
 
 {
 xnintr_t *intr = (xnintr_t *)cookie;
 int s = XN_ISR_SCHED_T;
 
 intr->hits++;
 
 xnarch_memory_barrier();
 
 
 In short, I don't see any reason to keep this membar.

Fascinating: so many people have passed by this place, but no one dared
to touch it. :)

 
  (The
 kernel has a golden rule for this: no barrier without comments!)
 
 Yeah, right. It looks like the kernel has a slew of very official golden
 rules it basically does not care one dime about enforcing in practice.
 Looking at the code, commenting membars is surely one of them...
 

At least I saw Andrew Morton strictly going after uncommented ones in
new code.

Jan





Re: [Xenomai-core] [RFC][PATCH] shirq locking rework

2007-06-20 Thread Dmitry Adamushko
Hello Jan,

 Well, I hate nested locks when it comes to real-time, but in this case I
 really found no simpler approach. There is the risk of deadlocks via

 IRQ: xnintr_shirq::lock -> handler -> nklock vs.
 Application: nklock -> xnintr_attach/detach -> xnintr_shirq::lock,

it's also relevant for the current code: xnintr_attach/detach() must
not be called while holding the 'nklock'.
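
Spelled out, the two nestings in question look like this; a
hypothetical sketch, the function bodies are illustrative only.

/* IRQ context: per-IRQ lock first, nklock possibly nested inside
   the handler (e.g. when it wakes a thread up). */
static void irq_side(xnintr_shirq_t *shirq, xnintr_t *intr)
{
	xnlock_get(&shirq->lock);
	intr->isr(intr);		/* may take nklock internally */
	xnlock_put(&shirq->lock);
}

/* Application context: calling xnintr_detach() under nklock would
   produce the reverse nesting nklock -> xnintr_shirq::lock, i.e.
   an AB-BA deadlock with the path above; hence the rule. */
static void app_side_buggy(xnintr_t *intr)
{
	spl_t s;

	xnlock_get_irqsave(&nklock, s);
	xnintr_detach(intr);		/* takes xnintr_shirq::lock */
	xnlock_put_irqrestore(&nklock, s);
}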

I think your approach should work as well.. provided we have only a
single-reader vs. single-writer contention case, which seems to be
the case here ('intrlock' is responsible for synchronization between
xnintr_attach/detach()).. your approach does increase the worst-case
length of the lock(intrlock) -> unlock(intrlock) section... but
that's unlikely to be critical.

I think the patch I sent before addresses the case reported earlier
with xnintr_edge_shirq_handler().. but your approach does make the code
shorter (and likely simpler), right? I don't see any problems it would
possibly cause (maybe I'm missing something yet :)


-- 
Best regards,
Dmitry Adamushko

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core