Gilles Chanteperdrix wrote:
Gilles Chanteperdrix wrote:
 > These patches are not ready for inclusion, they are not tested
 > yet.

The attached versions are tested. I still wonder if handling this in
shadow.c is the right solution, or if there should be an xnppd_set call
that could be called from within the skins event callbacks.

That would likely be better: some skins might not want any cleanup code to be called, whereas in the current implementation every skin has to provide some ppd info when returning from the event callback, even if only the CLIENT_ATTACH event is to be monitored. On the other hand, we could argue that registering an event callback requires implementing all requests, including CLIENT_DETACH, possibly by returning a no-op value, so that no ppd registration would be done from bind_to_interface(). Actually, I think the latter is the better option.
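To make the second option concrete, here is a minimal standalone sketch of a skin event callback that implements both requests, answering CLIENT_DETACH with a no-op value so that bind_to_interface() would skip ppd registration. Only XNSHADOW_CLIENT_DETACH appears in the patch below; the ATTACH code, event values, and ppd type are illustrative assumptions, not the actual Xenomai API:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative event codes: XNSHADOW_CLIENT_ATTACH is assumed by
 * symmetry with the XNSHADOW_CLIENT_DETACH used in the patch. */
#define XNSHADOW_CLIENT_ATTACH 0
#define XNSHADOW_CLIENT_DETACH 1

struct my_ppd { int bound; };      /* stand-in per-process data */
static struct my_ppd the_ppd;

/* A skin callback that implements all requests: ATTACH returns the
 * ppd to register; DETACH returns NULL as the no-op value, meaning
 * this skin asks for no per-process cleanup. */
static void *my_skin_eventcb(int event, void *data)
{
    (void)data;
    switch (event) {
    case XNSHADOW_CLIENT_ATTACH:
        the_ppd.bound = 1;
        return &the_ppd;            /* per-process data to register */
    case XNSHADOW_CLIENT_DETACH:
    default:
        return NULL;                /* nothing to clean up: no-op */
    }
}
```

With this convention, a skin that only cares about CLIENT_ATTACH still compiles and behaves correctly, at the cost of one trivial case in the switch.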


--- ./include/asm-i386/ipipe.h~ 2006-05-07 16:00:37.000000000 +0200
+++ ./include/asm-i386/ipipe.h  2006-05-07 19:00:34.000000000 +0200
@@ -135,7 +135,8 @@ do { \
 #define IPIPE_NR_EVENTS                (IPIPE_LAST_EVENT + 1)

I've merged such support for process cleanup event notification into the Adeos codebase; it will be available with the next patches to come. I've named it IPIPE_EVENT_CLEANUP instead of PEXIT, though, since that is what it is about, and EXIT/PEXIT might be confusing.

+static inline struct xnppd_holder *
+xnppd_lookup(unsigned skin_magic, struct mm_struct *mm, unsigned remove)

Let's separate the lookup and removal operations here, making the latter return the removed element. All-in-one code may save a few lines, but it is a pain for readability.
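For the sake of discussion, the split could look like the standalone sketch below: a pure lookup, plus a removal that returns the removed element so the caller can fire CLIENT_DETACH on it afterwards. The list layout, xnppd_insert helper, and exact signatures are illustrative assumptions, not the real xnppd_* API:

```c
#include <assert.h>
#include <stddef.h>

struct mm_struct;                     /* opaque stand-in */

struct xnppd_holder {
    unsigned magic;                   /* skin magic */
    struct mm_struct *mm;             /* owning process */
    struct xnppd_holder *next;
};

static struct xnppd_holder *ppd_list;

/* Illustrative registration helper, called at bind time. */
static void xnppd_insert(struct xnppd_holder *h)
{
    h->next = ppd_list;
    ppd_list = h;
}

/* Pure lookup: no side effect on the list. */
static struct xnppd_holder *
xnppd_lookup(unsigned magic, struct mm_struct *mm)
{
    struct xnppd_holder *h;

    for (h = ppd_list; h; h = h->next)
        if (h->magic == magic && h->mm == mm)
            return h;
    return NULL;
}

/* Removal returns the removed element, so the caller can pass it to
 * the skin's event callback before freeing it. */
static struct xnppd_holder *
xnppd_remove(unsigned magic, struct mm_struct *mm)
{
    struct xnppd_holder **p;

    for (p = &ppd_list; *p; p = &(*p)->next)
        if ((*p)->magic == magic && (*p)->mm == mm) {
            struct xnppd_holder *h = *p;
            *p = h->next;
            h->next = NULL;
            return h;
        }
    return NULL;
}

static struct xnppd_holder demo = { .magic = 0x1234, .mm = NULL };
```

The cleanup path then reads as two obvious steps, ppd = xnppd_remove(...) followed by the eventcb call, instead of a lookup carrying a hidden removal flag.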

+static inline void do_pexit_event(struct mm_struct *mm)
+{
+    unsigned muxid;
+    spl_t s;
+
+    for (muxid = 0; muxid < XENOMAI_MUX_NR; muxid++)
+        {
+        xnlock_get_irqsave(&nklock, s);
+        if (muxtable[muxid].systab && muxtable[muxid].eventcb)
+            {
+            struct xnppd_holder *ppd;
+            ppd = xnppd_lookup(muxtable[muxid].magic, mm, 1);
+            if (ppd)
+                muxtable[muxid].eventcb(XNSHADOW_CLIENT_DETACH, ppd);
+            }
+        xnlock_put_irqrestore(&nklock, s);
+        }
+}

Maybe a better approach would be to simply hash the holders referring to busy muxtable entries on mm struct pointers. This way, we would only have to scan a list holding one element per attached skin per terminating process, which most of the time amounts to walking a one- or two-element queue.

i.e. indexing table entries on mm pointers from bind_to_interface() to ask for the CLIENT_DETACH event to be fired upon process cleanup:

(key: curr->mm) ->
        (holder: &muxtable[muxid1]) ->
        (holder: &muxtable[muxid2]) ->
        (holder: &muxtable[muxid3])

then, scanning the list hashed on the terminating process's mm struct, in order to fire the appropriate event callbacks.
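A standalone sketch of that per-mm hashing scheme, matching the diagram above: bind_to_interface() would hash a holder pointing at the skin's muxtable entry under curr->mm, and cleanup would only walk that short per-process list. The hash size, bucket function, and all names here are illustrative stand-ins, not the real Xenomai ones:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PPD_HASH_SIZE 16

struct mm_struct;                     /* opaque stand-in */
struct mux_entry { int detached; };   /* stand-in for muxtable[muxid] */

struct ppd_holder {
    struct mm_struct *mm;             /* key: owning process's mm */
    struct mux_entry *mux;            /* skin this process attached to */
    struct ppd_holder *next;
};

static struct ppd_holder *ppd_hash[PPD_HASH_SIZE];

static unsigned ppd_bucket(struct mm_struct *mm)
{
    /* Drop low bits likely shared by all heap pointers. */
    return ((uintptr_t)mm >> 4) % PPD_HASH_SIZE;
}

/* Called from bind_to_interface(): index the holder on curr->mm. */
static void ppd_register(struct ppd_holder *h, struct mm_struct *mm)
{
    unsigned b = ppd_bucket(mm);

    h->mm = mm;
    h->next = ppd_hash[b];
    ppd_hash[b] = h;
}

/* Called on process cleanup: unlink every holder hashed on this mm
 * and fire its skin's detach handling, usually a one- or two-element
 * walk instead of a scan over the whole muxtable. */
static void ppd_cleanup(struct mm_struct *mm)
{
    struct ppd_holder **p = &ppd_hash[ppd_bucket(mm)];

    while (*p) {
        if ((*p)->mm == mm) {
            struct ppd_holder *h = *p;
            *p = h->next;
            h->mux->detached = 1;     /* stands in for the eventcb */
        } else
            p = &(*p)->next;
    }
}

static struct mux_entry mux1, mux2;
static struct ppd_holder hold1 = { .mux = &mux1 };
static struct ppd_holder hold2 = { .mux = &mux2 };
```

Compared with scanning all XENOMAI_MUX_NR entries in do_pexit_event(), the cost of cleanup becomes proportional to the number of skins the dying process actually attached to.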

Btw, on 2.4 at least, the global Linux mm_lock is held before firing the process cleanup notification at Adeos level, so one has to be careful about what's actually done in those per-skin cleanup handlers.



Xenomai-core mailing list
