Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-20 Thread Philippe Gerum
On Sun, 2009-10-18 at 19:56 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
  On Sun, 2009-10-18 at 14:54 +0200, Jan Kiszka wrote:
  Philippe Gerum wrote:
  On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote:
  Hi,
 
  our automatic object cleanup on process termination is slightly broken
  for the native skin. The inline and macro magic behind
  __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
  correct for mutexes (we can leak memory and/or corrupt the system heap),
  queues and heaps (we may leak shared heaps).
  Please elaborate regarding both queues and heaps (scenario).
  Master creates heap, slave binds to it, master wants to terminate (or is
  killed, doesn't matter), heap cannot be released as the slave is still
  bound to it, slave terminates but heap object is still reserved on the
  main heap => memory leak (just confirmed with a test case).
  
  This fixes it:
  
  diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
  index 0a24735..0fcb3c2 100644
  --- a/ksrc/skins/native/heap.c
  +++ b/ksrc/skins/native/heap.c
  @@ -340,6 +340,11 @@ static void __heap_post_release(struct xnheap *h)
  xnpod_schedule();
   
 
 + xeno_mark_deleted(heap);
 

Actually, we need more than this:

diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 0a24735..5d43fa7 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -323,6 +323,7 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode)
 static void __heap_post_release(struct xnheap *h)
 {
RT_HEAP *heap = container_of(h, RT_HEAP, heap_base);
+   int resched;
spl_t s;
 
xnlock_get_irqsave(nklock, s);
@@ -332,14 +333,24 @@ static void __heap_post_release(struct xnheap *h)
if (heap->handle)
xnregistry_remove(heap->handle);
 
-   if (xnsynch_destroy(heap->synch_base) == XNSYNCH_RESCHED)
+   xeno_mark_deleted(heap);
+
+   resched = xnsynch_destroy(heap->synch_base);
+
+   xnlock_put_irqrestore(nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (heap->cpid) {
+   heap->cpid = 0;
+   xnfree(heap);
+   }
+#endif
+   if (resched)
/*
 * Some task has been woken up as a result of the
 * deletion: reschedule now.
 */
xnpod_schedule();
-
-   xnlock_put_irqrestore(nklock, s);
 }
 
 /**
@@ -404,7 +415,7 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr)
 
/*
 * The heap descriptor has been marked as deleted before we
-* released the superlock thus preventing any sucessful
+* released the superlock thus preventing any successful
 * subsequent calls of rt_heap_delete(), so now we can
 * actually destroy it safely.
 */
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index 527bde8..35e292b 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -286,6 +286,7 @@ int rt_queue_create(RT_QUEUE *q,
 static void __queue_post_release(struct xnheap *heap)
 {
RT_QUEUE *q = container_of(heap, RT_QUEUE, bufpool);
+   int resched;
spl_t s;
 
xnlock_get_irqsave(nklock, s);
@@ -295,14 +296,24 @@ static void __queue_post_release(struct xnheap *heap)
if (q->handle)
xnregistry_remove(q->handle);
 
-   if (xnsynch_destroy(q->synch_base) == XNSYNCH_RESCHED)
+   xeno_mark_deleted(q);
+
+   resched = xnsynch_destroy(q->synch_base);
+
+   xnlock_put_irqrestore(nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (q->cpid) {
+   q->cpid = 0;
+   xnfree(q);
+   }
+#endif
+   if (resched)
/*
-* Some task has been woken up as a result of
-* the deletion: reschedule now.
+* Some task has been woken up as a result of the
+* deletion: reschedule now.
 */
xnpod_schedule();
-
-   xnlock_put_irqrestore(nklock, s);
 }
 
 /**
@@ -366,7 +377,7 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr)
 
/*
 * The queue descriptor has been marked as deleted before we
-* released the superlock thus preventing any sucessful
+* released the superlock thus preventing any successful
 * subsequent calls of rt_queue_delete(), so now we can
 * actually destroy the associated heap safely.
 */
diff --git a/ksrc/skins/native/syscall.c b/ksrc/skins/native/syscall.c
index 28c720e..a75ed3b 100644
--- a/ksrc/skins/native/syscall.c
+++ b/ksrc/skins/native/syscall.c
@@ -2073,24 +2073,17 @@ static int __rt_queue_delete(struct pt_regs *regs)
 {
RT_QUEUE_PLACEHOLDER ph;
RT_QUEUE *q;
-   int err;
 
if (__xn_safe_copy_from_user(&ph, (void __user *)__xn_reg_arg1(regs),
 sizeof(ph)))
return -EFAULT;
 

Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-20 Thread Jan Kiszka
Philippe Gerum wrote:
 Well, I can tell you that there are quite a few corner cases you would
 face in rewriting the queue/heap cleanup code. I'm not saying this
 should not be done, I just won't merge this to 2.5.0 to avoid more
 regressions. Let's fix the obvious for now, such as the missing
 descriptor deallocation in the post-release callback, and schedule a
 global cleanup refactoring for 2.5.1.

I would suggest we discuss this based on the patch series I'm
currently testing (I also wanted to run a xenosim build test, but that
looks like a futile effort - no chance to even get it configured on an
x86-64 host). Maybe you will want to cherry-pick some of them, but maybe
they will show that there is enough benefit (== code simplification ==
bug risk reduction) to merge them all. Will be posted today.

Jan





Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-18 Thread Jan Kiszka
Philippe Gerum wrote:
 On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote:
 Hi,

 our automatic object cleanup on process termination is slightly broken
 for the native skin. The inline and macro magic behind
 __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
 correct for mutexes (we can leak memory and/or corrupt the system heap),
 queues and heaps (we may leak shared heaps).
 
 Please elaborate regarding both queues and heaps (scenario).

Master creates heap, slave binds to it, master wants to terminate (or is
killed, doesn't matter), heap cannot be released as the slave is still
bound to it, slave terminates but heap object is still reserved on the
main heap => memory leak (just confirmed with a test case).
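
For reference, a minimal sketch of such a test case (illustrative only:
the usual setup - rt_task_shadow(), mlockall() - and all error handling
are trimmed, and the heap name, size and flags are just made up):

#include <sys/mman.h>
#include <string.h>
#include <unistd.h>
#include <native/task.h>
#include <native/timer.h>
#include <native/heap.h>

#define HEAP_NAME "leak-demo"
#define HEAP_SIZE (256 * 1024)

/* master: creates the shared heap, then terminates (or is killed)
 * while the slave is still bound to it */
static void master(void)
{
        RT_HEAP heap;

        rt_heap_create(&heap, HEAP_NAME, HEAP_SIZE, H_MAPPABLE);
        pause();        /* exit without a successful rt_heap_delete() */
}

/* slave: binds to the heap, outlives the master, then terminates too */
static void slave(void)
{
        RT_HEAP heap;

        rt_heap_bind(&heap, HEAP_NAME, TM_INFINITE);
        sleep(5);               /* master goes away meanwhile */
        rt_heap_unbind(&heap);
        /* the shared memory is dropped with the last mapping, but the
         * RT_HEAP descriptor reserved on the main heap is never freed */
}

int main(int argc, char *argv[])
{
        RT_TASK main_task;

        mlockall(MCL_CURRENT | MCL_FUTURE);
        rt_task_shadow(&main_task, NULL, 0, 0);
        if (argc > 1 && !strcmp(argv[1], "slave"))
                slave();
        else
                master();
        return 0;
}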

I'm not sure if that object migration to the global queue helps to some
degree here (it's not really useful due to other problems, will post a
removal patch) - I've built Xenomai support into the kernel...

 
  I'm in the process of fixing this, but the latter two are tricky. They
 need user space information (the user space address of the mapping base)
 for ordinary cleanup, and this is not available otherwise.

 At the time we are called with our cleanup handler, can we assume that
 the dying process has already unmapped all its rtheap segments?
 
 Unfortunately, no. Cleanup is a per-skin action, and the process may be
 bound to more than a single skin, which could turn out as requiring a
 sequence of cleanup calls.
 
 The only thing you may assume is that an attempt to release all memory
  mappings for the dying process will have been done prior to receiving the
 cleanup event from the pipeline, but this won't help much in this case.

That's already very helpful!

 This attempt may fail and be postponed though, hence the deferred
 release callback fired via vmclose.

I already started to look into the release callback thing, but I'm still
scratching my head: Why do you set the callback even on explicit
rt_heap/queue_delete? I mean those that are supposed to fail with -EBUSY
and then to be retried by user land? What happens if rt_heap_unbind and
retried rt_heap_delete race?

Anyway, auto-cleanup of heap and queue must be made non-failing, i.e.
the objects have to be discarded, just the heap memory deletion has to
be deferred. I'm digging in this direction, but I'm still wondering if
the non-automatic heap/queue cleanup is safe in its current form.

Jan

PS: Mutex cleanup leak is fixed now.





Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-18 Thread Philippe Gerum
On Sun, 2009-10-18 at 14:54 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
  On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote:
  Hi,
 
  our automatic object cleanup on process termination is slightly broken
  for the native skin. The inline and macro magic behind
  __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
  correct for mutexes (we can leak memory and/or corrupt the system heap),
  queues and heaps (we may leak shared heaps).
  
  Please elaborate regarding both queues and heaps (scenario).
 
 Master creates heap, slave binds to it, master wants to terminate (or is
 killed, doesn't matter), heap cannot be released as the slave is still
 bound to it, slave terminates but heap object is still reserved on the
  main heap => memory leak (just confirmed with a test case).

This fixes it:

diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
index 0a24735..0fcb3c2 100644
--- a/ksrc/skins/native/heap.c
+++ b/ksrc/skins/native/heap.c
@@ -340,6 +340,11 @@ static void __heap_post_release(struct xnheap *h)
xnpod_schedule();
 
xnlock_put_irqrestore(nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (heap->cpid)
+   xnfree(heap);
+#endif
 }
 
 /**
diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
index 527bde8..50af544 100644
--- a/ksrc/skins/native/queue.c
+++ b/ksrc/skins/native/queue.c
@@ -303,6 +303,11 @@ static void __queue_post_release(struct xnheap *heap)
xnpod_schedule();
 
xnlock_put_irqrestore(nklock, s);
+
+#ifdef CONFIG_XENO_OPT_PERVASIVE
+   if (q->cpid)
+   xnfree(q);
+#endif
 }
 
 /**

 
 I'm not sure if that object migration to the global queue helps to some
 degree here (it's not really useful due to other problems, will post a
 removal patch) - I've built Xenomai support into the kernel...
 

This is a last resort action mainly aimed at kernel-based apps, assuming
that rmmoding them will ultimately flush the pending objects. We need
this.

We might want to avoid linking to the global queue whenever the deletion
call returns -EBUSY though, assuming that a post-release hook will do
the cleanup, but other errors may still happen.
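
Something along these lines, I suppose (an illustrative sketch only, not
the actual __native_*_flush_rq() code; the list type and the helpers are
made up for the example):

#include <errno.h>

struct rt_obj {
        struct rt_obj *next;
        /* ... skin-specific descriptor ... */
};

int rt_obj_delete(struct rt_obj *obj);          /* e.g. rt_heap_delete() */
void requeue_on_global_rq(struct rt_obj *obj);  /* last-resort parking */

static void flush_process_rq(struct rt_obj *head)
{
        struct rt_obj *obj, *next;
        int err;

        for (obj = head; obj; obj = next) {
                next = obj->next;
                err = rt_obj_delete(obj);
                if (err == 0 || err == -EBUSY)
                        /* -EBUSY: a post-release hook will drop the
                         * descriptor once the last mapping goes away,
                         * so do not park the object on the global queue */
                        continue;
                /* any other failure: keep the object around so that a
                 * later module removal can still flush it */
                requeue_on_global_rq(obj);
        }
}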

  
   I'm in the process of fixing this, but the latter two are tricky. They
  need user space information (the user space address of the mapping base)
  for ordinary cleanup, and this is not available otherwise.
 
  At the time we are called with our cleanup handler, can we assume that
  the dying process has already unmapped all its rtheap segments?
  
  Unfortunately, no. Cleanup is a per-skin action, and the process may be
  bound to more than a single skin, which could turn out as requiring a
  sequence of cleanup calls.
  
  The only thing you may assume is that an attempt to release all memory
   mappings for the dying process will have been done prior to receiving the
  cleanup event from the pipeline, but this won't help much in this case.
 
 That's already very helpful!
 

Not really, at least this is not relevant to the bug being fixed.
Additionally, the release attempt may fail due to pending references.

  This attempt may fail and be postponed though, hence the deferred
  release callback fired via vmclose.
 
 I already started to look into the release callback thing, but I'm still
 scratching my head: Why do you set the callback even on explicit
 rt_heap/queue_delete? I mean those that are supposed to fail with -EBUSY
 and then to be retried by user land?

Userland could retry, but most of the time it will just bail out and
leave this to vmclose.

  What happens if rt_heap_unbind and
 retried rt_heap_delete race?
 

A successful final unmapping clears the release handler.

 Anyway, auto-cleanup of heap and queue must be made non-failing, i.e.
 the objects have to be discarded, just the heap memory deletion has to
 be deferred. I'm digging in this direction, but I'm still wondering if
 the non-automatic heap/queue cleanup is safe in its current form.
 

This seems like overkill for the purpose of fixing the leak. Granted,
the common pattern would rather be to invalidate the front-end object
(heap/queue desc) and schedule a release for the backend one (i.e.
shared mem). However, the only impact this has for now is to allow apps
to keep an object indefinitely busy by binding to it continuously even
though a deletion request is pending; I don't think this deserves a major
change in the cleanup actions at this stage of 2.5. Cleanup code that
spans userland and kernel space is prone to regression.
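
In other words, something like this generic pattern (plain C
illustration, not Xenomai code; the types are invented and locking is
left out):

#include <stdlib.h>

struct desc {
        int deleted;            /* front-end state, checked by services */
        int refs;               /* creator + currently bound processes */
        void *shared_mem;       /* the back-end that must outlive delete */
};

/* called whenever a reference goes away (delete, unbind, process exit) */
static void desc_put(struct desc *d)
{
        if (--d->refs > 0)
                return;
        /* last reference dropped: now everything can really go away */
        free(d->shared_mem);
        free(d);                /* the descriptor itself cannot leak */
}

/* deletion never fails: it only invalidates the front-end and drops the
 * creator's reference; the shared memory lingers until the last binder
 * has unbound (or its process has exited) */
static int obj_delete(struct desc *d)
{
        if (d->deleted)
                return -1;      /* stands for -EIDRM */
        d->deleted = 1;         /* further binds/services refuse the object */
        desc_put(d);
        return 0;
}

static int obj_bind(struct desc *d)
{
        if (d->deleted)
                return -1;      /* object is on its way out */
        d->refs++;
        return 0;
}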

 Jan
 
 PS: Mutex cleanup leak is fixed now.
 

Nice. Thanks.

-- 
Philippe.





Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-18 Thread Jan Kiszka
Philippe Gerum wrote:
 On Sun, 2009-10-18 at 14:54 +0200, Jan Kiszka wrote:
 Philippe Gerum wrote:
 On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote:
 Hi,

 our automatic object cleanup on process termination is slightly broken
 for the native skin. The inline and macro magic behind
 __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
 correct for mutexes (we can leak memory and/or corrupt the system heap),
 queues and heaps (we may leak shared heaps).
 Please elaborate regarding both queues and heaps (scenario).
 Master creates heap, slave binds to it, master wants to terminate (or is
 killed, doesn't matter), heap cannot be released as the slave is still
 bound to it, slave terminates but heap object is still reserved on the
 main heap => memory leak (just confirmed with a test case).
 
 This fixes it:
 
 diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c
 index 0a24735..0fcb3c2 100644
 --- a/ksrc/skins/native/heap.c
 +++ b/ksrc/skins/native/heap.c
 @@ -340,6 +340,11 @@ static void __heap_post_release(struct xnheap *h)
   xnpod_schedule();
  

+ xeno_mark_deleted(heap);

But I still think this approach has too complex (and so far
undocumented) user-visible semantics and is going down the wrong path.

   xnlock_put_irqrestore(nklock, s);
 +
 +#ifdef CONFIG_XENO_OPT_PERVASIVE
+ if (heap->cpid)
 + xnfree(heap);
 +#endif
  }
  
  /**
 diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c
 index 527bde8..50af544 100644
 --- a/ksrc/skins/native/queue.c
 +++ b/ksrc/skins/native/queue.c
 @@ -303,6 +303,11 @@ static void __queue_post_release(struct xnheap *heap)
   xnpod_schedule();
  
   xnlock_put_irqrestore(nklock, s);
 +
 +#ifdef CONFIG_XENO_OPT_PERVASIVE
+ if (q->cpid)
 + xnfree(q);
 +#endif
  }
  
  /**
 
 I'm not sure if that object migration to the global queue helps to some
 degree here (it's not really useful due to other problems, will post a
  removal patch) - I've built Xenomai support into the kernel...

 
 This is a last resort action mainly aimed at kernel-based apps, assuming
 that rmmoding them will ultimately flush the pending objects. We need
 this.

Kernel-based apps do not stress this path at all, their objects are
already in the global queue. Only user-allocated objects can be requeued.

And that either indicates open issues in the cleanup path (i.e. some
objects may practically never be deleted) or is superfluous, as a
deferred cleanup mechanism will take care of them (namely the xnheap's).

 
 We might want to avoid linking to the global queue whenever the deletion
 call returns -EBUSY though, assuming that a post-release hook will do
 the cleanup, but other errors may still happen.

Even more important: -EIDRM, or we continue to risk serious corruption.
Better remove this altogether.

 
 I'm in the process of fixing this, but the latter two are tricky. They
 need user space information (the user space address of the mapping base)
 for ordinary cleanup, and this is not available otherwise.

 At the time we are called with our cleanup handler, can we assume that
 the dying process has already unmapped all its rtheap segments?
 Unfortunately, no. Cleanup is a per-skin action, and the process may be
 bound to more than a single skin, which could turn out as requiring a
 sequence of cleanup calls.

 The only thing you may assume is that an attempt to release all memory
 mappings for the dying process will have been done prior to receiving the
 cleanup event from the pipeline, but this won't help much in this case.
 That's already very helpful!

 
 Not really, at least this is not relevant to the bug being fixed.
 Additionally, the release attempt may fail due to pending references.

For which kind of objects? What kind of references? At least according
to the docs, there is only the risk of -EBUSY with heaps and queues. All
other objects terminate properly.

 
 This attempt may fail and be postponed though, hence the deferred
 release callback fired via vmclose.
 I already started to look into the release callback thing, but I'm still
 scratching my head: Why do you set the callback even on explicit
 rt_heap/queue_delete? I mean those that are supposed to fail with -EBUSY
 and then to be retried by user land?
 
 Userland could retry, but most of the time it will just bail out and
 leave this to vmclose.
 
  What happens if rt_heap_unbind and
 retried rt_heap_delete race?

 
 A successful final unmapping clears the release handler.

rt_heap_unbind would trigger the release handler, thus the object
deletion, and the actual creator would find the object destroyed under
its feet, even though a failed deletion is supposed to leave the object
intact.
That's the kind of complex semantics I was referring to.

Let's get this right: If the creator of a shared semaphore deletes that
object, it's synchronously removed; anyone trying to use it is informed.
That's reasonable to expect from heaps and queues as 

Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-17 Thread Philippe Gerum
On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote:
 Hi,
 
 our automatic object cleanup on process termination is slightly broken
 for the native skin. The inline and macro magic behind
 __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
 correct for mutexes (we can leak memory and/or corrupt the system heap),
 queues and heaps (we may leak shared heaps).

Please elaborate regarding both queues and heaps (scenario).

 
 I'm in the process of fixing this, but the latter two are tricky. They
 need user space information (the user space address of the mapping base)
 for ordinary cleanup, and this is not available otherwise.
 
 At the time we are called with our cleanup handler, can we assume that
 the dying process has already unmapped all its rtheap segments?

Unfortunately, no. Cleanup is a per-skin action, and the process may be
bound to more than a single skin, which could turn out as requiring a
sequence of cleanup calls.

The only thing you may assume is that an attempt to release all memory
mappings for the dying process will have been done prior to receiving the
cleanup event from the pipeline, but this won't help much in this case.
This attempt may fail and be postponed though, hence the deferred
release callback fired via vmclose.
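
For reference, the wiring in rt_heap_delete_inner() looks roughly like
this (paraphrased from the 2.5 tree - see ksrc/nucleus/heap.c for the
exact signature):

        /* The nucleus either runs the release callback right away or,
         * if userland mappings are still around, defers it until
         * vmclose drops the last one. */
        err = xnheap_destroy_mapped(&heap->heap_base,
                                    __heap_post_release,
                                    mapaddr);   /* user-space mapping base */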

  In that
 case I could simply pass NULL as base address, and the deletion will
 succeed. If not, I would currently lack a good idea how to resolve this
 issue.
 
 Jan
 
-- 
Philippe.





[Xenomai-core] Native: Fixing auto-cleanup

2009-10-16 Thread Jan Kiszka
Hi,

our automatic object cleanup on process termination is slightly broken
for the native skin. The inline and macro magic behind
__native_*_flush_rq() blindly calls rt_*_delete(), but that's not
correct for mutexes (we can leak memory and/or corrupt the system heap),
queues and heaps (we may leak shared heaps).

I'm in the process of fixing this, but the latter two are tricky. They
need user space information (the user space address of the mapping base)
for ordinary cleanup, and this is not available otherwise.

At the time we are called with our cleanup handler, can we assume that
the dying process has already unmapped all its rtheap segments? In that
case I could simply pass NULL as base address, and the deletion will
succeed. If not, I would currently lack a good idea how to resolve this
issue.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux



Re: [Xenomai-core] Native: Fixing auto-cleanup

2009-10-16 Thread Jan Kiszka
Jan Kiszka wrote:
 Hi,
 
 our automatic object cleanup on process termination is slightly broken
 for the native skin. The inline and macro magic behind
 __native_*_flush_rq() blindly calls rt_*_delete(), but that's not
 correct for mutexes (we can leak memory and/or corrupt the system heap),

Hmm, xnheap_free is actually robust enough to cope with this invalid
release (sem_heap object to system heap), so no corruption. Still, we
leak, and that's what made me stumble over these bugs.

 queues and heaps (we may leak shared heaps).
 
 I'm in the process of fixing this, but the latter two are tricky. They
 need user space information (the user space address of the mapping base)
 for ordinary cleanup, and this is not available otherwise.
 
 At the time we are called with our cleanup handler, can we assume that
 the dying process has already unmapped all its rtheap segments? In that
 case I could simply pass NULL as base address, and the deletion will
 succeed. If not, I would currently lack a good idea how to resolve this
 issue.
 
 Jan
 

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux
