[Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Hi Gilles,

while trying to understand the cb_read/write lock usage, a question came up
here: what prevents the mutexq iteration in pse51_mutex_check_init from
racing against pse51_mutex_destroy_internal?

If nothing, then I wonder if we actually have to iterate over the whole
queue to find out whether a given object has been initialized and
registered already or not. Can't this be encoded differently?

BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
memory region, isn't it? Why not use a handle here, like the native skin
does? Wouldn't that allow resolving the issue above as well?
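
Just to illustrate what I have in mind - a rough, purely hypothetical
sketch, none of these names or the handle table exist in the current skin:

/* Hypothetical sketch only, not the actual POSIX-skin code: instead of
 * storing a raw kernel pointer in the user-visible shadow, store an
 * index ("handle") into a kernel-side table and validate it on entry. */

#include <stddef.h>

#define MUTEX_HANDLE_MAX   256
#define SHADOW_MUTEX_MAGIC 0x86860303        /* made-up magic value */

struct pse51_mutex;                          /* kernel-side object */

struct shadow_mutex_sketch {                 /* lives in user-reachable memory */
	unsigned magic;
	unsigned handle;                     /* table index, not a kernel pointer */
};

static struct pse51_mutex *mutex_table[MUTEX_HANDLE_MAX];

/* O(1) validation: no queue walk, and a corrupted shadow can at worst
 * select a wrong-but-valid slot, never an arbitrary kernel address. */
static struct pse51_mutex *
mutex_from_shadow(const struct shadow_mutex_sketch *shadow)
{
	if (shadow->magic != SHADOW_MUTEX_MAGIC ||
	    shadow->handle >= MUTEX_HANDLE_MAX)
		return NULL;

	return mutex_table[shadow->handle];
}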

Jan





Re: [Xenomai-core] [PATCH 2/2] Unify asm-x86/atomic.h

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 ...and also automatically fixes the missing LOCK prefix for
 pthread_mutex_* services on x86_32 SMP.
 
 This looks to me like a half-way unification. Can we not get rid of
 atomic_32.h and atomic_64.h entirely? I mean, since we are using unsigned
 long as atomic_t on both platforms, there should not be much difference
 (except maybe the inline asm).
 

I could merge all atomic_32/64.h hunks into atomic.h if that is preferred,
but I see no way to get rid of the atomic_t vs. atomic64_t differences, so
the sub-arch-specific part cannot be reduced much further as far as I can
see ATM.

However, yesterday's version contained a regression wrt 32 bit (missing
atomic_counter_t and xnarch_atomic_t type definitions); this one is
better:

---
 include/asm-x86/atomic.h    |   64
 include/asm-x86/atomic_32.h |   31 -
 include/asm-x86/atomic_64.h |   33 --
 3 files changed, 65 insertions(+), 63 deletions(-)

Index: b/include/asm-x86/atomic.h
===
--- a/include/asm-x86/atomic.h
+++ b/include/asm-x86/atomic.h
@@ -1,5 +1,69 @@
+/*
+ * Copyright (C) 2007 Philippe Gerum [EMAIL PROTECTED].
+ *
+ * Xenomai is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License,
+ * or (at your option) any later version.
+ *
+ * Xenomai is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with Xenomai; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
+ * 02111-1307, USA.
+ */
+
+#ifndef _XENO_ASM_X86_ATOMIC_H
+#define _XENO_ASM_X86_ATOMIC_H
+
+#include <asm/xenomai/features.h>
+
+typedef unsigned long atomic_flags_t;
+
+#ifdef __KERNEL__
+
+#include <linux/bitops.h>
+#include <asm/atomic.h>
+#include <asm/system.h>
+
+#define xnarch_atomic_set_mask(pflags,mask) \
+   atomic_set_mask((mask),(unsigned *)(pflags))
+#define xnarch_atomic_clear_mask(pflags,mask) \
+   atomic_clear_mask((mask),(unsigned *)(pflags))
+#define xnarch_atomic_xchg(ptr,x)  xchg(ptr,x)
+
+#define xnarch_memory_barrier()  smp_mb()
+
+#else /* !__KERNEL__ */
+
+#include <xeno_config.h>
+
+#ifdef CONFIG_SMP
+#define LOCK_PREFIX "lock ; "
+#else
+#define LOCK_PREFIX ""
+#endif
+
+typedef struct { unsigned long counter; } xnarch_atomic_t;
+
+#define xnarch_atomic_get(v)   ((v)->counter)
+
+#define xnarch_atomic_set(v,i) (((v)->counter) = (i))
+
+#define xnarch_write_memory_barrier()  xnarch_memory_barrier()
+
+#endif /* __KERNEL__ */
+
 #ifdef __i386__
 #include "atomic_32.h"
 #else
 #include "atomic_64.h"
 #endif
+
+#include <asm-generic/xenomai/atomic.h>
+
+#endif /* !_XENO_ASM_X86_ATOMIC_H */
Index: b/include/asm-x86/atomic_32.h
===
--- a/include/asm-x86/atomic_32.h
+++ b/include/asm-x86/atomic_32.h
@@ -19,48 +19,26 @@
 
 #ifndef _XENO_ASM_X86_ATOMIC_32_H
 #define _XENO_ASM_X86_ATOMIC_32_H
-#define _XENO_ASM_X86_ATOMIC_H
 
 #ifdef __KERNEL__
 
-#include <linux/bitops.h>
-#include <asm/atomic.h>
-#include <asm/system.h>
-
 #define xnarch_atomic_set(pcounter,i)  atomic_set(pcounter,i)
 #define xnarch_atomic_get(pcounter)    atomic_read(pcounter)
 #define xnarch_atomic_inc(pcounter)    atomic_inc(pcounter)
 #define xnarch_atomic_dec(pcounter)    atomic_dec(pcounter)
 #define xnarch_atomic_inc_and_test(pcounter)   atomic_inc_and_test(pcounter)
 #define xnarch_atomic_dec_and_test(pcounter)   atomic_dec_and_test(pcounter)
-#define xnarch_atomic_set_mask(pflags,mask)    atomic_set_mask(mask,pflags)
-#define xnarch_atomic_clear_mask(pflags,mask)  atomic_clear_mask(mask,pflags)
-#define xnarch_atomic_xchg(ptr,x)  xchg(ptr,x)
 #define xnarch_atomic_cmpxchg(pcounter,old,new) \
atomic_cmpxchg((pcounter),(old),(new))
 
-#define xnarch_memory_barrier()  smp_mb()
-
 typedef atomic_t atomic_counter_t;
 typedef atomic_t xnarch_atomic_t;
 
 #else /* !__KERNEL__ */
 
-#ifdef CONFIG_SMP
-#define LOCK_PREFIX "lock ; "
-#else
-#define LOCK_PREFIX ""
-#endif
-
-typedef struct { int counter; } xnarch_atomic_t;
-
 struct __xeno_xchg_dummy { unsigned long a[100]; };
 #define __xeno_xg(x) ((struct __xeno_xchg_dummy *)(x))
 
-#define xnarch_atomic_get(v)   ((v)->counter)
-
-#define xnarch_atomic_set(v,i) (((v)->counter) = (i))
-
 static inline unsigned long xnarch_atomic_xchg (volatile void *ptr,
unsigned long x)
 {
@@ -84,17 +62,10 @@ xnarch_atomic_cmpxchg(xnarch_atomic_t *v
return 

Re: [Xenomai-core] [PATCH] Remove redundant LOCK prefix

2008-08-23 Thread Philippe Gerum
Jan Kiszka wrote:
 According to Linux and the Intel spec, this prefix is not needed.


Obviously, it's not, since the whole purpose of xchg() is to guarantee bus
locking for memory operands anyway. Please merge.

 ---
  include/asm-x86/atomic_32.h |2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 Index: b/include/asm-x86/atomic_32.h
 ===
 --- a/include/asm-x86/atomic_32.h
 +++ b/include/asm-x86/atomic_32.h
 @@ -64,7 +64,7 @@ struct __xeno_xchg_dummy { unsigned long
  static inline unsigned long xnarch_atomic_xchg (volatile void *ptr,
   unsigned long x)
  {
 - __asm__ __volatile__(LOCK_PREFIX "xchgl %0,%1"
 + __asm__ __volatile__("xchgl %0,%1"
                        :"=r" (x)
                        :"m" (*__xeno_xg(ptr)), "0" (x)
                        :"memory");
 
 
 
 
 


-- 
Philippe.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Hi Jan,

Please do not use my address at gmail, gna does not want me to post from
this address:

2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
<Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
[88.191.250.46]: 550 rejected because gmail.com is in a black list at
dsn.rfc-ignorant.org

so, here is a repost of my answer:

Jan Kiszka wrote:
  Hi Gilles,
  
  trying to understand the cb_read/write lock usage, some question came up
  here: What prevents that the mutexq iteration in pse51_mutex_check_init
  races against pse51_mutex_destroy_internal?

Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
is to catch most invalid usages; it seems we cannot catch them all.

  
  If nothing, then I wonder if we actually have to iterate over the whole
  queue to find out whether a given object has been initialized and
  registered already or not. Can't this be encoded differently?
  
  BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
  memory region? Why not using a handle here, like the native skin does?
  Won't that allow to resolve the issue above as well?

This has been so from the beginning, and I did not change it.

-- 
Gilles.



Re: [Xenomai-core] [PATCH 2/2] Unify asm-x86/atomic.h

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 ...and also automatically fixes the missing LOCK prefix for
 pthread_mutex_* services on x86_32 SMP.
 This looks to me as a half-way unification. Can we not totally get rid
 of atomic_32.h and atomic_64.h ? I mean since we are using unsigned long
 as atomic_t on both platforms, there should not be much difference
 (except maybe the inline asm).

 
 I could merge all atomic_32/64.h hunks into atomic.h if that this
 preferred, but I cannot help getting rid of the atomic_t vs. atomic64_t
 differences, thus the sub-arch specific part cannot be reduced as far as
 I see it ATM.

We could use atomic_long_t on the two arches.
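
Roughly like this, I guess (untested sketch; it assumes atomic_long_cmpxchg
and friends are available on the kernels we target, and it only covers the
kernel side - user space would still need its own inline asm):

/* Sketch: map the xnarch_atomic_* helpers onto atomic_long_t, which has
 * the same width as unsigned long on both i386 and x86_64. */
#include <asm/atomic.h>

typedef atomic_long_t atomic_counter_t;
typedef atomic_long_t xnarch_atomic_t;

#define xnarch_atomic_set(p, i)         atomic_long_set((p), (i))
#define xnarch_atomic_get(p)            atomic_long_read(p)
#define xnarch_atomic_inc(p)            atomic_long_inc(p)
#define xnarch_atomic_dec(p)            atomic_long_dec(p)
#define xnarch_atomic_inc_and_test(p)   atomic_long_inc_and_test(p)
#define xnarch_atomic_dec_and_test(p)   atomic_long_dec_and_test(p)
#define xnarch_atomic_cmpxchg(p, o, n)  atomic_long_cmpxchg((p), (o), (n))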

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 Hi Jan,
 
 Please do not use my address at gmail, gna does not want me to post from
 this address:
 
 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
 <Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
 mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
 [88.191.250.46]: 550 rejected because gmail.com is in a black list at
 dsn.rfc-ignorant.org
 
 so, here is a repost of my answer:
 
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 
 Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
 is to catch most of invalid usages, it seems we can not catch them all.

No, it works, because pthread_mutex_destroy will not be able to get the
write lock if the lock is read-locked.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Hi Gilles,
 
 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 
 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?

We actually iterate over the queue only if the magic happens to be
correct, which is not the common case.
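
For the record, the idea is roughly the following (paraphrased,
self-contained sketch with made-up names, not the literal source):

/* Paraphrased sketch: the registration queue is only walked when the
 * shadow already carries a valid-looking magic word. */
struct mutex_obj {
	struct mutex_obj *next;         /* linkage on the per-skin queue */
};

struct mutex_shadow {
	unsigned magic;                 /* 0 for BSS/zeroed memory */
	struct mutex_obj *mutex;        /* kernel-side companion object */
};

#define MUTEX_MAGIC 0x86860303          /* made-up value */

static int mutex_check_init_sketch(struct mutex_shadow *shadow,
				   struct mutex_obj *queue_head)
{
	struct mutex_obj *m;

	if (shadow->magic != MUTEX_MAGIC)
		return 0;               /* common case: no queue walk at all */

	for (m = queue_head; m; m = m->next)
		if (m == shadow->mutex)
			return -1;      /* already registered -> EBUSY */

	return 0;                       /* stale magic, reinit is allowed */
}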

-- 
Gilles.



Re: [Xenomai-core] [PATCH 2/2] Unify asm-x86/atomic.h

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 ...and also automatically fixes the missing LOCK prefix for
 pthread_mutex_* services on x86_32 SMP.
 This looks to me as a half-way unification. Can we not totally get rid
 of atomic_32.h and atomic_64.h ? I mean since we are using unsigned long
 as atomic_t on both platforms, there should not be much difference
 (except maybe the inline asm).

 I could merge all atomic_32/64.h hunks into atomic.h if that this
 preferred, but I cannot help getting rid of the atomic_t vs. atomic64_t
 differences, thus the sub-arch specific part cannot be reduced as far as
 I see it ATM.
 
 We could use atomic_long_t on the two arches.

OK, but then it becomes a wrapping business (2.4...) - in the long term a
vanishing issue, granted. Will look into this.
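
Probably just a small shim, something like this (hypothetical sketch;
testing for ATOMIC_LONG_INIT is only a guess at how to detect kernels that
predate atomic_long_t):

/* Sketch of a compatibility shim for kernels without atomic_long_t;
 * only meaningful on 32-bit, where atomic_t is already int-sized. */
#include <asm/atomic.h>

#ifndef ATOMIC_LONG_INIT
typedef atomic_t atomic_long_t;
#define ATOMIC_LONG_INIT(i)     ATOMIC_INIT(i)
#define atomic_long_read(v)     atomic_read(v)
#define atomic_long_set(v, i)   atomic_set((v), (i))
#define atomic_long_inc(v)      atomic_inc(v)
#define atomic_long_dec(v)      atomic_dec(v)
/* atomic_long_cmpxchg() would need its own fallback on very old kernels */
#endif /* !ATOMIC_LONG_INIT */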

Jan






Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?
 
 We actually iterate over the queue only if the magic happens to be
 correct, which is not the common case.

However, there remains a race window with other threads removing other
mutex objects in parallel, changing the queue - risking a kernel oops.
And that is what worries me. It's unlikely, but possible. It's unclean.

Jan





Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?
 We actually iterate over the queue only if the magic happens to be
 correct, which is not the common case.
 
 However, there remains a race window with other threads removing other
 mutex objects in parallel, changing the queue - risking a kernel oops.
 And that is what worries me. It's unlikely. but possible. It's unclean.

Ok. This used to be protected by the nklock. We should add the nklock again.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?
 We actually iterate over the queue only if the magic happens to be
 correct, which is not the common case.
 However, there remains a race window with other threads removing other
 mutex objects in parallel, changing the queue - risking a kernel oops.
 And that is what worries me. It's unlikely. but possible. It's unclean.
 
 Ok. This used to be protected by the nklock. We should add the nklock again.

Well I do not think that anyone is rescheduling, so we could probably
replace the nklock with a per-kqueue xnlock.
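
Something along these lines (sketch only; the field and helper names are
illustrative, only the xnlock_get_irqsave/xnlock_put_irqrestore pair is the
real nucleus API, and the nucleus headers are assumed to be pulled in by
the skin already):

/* Sketch: protect the registration queue walk with a per-kqueue xnlock
 * instead of the global nklock. */
struct mutex_shadow;                    /* user-visible shadow, as before */

typedef struct pse51_kqueues_sketch {
	xnlock_t lock;                  /* assumed initialized at queue setup */
	xnqueue_t mutexq;
	/* ... */
} pse51_kqueues_sketch_t;

static int walk_mutexq_and_check(pse51_kqueues_sketch_t *kq,
				 struct mutex_shadow *shadow);

static int mutex_check_init_locked(pse51_kqueues_sketch_t *kq,
				   struct mutex_shadow *shadow)
{
	spl_t s;
	int err;

	xnlock_get_irqsave(&kq->lock, s);
	err = walk_mutexq_and_check(kq, shadow);   /* the O(n) walk */
	xnlock_put_irqrestore(&kq->lock, s);

	return err;
}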

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?
 We actually iterate over the queue only if the magic happens to be
 correct, which is not the common case.
 However, there remains a race window with other threads removing other
 mutex objects in parallel, changing the queue - risking a kernel oops.
 And that is what worries me. It's unlikely. but possible. It's unclean.
 Ok. This used to be protected by the nklock. We should add the nklock again.
 
 Well I do not think that anyone is rescheduling, so we could probably
 replace the nklock with a per-kqueue xnlock.

Whether nklock or per-queue lock - both will introduce O(n) (at least
local) preemption blocking. That's why I was asking for an alternative
algorithm to iterating over the whole list.

Jan





Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 Gilles Chanteperdrix wrote:
 Hi Jan,

 Please do not use my address at gmail, gna does not want me to post from
 this address:

 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
 <Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
 mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
 [88.191.250.46]: 550 rejected because gmail.com is in a black list at
 dsn.rfc-ignorant.org

 so, here is a repost of my answer:

 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
 is to catch most of invalid usages, it seems we can not catch them all.

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?

 BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
 memory region? Why not using a handle here, like the native skin does?
 Won't that allow to resolve the issue above as well?
 This has been so from the beginning, and I did not change it.

 
 To get registry handles, you first need to register objects. The POSIX skin
 still does not use the built-in registry, that's why.

Well, the registry is about associating objects with their names, and
since most posix skin objects have no name, I did not see the point of
using the registry. And for the named objects, the nucleus registry was
not compatible with the posix skin requirements, which is why I did not
use it...

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 Gilles Chanteperdrix wrote:
 Hi Jan,

 Please do not use my address at gmail, gna does not want me to post from
 this address:

 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
 <Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
 mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
 [88.191.250.46]: 550 rejected because gmail.com is in a black list at
 dsn.rfc-ignorant.org

 so, here is a repost of my answer:

 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
 is to catch most of invalid usages, it seems we can not catch them all.

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?

 BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
 memory region? Why not using a handle here, like the native skin does?
 Won't that allow to resolve the issue above as well?
 This has been so from the beginning, and I did not change it.

 To get registry handles, you first need to register objects. The POSIX skin
 still does not use the built-in registry, that's why.
 
 Well the registry is about associating objects with their name, and
 since most posix skin objects have no name, I did not see the point of
 using the registry. And for the named objects, the nucleus registry was
 not compatible with the posix skin requirements, which is why I did not
 use it...
 

The thing is that, without built-in registry support, you have no /proc export
of any status data either.

-- 
Philippe.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?
 We actually iterate over the queue only if the magic happens to be
 correct, which is not the common case.
 However, there remains a race window with other threads removing other
 mutex objects in parallel, changing the queue - risking a kernel oops.
 And that is what worries me. It's unlikely. but possible. It's unclean.
 Ok. This used to be protected by the nklock. We should add the nklock again.
 Well I do not think that anyone is rescheduling, so we could probably
 replace the nklock with a per-kqueue xnlock.
 
 If nklock or per queue - both will introduce O(n) at least local
 preemption blocking. That's why I was asking for an alternative
 algorithm than iterating over the whole list.

I insist:
- the loop does almost nothing, so n would have to become very large for
it to take a long time, and n is the number of mutexes allocated so far
in one application, or the number of shared mutexes, which is probably
even smaller;
- the loop only happens if the magic happens to be good, so probably only
if you call pthread_mutex_init twice for the same mutex. The normal
use-case is to allocate the mutex in BSS, so the magic of a normal
application calling pthread_mutex_init is always 0, and you do not enter
the loop (see the sketch below).
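
To illustrate the second point with plain user-space code (nothing
Xenomai-specific here):

/* A mutex placed in BSS starts out zero-filled, so its shadow magic is 0
 * and the queue walk is skipped on the first pthread_mutex_init() call;
 * only re-initializing the same object could hit the loop. */
#include <pthread.h>

static pthread_mutex_t m;               /* BSS -> zero-initialized shadow */

int init_once(void)
{
	return pthread_mutex_init(&m, NULL);
}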

Today, I consider it much more of a problem that I cannot call fork in a
Xenomai application with open file descriptors and then exit the parent
application without the descriptors also being closed in the child. That
is the thing I will spend time fixing first. Using the registry in the
posix skin will only come next.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
 Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 Gilles Chanteperdrix wrote:
 Hi Jan,

 Please do not use my address at gmail, gna does not want me to post from
 this address:

 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
 <Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
 mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
 [88.191.250.46]: 550 rejected because gmail.com is in a black list at
 dsn.rfc-ignorant.org

 so, here is a repost of my answer:

 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
 is to catch most of invalid usages, it seems we can not catch them all.

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?

 BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
 memory region? Why not using a handle here, like the native skin does?
 Won't that allow to resolve the issue above as well?
 This has been so from the beginning, and I did not change it.

 To get registry handles, you first need to register objects. The POSIX skin
 still does not use the built-in registry, that's why.
 Well the registry is about associating objects with their name, and
 since most posix skin objects have no name, I did not see the point of
 using the registry. And for the named objects, the nucleus registry was
 not compatible with the posix skin requirements, which is why I did not
 use it...

 
 The thing is that, without built-in registry support, you have no /proc export
 of any status data either.

Yes, I did not see the point at the time when the registry was added,
but I do see it now...

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Philippe Gerum wrote:
 Gilles Chanteperdrix wrote:
 Hi Jan,

 Please do not use my address at gmail, gna does not want me to post from
 this address:

 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
 <Xenomai-core@gna.org> R=dnslookup T=remote_smtp: SMTP error from remote
 mailer after RCPT TO:<Xenomai-[EMAIL PROTECTED]>: host mail.gna.org
 [88.191.250.46]: 550 rejected because gmail.com is in a black list at
 dsn.rfc-ignorant.org

 so, here is a repost of my answer:

 Jan Kiszka wrote:
 Hi Gilles,

 trying to understand the cb_read/write lock usage, some question came up
 here: What prevents that the mutexq iteration in pse51_mutex_check_init
 races against pse51_mutex_destroy_internal?
 Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
 is to catch most of invalid usages, it seems we can not catch them all.

 If nothing, then I wonder if we actually have to iterate over the whole
 queue to find out whether a given object has been initialized and
 registered already or not. Can't this be encoded differently?

 BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
 memory region? Why not using a handle here, like the native skin does?
 Won't that allow to resolve the issue above as well?
 This has been so from the beginning, and I did not change it.

 To get registry handles, you first need to register objects. The POSIX skin
 still does not use the built-in registry, that's why.
 Well the registry is about associating objects with their name, and
 since most posix skin objects have no name, I did not see the point of
 using the registry. And for the named objects, the nucleus registry was
 not compatible with the posix skin requirements, which is why I did not
 use it...
 
 The registry is also about providing user-safe handles for unnamed
 objects - so that you don't risk accepting broken kernel pointers from
 user space.

Yes, and from a security point of view, accepting pointers from
user-space may help an ordinary user become root by passing cleverly
crafted kernel-space addresses.

-- 
Gilles.
