Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-19 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> As far as I understood, the user-space atomic operations, used to
>>> acquire a free mutex in user-space, are not part of the futex API. In
>>> our case, we are using xnarch_atomic_* operations to implement this
>>> user-space locking stuff portably. I think that even setting the bit
>>> saying that the mutex is currently owned is done in the pthread_mutex
>>> implementation, not in the futex API.
>>
>> I would fully agree if the futex API did not define PI-based ops, which are
>> needed for proper real-time operation from userland. You will certainly agree
>> that PI implies that some kind of ownership exists; and because there can't be
>> more than a single owner in that case, you end up with an object that can't be
>> held by more than a single task. So you do have a mutex in disguise, whatever
>> the way its state is kept (a bit, an integer, whatever). So there are stronger
>> semantics attached to that API than simply managing an event notification
>> scheme.
>>
>>> Now, what remains is sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which
>>> looks terribly like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
>>
>> Yes, here again I partially agree, except for a significant issue: xnsynch is a
>> stateless object (that's why we can use it for different syncobjs which are
>> totally unrelated in their semantics - mutex, queue, region, counting sems,
>> whatever). I was just wondering if we could make the *tex thingy a bit more
>> stateful to ease the job for the skins, just in case we would use it for mutexes
>> only. I have no immediate answer to this question, just asking -- this is my
>> contribution as a senior member of the peanut gallery.
>
> We can certainly implement an abstraction managing xnarch_atomic_t +
> xnsynch_t; however, it seems that we would have to refactor all
> mutex/semaphore implementations to use this new abstraction. The
> current approach is to add an xnarch_atomic_cmpxchg in user-space, and
> fall back to an almost unchanged kernel-space support when it fails.

Ok. Let's merge this as it is. Common code will emerge eventually if it happens
to make sense when plugging the feature into the native and VxWorks mutex
support.

>>>>> I feel this would complicate things: currently, the way I implemented
>>>>> user-space mutexes for the posix skin kept the old association between
>>>>> the user-space mutex and its kernel-space companion, also used by
>>>>> kernel-space operations.
>>>>
>>>> My concern boils down to: how much of the POSIX implementation, beyond the
>>>> cb_lock stuff, would have to be duplicated to get the same support ported
>>>> to, say, the VxWorks semM services?
>>>
>>> The initialization code, and the additional calls to
>>> xnarch_atomic_cmpxchg in user-space. If xnarch_atomic_cmpxchg fails, we
>>> call kernel-space, which is mostly unchanged.
>>>
>>> Because of the cb_lock stuff, I also needed to implement the
>>> kernel-space syscalls in two versions: one if user-space has
>>> xnarch_atomic_cmpxchg and could lock the mutex control block, the other
>>> if the mutex control block needs to be locked by kernel-space.
>>
>> This is the part my laziness would very much like to factor as much as
>> possible. If ever possible.
>
> The current implementation is more or less:
>
> /* Common generic function */
> pse51_mutex_thing_inner(mutex)
> {
> 	/* ... assumes the mutex control block is locked ... */
> }
>
> /* Kernel-space operation */
> pthread_mutex_thing(mutex)
> {
> 	cb_trylock(mutex->lock);
>
> 	pse51_mutex_thing_inner(mutex);
>
> 	cb_unlock(mutex->lock);
> }
>
> /* Syscall wrapper */
> #ifdef XNARCH_HAVE_US_ATOMIC_CMPXCHG
> /* User-space has atomic ops and already locked the control block. */
> __pthread_mutex_thing()
> {
> 	copy_from_user();
>
> 	pse51_mutex_thing_inner(mutex);
>
> 	copy_to_user();
> }
> #else
> /* No user-space atomics: the control block gets locked here. */
> __pthread_mutex_thing()
> {
> 	copy_from_user();
>
> 	pthread_mutex_thing(mutex);
>
> 	copy_to_user();
> }
> #endif


-- 
Philippe.



Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-19 Thread Gilles Chanteperdrix
On Mon, May 19, 2008 at 2:56 PM, Philippe Gerum [EMAIL PROTECTED] wrote:
> Gilles Chanteperdrix wrote:
>> Philippe Gerum wrote:
>>> Gilles Chanteperdrix wrote:
>>>> As far as I understood, the user-space atomic operations, used to
>>>> acquire a free mutex in user-space, are not part of the futex API. In
>>>> our case, we are using xnarch_atomic_* operations to implement this
>>>> user-space locking stuff portably. I think that even setting the bit
>>>> saying that the mutex is currently owned is done in the pthread_mutex
>>>> implementation, not in the futex API.
>>>
>>> I would fully agree if the futex API did not define PI-based ops, which are
>>> needed for proper real-time operation from userland. You will certainly agree
>>> that PI implies that some kind of ownership exists; and because there can't be
>>> more than a single owner in that case, you end up with an object that can't be
>>> held by more than a single task. So you do have a mutex in disguise, whatever
>>> the way its state is kept (a bit, an integer, whatever). So there are stronger
>>> semantics attached to that API than simply managing an event notification
>>> scheme.
>>>
>>>> Now, what remains is sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which
>>>> looks terribly like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
>>>
>>> Yes, here again I partially agree, except for a significant issue: xnsynch is a
>>> stateless object (that's why we can use it for different syncobjs which are
>>> totally unrelated in their semantics - mutex, queue, region, counting sems,
>>> whatever). I was just wondering if we could make the *tex thingy a bit more
>>> stateful to ease the job for the skins, just in case we would use it for mutexes
>>> only. I have no immediate answer to this question, just asking -- this is my
>>> contribution as a senior member of the peanut gallery.
>>
>> We can certainly implement an abstraction managing xnarch_atomic_t +
>> xnsynch_t; however, it seems that we would have to refactor all
>> mutex/semaphore implementations to use this new abstraction. The
>> current approach is to add an xnarch_atomic_cmpxchg in user-space, and
>> fall back to an almost unchanged kernel-space support when it fails.
>
> Ok. Let's merge this as it is. Common code will emerge eventually if it happens
> to make sense when plugging the feature into the native and VxWorks mutex
> support.

Ok. I need a few more days though, to adapt it to other architectures than ARM.

-- 
 Gilles



Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-19 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
> On Mon, May 19, 2008 at 2:56 PM, Philippe Gerum [EMAIL PROTECTED] wrote:
>> Gilles Chanteperdrix wrote:
>>> Philippe Gerum wrote:
>>>> Gilles Chanteperdrix wrote:
>>>>> As far as I understood, the user-space atomic operations, used to
>>>>> acquire a free mutex in user-space, are not part of the futex API. In
>>>>> our case, we are using xnarch_atomic_* operations to implement this
>>>>> user-space locking stuff portably. I think that even setting the bit
>>>>> saying that the mutex is currently owned is done in the pthread_mutex
>>>>> implementation, not in the futex API.
>>>>
>>>> I would fully agree if the futex API did not define PI-based ops, which are
>>>> needed for proper real-time operation from userland. You will certainly agree
>>>> that PI implies that some kind of ownership exists; and because there can't be
>>>> more than a single owner in that case, you end up with an object that can't be
>>>> held by more than a single task. So you do have a mutex in disguise, whatever
>>>> the way its state is kept (a bit, an integer, whatever). So there are stronger
>>>> semantics attached to that API than simply managing an event notification
>>>> scheme.
>>>>
>>>>> Now, what remains is sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which
>>>>> looks terribly like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
>>>>
>>>> Yes, here again I partially agree, except for a significant issue: xnsynch is a
>>>> stateless object (that's why we can use it for different syncobjs which are
>>>> totally unrelated in their semantics - mutex, queue, region, counting sems,
>>>> whatever). I was just wondering if we could make the *tex thingy a bit more
>>>> stateful to ease the job for the skins, just in case we would use it for mutexes
>>>> only. I have no immediate answer to this question, just asking -- this is my
>>>> contribution as a senior member of the peanut gallery.
>>>
>>> We can certainly implement an abstraction managing xnarch_atomic_t +
>>> xnsynch_t; however, it seems that we would have to refactor all
>>> mutex/semaphore implementations to use this new abstraction. The
>>> current approach is to add an xnarch_atomic_cmpxchg in user-space, and
>>> fall back to an almost unchanged kernel-space support when it fails.
>>
>> Ok. Let's merge this as it is. Common code will emerge eventually if it happens
>> to make sense when plugging the feature into the native and VxWorks mutex
>> support.
>
> Ok. I need a few more days though, to adapt it to other architectures than ARM.

I changed my mind and committed the whole stuff. Instead of implementing
full atomic operations in user-space for platforms other than ARM, I
have left this work for others.

-- 
Gilles.


Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> The two syscalls defined in the posix skin have now moved to the sys skin;
> they are used in user-space by include/asm-generic/bits/bind.h and the new
> header include/asm-generic/bits/current.h. The global and process-specific
> shared heaps are now part of this patch.


Is there any reason why the nucleus should not implement full-fledged RT
futex support, instead of a toolbox to build it? I'm concerned by skins
reinventing their own wheel uselessly to get to the same point at the end of
the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
futexes, so I would move them upstream one level more.

In that respect, talking about semaphore heaps at nucleus level looks like a
bit of a misnomer: if we mostly bring a service to map non-cacheable memory to
user-space, then we don't actually provide semaphore support.

> ---
>  include/asm-generic/bits/Makefile.am |    1
>  include/asm-generic/bits/bind.h      |  121 +++
>  include/asm-generic/bits/current.h   |   14
>  include/asm-generic/syscall.h        |    2
>  include/nucleus/Makefile.am          |    1
>  include/nucleus/sys_ppd.h            |   37 ++
>  include/posix/syscall.h              |    1
>  ksrc/nucleus/Kconfig                 |   25 +++
>  ksrc/nucleus/module.c                |   19 -
>  ksrc/nucleus/shadow.c                |  103 -
>  src/skins/posix/thread.c             |    4 +
>  11 files changed, 323 insertions(+), 5 deletions(-)
 
> Index: include/asm-generic/syscall.h
> ===================================================================
> --- include/asm-generic/syscall.h	(revision 3738)
> +++ include/asm-generic/syscall.h	(working copy)
> @@ -30,6 +30,8 @@
>  #define __xn_sys_info   4	/* xnshadow_get_info(muxid,info) */
>  #define __xn_sys_arch   5	/* r = xnarch_local_syscall(args) */
>  #define __xn_sys_trace  6	/* r = xntrace_xxx(...) */
> +#define __xn_sys_sem_heap   7
> +#define __xn_sys_current    8
>  
>  #define XENOMAI_LINUX_DOMAIN  0
>  #define XENOMAI_XENO_DOMAIN   1
> Index: ksrc/nucleus/Kconfig
> ===================================================================
> --- ksrc/nucleus/Kconfig	(revision 3738)
> +++ ksrc/nucleus/Kconfig	(working copy)
> @@ -158,6 +158,31 @@ config XENO_OPT_SYS_STACKPOOLSZ
>  	default 0
>  endif
>  
> +config XENO_OPT_SEM_HEAPSZ
> +	int "Size of private semaphores heap (Kb)"
> +	default 12
> +	help
> +
> +	The Xenomai implementation of user-space semaphores relies on heaps
> +	shared between kernel and user-space. This configuration entry
> +	allows setting the size of the heap used for private semaphores.
> +
> +	Note that each semaphore will allocate 4 or 8 bytes of memory,

It would be nice to tell the folks why such a difference exists (32/64-bit arch?).

> +	so, the default of 12 Kb allows creating many semaphores.
> +
> +config XENO_OPT_GLOBAL_SEM_HEAPSZ
> +	int "Size of global semaphores heap (Kb)"
> +	default 12
> +	help
> +
> +	The Xenomai implementation of user-space semaphores relies on heaps
> +	shared between kernel and user-space. This configuration entry
> +	allows setting the size of the heap used for semaphores shared
> +	between several processes.
> +
> +	Note that each semaphore will allocate 4 or 8 bytes of memory,
> +	so, the default of 12 Kb allows creating many semaphores.
> +
>  config XENO_OPT_STATS
>  	bool "Statistics collection"
>  	depends on PROC_FS
> Index: ksrc/nucleus/module.c
> ===================================================================
> --- ksrc/nucleus/module.c	(revision 3738)
> +++ ksrc/nucleus/module.c	(working copy)
> @@ -28,6 +28,7 @@
>  #include <nucleus/timer.h>
>  #include <nucleus/heap.h>
>  #include <nucleus/version.h>
> +#include <nucleus/sys_ppd.h>
>  #ifdef CONFIG_XENO_OPT_PIPE
>  #include <nucleus/pipe.h>
>  #endif /* CONFIG_XENO_OPT_PIPE */
> @@ -50,6 +51,8 @@ u_long xnmod_sysheap_size;
>  
>  int xeno_nucleus_status = -EINVAL;
>  
> +struct xnsys_ppd __xnsys_global_ppd;
> +
>  void xnmod_alloc_glinks(xnqueue_t *freehq)
>  {
>  	xngholder_t *sholder, *eholder;
> @@ -1156,6 +1159,14 @@ int __init __xeno_sys_init(void)
>  	if (err)
>  		goto fail;
>  
> +	err = xnheap_init_mapped(&__xnsys_global_ppd.sem_heap,
> +				 CONFIG_XENO_OPT_GLOBAL_SEM_HEAPSZ * 1024,
> +				 XNARCH_SHARED_HEAP_FLAGS ?:
> +				 (CONFIG_XENO_OPT_GLOBAL_SEM_HEAPSZ <= 128
> +				  ? GFP_USER : 0));
> +	if (err)
> +		goto cleanup_arch;
> +
>  #ifdef __KERNEL__
>  #ifdef CONFIG_PROC_FS
>  	xnpod_init_proc();
> @@ -1167,7 +1178,7 @@ int __init __xeno_sys_init(void)
>  	err = xnpipe_mount();
>  
>  	if (err)
> -		goto cleanup_arch;
> +		goto cleanup_proc;
>  #endif /* CONFIG_XENO_OPT_PIPE */
>  
>  #ifdef CONFIG_XENO_OPT_PERVASIVE
> @@ -1207,7 +1218,7 @@ int __init

Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> The two syscalls defined in the posix skin have now moved to the sys skin;
>> they are used in user-space by include/asm-generic/bits/bind.h and the new
>> header include/asm-generic/bits/current.h. The global and process-specific
>> shared heaps are now part of this patch.
>
> Is there any reason why the nucleus should not implement full-fledged RT
> futex support, instead of a toolbox to build it? I'm concerned by skins
> reinventing their own wheel uselessly to get to the same point at the end of
> the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
> futexes, so I would move them upstream one level more.

First of all, because I do not know much about what a futex is; from my
point of view, it has very much to do with making threads wait in
kernel-space for a user-space change. By using kernel/user shared heaps,
we seem to be far from these considerations.

Now, for the implementation of user-space mutexes per se, apart from the
xnarch_atomic_* operations, I think every skin will have its own
tradeoffs. For instance, the cb_lock thing is itself a tradeoff: it
sacrifices a bit of performance, making a mutex lock operation use
three atomic operations instead of only one, for the sake of safely
getting a correct value of the mapped memory pointer. Besides, it works
well with the posix skin, because the POSIX specification allows
pthread_mutex_destroy to return an error if a mutex is currently in
use. Currently, the native skin mutex destruction operation is
successful even if the mutex is currently in use (the owner does not get
any notification; simply, its unlock operation will fail, and waiters
receive an -EIDRM error). So, I would guess the native skin will have to
use something other than cb_lock.
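Concretely, the three-atomic-op sequence I am referring to would look more
or less like this (just a sketch; struct mutex_cb, fast_mutex_lock and the
error conventions are made-up names, not the actual posix skin code):

#include <errno.h>

static int fast_mutex_lock(struct mutex_cb *cb, unsigned long tid)
{
	/* Atomic op #1: pin the control block, so that its mapped-memory
	   pointer cannot be invalidated under our feet by a concurrent
	   pthread_mutex_destroy(). */
	if (cb_trylock(&cb->lock))
		return -EINVAL;		/* Destroyed, or being destroyed. */

	/* Atomic op #2: try to grab an unowned mutex. */
	if (xnarch_atomic_cmpxchg(&cb->owner, 0, tid) != 0) {
		cb_unlock(&cb->lock);	/* Atomic op #3. */
		return -EWOULDBLOCK;	/* Contended: go to kernel-space. */
	}

	cb_unlock(&cb->lock);		/* Atomic op #3. */

	return 0;
}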

> In that respect, talking about semaphore heaps at nucleus level looks like a
> bit of a misnomer: if we mostly bring a service to map non-cacheable memory to
> user-space, then we don't actually provide semaphore support.

It was only a name making clear to the user what the heaps will be used
for... I imagine people having to configure yet another heap size in
kernel configuration and wondering what this heap will be used for.

-- 
Gilles.


Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> The two syscalls defined in the posix skin have now moved to the sys skin;
>> they are used in user-space by include/asm-generic/bits/bind.h and the new
>> header include/asm-generic/bits/current.h. The global and process-specific
>> shared heaps are now part of this patch.
>
> Is there any reason why the nucleus should not implement full-fledged RT
> futex support, instead of a toolbox to build it? I'm concerned by skins
> reinventing their own wheel uselessly to get to the same point at the end of
> the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
> futexes, so I would move them upstream one level more.
>
> In that respect, talking about semaphore heaps at nucleus level looks like a
> bit of a misnomer: if we mostly bring a service to map non-cacheable memory to
> user-space, then we don't actually provide semaphore support.

If I understand correctly, a futex is, in Xenomai terms, a way to
associate a user-space address with an xnsynch object. I feel this
would complicate things: currently, the way I implemented user-space
mutexes for the posix skin kept the old association between the
user-space mutex and its kernel-space companion, also used by
kernel-space operations.
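For what that association would mean in practice: the nucleus would have to
maintain something like the vanilla kernel's futex hash, keyed on the
user-space word address. A purely hypothetical sketch, not part of the patch
(xnqueue/xnholder are the regular nucleus list types):

#include <nucleus/queue.h>
#include <nucleus/synch.h>

struct futex_node {
	xnholder_t link;
	unsigned long uaddr;	/* Key: address of the user-space word. */
	xnsynch_t synch;	/* Kernel-side wait queue for that word. */
};

#define link2futex(h) container_of(h, struct futex_node, link)

static xnqueue_t futex_hash[256];

static struct futex_node *futex_lookup(unsigned long uaddr)
{
	xnqueue_t *q = &futex_hash[(uaddr >> 2) % 256];
	xnholder_t *h;

	for (h = getheadq(q); h; h = nextq(q, h)) {
		struct futex_node *node = link2futex(h);

		if (node->uaddr == uaddr)
			return node;
	}

	return NULL;	/* Caller would create and enqueue a new node. */
}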

-- 
Gilles.


Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> The two syscalls defined in the posix skin have now moved to the sys skin;
>>> they are used in user-space by include/asm-generic/bits/bind.h and the new
>>> header include/asm-generic/bits/current.h. The global and process-specific
>>> shared heaps are now part of this patch.
>>
>> Is there any reason why the nucleus should not implement full-fledged RT
>> futex support, instead of a toolbox to build it? I'm concerned by skins
>> reinventing their own wheel uselessly to get to the same point at the end of
>> the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
>> futexes, so I would move them upstream one level more.
>>
>> In that respect, talking about semaphore heaps at nucleus level looks like a
>> bit of a misnomer: if we mostly bring a service to map non-cacheable memory to
>> user-space, then we don't actually provide semaphore support.
>
> If I understand correctly, a futex is, in Xenomai terms, a way to
> associate a user-space address with an xnsynch object.

I would specialize it more actually, so that it really resembles the vanilla
futex support, i.e. a basic object implementing the required operations to
provide mutually exclusive access, working on a pinned memory area shared
between kernel and userland. AFAICS, the current patchset implements the
pinned memory support in the nucleus, but not the operations, which remain a
per-skin issue.
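One possible shape for such an object, as a sketch only (the xntex name and
layout are invented for the sake of the example): the state word lives in the
pinned heap shared with user-space, while the xnsynch provides the kernel-side
wait queue with PI support.

#include <nucleus/synch.h>

typedef struct xntex {
	xnarch_atomic_t *state;	/* Word in the kernel/user shared heap. */
	xnsynch_t synch;	/* Kernel wait queue, priority inheritance. */
} xntex_t;

static inline void xntex_init(xntex_t *tex, xnarch_atomic_t *state)
{
	tex->state = state;
	xnarch_atomic_set(state, 0);	/* Unowned. */
	xnsynch_init(&tex->synch, XNSYNCH_PRIO | XNSYNCH_PIP);
}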

> I feel this would complicate things: currently, the way I implemented
> user-space mutexes for the posix skin kept the old association between
> the user-space mutex and its kernel-space companion, also used by
> kernel-space operations.

My concern boils down to: how much of the POSIX implementation, beyond the
cb_lock stuff, would have to be duplicated to get the same support ported to,
say, the VxWorks semM services?

-- 
Philippe.



Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> Philippe Gerum wrote:
>>> Gilles Chanteperdrix wrote:
>>>> The two syscalls defined in the posix skin have now moved to the sys skin;
>>>> they are used in user-space by include/asm-generic/bits/bind.h and the new
>>>> header include/asm-generic/bits/current.h. The global and process-specific
>>>> shared heaps are now part of this patch.
>>>
>>> Is there any reason why the nucleus should not implement full-fledged RT
>>> futex support, instead of a toolbox to build it? I'm concerned by skins
>>> reinventing their own wheel uselessly to get to the same point at the end of
>>> the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
>>> futexes, so I would move them upstream one level more.
>>>
>>> In that respect, talking about semaphore heaps at nucleus level looks like a
>>> bit of a misnomer: if we mostly bring a service to map non-cacheable memory to
>>> user-space, then we don't actually provide semaphore support.
>>
>> If I understand correctly, a futex is, in Xenomai terms, a way to
>> associate a user-space address with an xnsynch object.
>
> I would specialize it more actually, so that it really resembles the vanilla
> futex support, i.e. a basic object implementing the required operations to
> provide mutually exclusive access, working on a pinned memory area shared
> between kernel and userland. AFAICS, the current patchset implements the
> pinned memory support in the nucleus, but not the operations, which remain a
> per-skin issue.

As far as I understood, the user-space atomic operations, used to
acquire a free mutex in user-space, are not part of the futex API. In
our case, we are using xnarch_atomic_* operations to implement this
user-space locking stuff portably. I think that even setting the bit
saying that the mutex is currently owned is done in the pthread_mutex
implementation, not in the futex API. Now, what remains is
sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which looks terribly
like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
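To make the parallel explicit, the two slow-path operations would boil down to
something like this (a sketch with invented names; the state word is assumed
to live in the shared heap, and wakeup-cause handling is elided):

static int tex_wait_slow(xnarch_atomic_t *state, xnsynch_t *synch,
			 xnticks_t timeout)
{
	spl_t s;

	xnlock_get_irqsave(&nklock, s);

	/* Re-check the state word under nklock, so that a release racing
	   with us cannot be missed, then sleep as FUTEX_WAIT would. */
	if (xnarch_atomic_get(state) != 0) {
		xnsynch_sleep_on(synch, timeout);
		/* Here, test the current thread for XNTIMEO, XNRMID or
		   XNBREAK to turn the wakeup cause into an error code. */
	}

	xnlock_put_irqrestore(&nklock, s);

	return 0;
}

/* The FUTEX_WAKE counterpart. */
static void tex_wake_slow(xnsynch_t *synch)
{
	spl_t s;

	xnlock_get_irqsave(&nklock, s);

	if (xnsynch_wakeup_one_sleeper(synch) != NULL)
		xnpod_schedule();

	xnlock_put_irqrestore(&nklock, s);
}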

  
>> I feel this would complicate things: currently, the way I implemented
>> user-space mutexes for the posix skin kept the old association between
>> the user-space mutex and its kernel-space companion, also used by
>> kernel-space operations.
>
> My concern boils down to: how much of the POSIX implementation, beyond the
> cb_lock stuff, would have to be duplicated to get the same support ported
> to, say, the VxWorks semM services?

The initialization code, and the additional calls to
xnarch_atomic_cmpxchg in user-space. If xnarch_atomic_cmpxchg fails, we
call kernel-space, which is mostly unchanged.

Because of the cb_lock stuff, I also needed to implement the
kernel-space syscalls in two versions: one if user-space has
xnarch_atomic_cmpxchg and could lock the mutex control block, the other
if the mutex control block needs to be locked by kernel-space.
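In other words, the user-space side of a lock operation is roughly (a sketch;
the mutex layout, the muxid and the syscall number are placeholders, and
my_current_handle() stands for whatever identifies the current thread):

static int my_mutex_lock(struct my_mutex *mutex)
{
	unsigned long tid = my_current_handle();

	/* Fast path: free -> owned with one cmpxchg, no syscall. */
	if (xnarch_atomic_cmpxchg(&mutex->owner, 0, tid) == 0)
		return 0;

	/* Slow path: contended; the mostly unchanged kernel-space
	   support queues us on the xnsynch companion object. */
	return XENOMAI_SKINCALL1(my_muxid, __my_mutex_lock, mutex);
}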

-- 
Gilles.


Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> Philippe Gerum wrote:
>>>> Gilles Chanteperdrix wrote:
>>>>> The two syscalls defined in the posix skin have now moved to the sys skin;
>>>>> they are used in user-space by include/asm-generic/bits/bind.h and the new
>>>>> header include/asm-generic/bits/current.h. The global and process-specific
>>>>> shared heaps are now part of this patch.
>>>>
>>>> Is there any reason why the nucleus should not implement full-fledged RT
>>>> futex support, instead of a toolbox to build it? I'm concerned by skins
>>>> reinventing their own wheel uselessly to get to the same point at the end of
>>>> the day; e.g. cb_lock ops seem to me fairly generic when it comes to handling
>>>> futexes, so I would move them upstream one level more.
>>>>
>>>> In that respect, talking about semaphore heaps at nucleus level looks like a
>>>> bit of a misnomer: if we mostly bring a service to map non-cacheable memory
>>>> to user-space, then we don't actually provide semaphore support.
>>>
>>> If I understand correctly, a futex is, in Xenomai terms, a way to
>>> associate a user-space address with an xnsynch object.
>>
>> I would specialize it more actually, so that it really resembles the vanilla
>> futex support, i.e. a basic object implementing the required operations to
>> provide mutually exclusive access, working on a pinned memory area shared
>> between kernel and userland. AFAICS, the current patchset implements the
>> pinned memory support in the nucleus, but not the operations, which remain a
>> per-skin issue.
>
> As far as I understood, the user-space atomic operations, used to
> acquire a free mutex in user-space, are not part of the futex API. In
> our case, we are using xnarch_atomic_* operations to implement this
> user-space locking stuff portably. I think that even setting the bit
> saying that the mutex is currently owned is done in the pthread_mutex
> implementation, not in the futex API.

I would fully agree if the futex API did not define PI-based ops, which are
needed for proper real-time operation from userland. You will certainly agree
that PI implies that some kind of ownership exists; and because there can't be
more than a single owner in that case, you end up with an object that can't be
held by more than a single task. So you do have a mutex in disguise, whatever
the way its state is kept (a bit, an integer, whatever). So there are stronger
semantics attached to that API than simply managing an event notification
scheme.
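For the record, this is exactly what the vanilla PI futexes do: the futex word
itself encodes the ownership. The constants below come from the real futex ABI
(linux/futex.h); the helper is only illustrative:

#include <linux/futex.h>	/* FUTEX_WAITERS, FUTEX_TID_MASK */

static inline int futex_word_owner(unsigned int word)
{
	/* A non-zero TID field means a single, well-identified owner --
	   i.e. a mutex, whatever we choose to call it. The FUTEX_WAITERS
	   bit only tells the owner to go through the kernel on unlock. */
	return word & FUTEX_TID_MASK;
}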

> Now, what remains is sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which
> looks terribly like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
 

Yes, here again I partially agree, except for a significant issue: xnsynch is a
stateless object (that's why we can use it for different syncobjs which are
totally unrelated in their semantics - mutex, queue, region, counting sems,
whatever). I was just wondering if we could make the *tex thingy a bit more
stateful to ease the job for the skins, just in case we would use it for mutexes
only. I have no immediate answer to this question, just asking -- this is my
contribution as a senior member of the peanut gallery.

   
>>> I feel this would complicate things: currently, the way I implemented
>>> user-space mutexes for the posix skin kept the old association between
>>> the user-space mutex and its kernel-space companion, also used by
>>> kernel-space operations.

   
>> My concern boils down to: how much of the POSIX implementation, beyond the
>> cb_lock stuff, would have to be duplicated to get the same support ported
>> to, say, the VxWorks semM services?
 
> The initialization code, and the additional calls to
> xnarch_atomic_cmpxchg in user-space. If xnarch_atomic_cmpxchg fails, we
> call kernel-space, which is mostly unchanged.
>
> Because of the cb_lock stuff, I also needed to implement the
> kernel-space syscalls in two versions: one if user-space has
> xnarch_atomic_cmpxchg and could lock the mutex control block, the other
> if the mutex control block needs to be locked by kernel-space.
 

This is the part my laziness would very much like to factor as much as possible.
If ever possible.

-- 
Philippe.



Re: [Xenomai-core] [Patch 5/7] Define new syscalls for the system skin

2008-05-18 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> As far as I understood, the user-space atomic operations, used to
>> acquire a free mutex in user-space, are not part of the futex API. In
>> our case, we are using xnarch_atomic_* operations to implement this
>> user-space locking stuff portably. I think that even setting the bit
>> saying that the mutex is currently owned is done in the pthread_mutex
>> implementation, not in the futex API.
>
> I would fully agree if the futex API did not define PI-based ops, which are
> needed for proper real-time operation from userland. You will certainly agree
> that PI implies that some kind of ownership exists; and because there can't be
> more than a single owner in that case, you end up with an object that can't be
> held by more than a single task. So you do have a mutex in disguise, whatever
> the way its state is kept (a bit, an integer, whatever). So there are stronger
> semantics attached to that API than simply managing an event notification
> scheme.
>
>> Now, what remains is sys_futex(FUTEX_WAIT) and sys_futex(FUTEX_WAKE), which
>> looks terribly like xnsynch_sleep_on and xnsynch_wakeup_one_sleeper.
>
> Yes, here again I partially agree, except for a significant issue: xnsynch is a
> stateless object (that's why we can use it for different syncobjs which are
> totally unrelated in their semantics - mutex, queue, region, counting sems,
> whatever). I was just wondering if we could make the *tex thingy a bit more
> stateful to ease the job for the skins, just in case we would use it for mutexes
> only. I have no immediate answer to this question, just asking -- this is my
> contribution as a senior member of the peanut gallery.

We can certainly implement an abstraction managing xnarch_atomic_t +
xnsynch_t; however, it seems that we would have to refactor all
mutex/semaphore implementations to use this new abstraction. The
current approach is to add an xnarch_atomic_cmpxchg in user-space, and
fall back to an almost unchanged kernel-space support when it fails.

  
 
>>>> I feel this would complicate things: currently, the way I implemented
>>>> user-space mutexes for the posix skin kept the old association between
>>>> the user-space mutex and its kernel-space companion, also used by
>>>> kernel-space operations.
>>>
>>> My concern boils down to: how much of the POSIX implementation, beyond the
>>> cb_lock stuff, would have to be duplicated to get the same support ported
>>> to, say, the VxWorks semM services?
>>
>> The initialization code, and the additional calls to
>> xnarch_atomic_cmpxchg in user-space. If xnarch_atomic_cmpxchg fails, we
>> call kernel-space, which is mostly unchanged.
>>
>> Because of the cb_lock stuff, I also needed to implement the
>> kernel-space syscalls in two versions: one if user-space has
>> xnarch_atomic_cmpxchg and could lock the mutex control block, the other
>> if the mutex control block needs to be locked by kernel-space.
>
> This is the part my laziness would very much like to factor as much as
> possible. If ever possible.

The current implementation is more or less:

/* Common generic function */
pse51_mutex_thing_inner(mutex)
{
	/* ... assumes the mutex control block is locked ... */
}

/* Kernel-space operation */
pthread_mutex_thing(mutex)
{
	cb_trylock(mutex->lock);

	pse51_mutex_thing_inner(mutex);

	cb_unlock(mutex->lock);
}

/* Syscall wrapper */
#ifdef XNARCH_HAVE_US_ATOMIC_CMPXCHG
/* User-space has atomic ops and already locked the control block. */
__pthread_mutex_thing()
{
	copy_from_user();

	pse51_mutex_thing_inner(mutex);

	copy_to_user();
}
#else
/* No user-space atomics: the control block gets locked here. */
__pthread_mutex_thing()
{
	copy_from_user();

	pthread_mutex_thing(mutex);

	copy_to_user();
}
#endif


-- 
Gilles.