Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Philippe Gerum
Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 But then I wonder

  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)

 registry_validate() returns a pointer we want to dereference; we'd better keep
 this unpreemptable, although it's useless for the self-fetching op (which is an
 unused calling mode so far). If using xnregistry_remove() while fetching the
 object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;
 
 I still don't see the benefit in picking up the object pointer under
 nklock compared to the overhead of acquiring and releasing that lock.
 That's all not truly safe anyway. Even if you ensure that handle and
 object match, someone may change that pair before we can do the lookup
 under nklock.

Indeed, this is the whole point of the discussion unless I'm mistaken.

 
 In my understanding, registry slots either contain NULL or a pointer to
 some object. If that object is valid, that is checked _after_ releasing
 the lock, so we are unsafe again, _nothing_ gained.

Yes, I agree. I was focusing on _put/_get which maintain the refcount and ought
to be locked, but solely fetching under lock makes no sense. It's probably a
paranoid surge about having dynamically allocated slots, which won't happen
anytime soon.

 
 Jan
 


-- 
Philippe.

___
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core


Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Jan Kiszka
Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 But then I wonder

  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)

 registry_validate() returns a pointer we want to dereference; we'd better keep
 this unpreemptable, although it's useless for the self-fetching op (which is an
 unused calling mode so far). If using xnregistry_remove() while fetching the
 object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;
 I still don't see the benefit in picking up the object pointer under
 nklock compared to the overhead of acquiring and releasing that lock.
 That's all not truly safe anyway. Even if you ensure that handle and
 object match, someone may change that pair before we can do the lookup
 under nklock.
 Indeed, this is the whole point of the discussion unless I'm mistaken.

 In my understanding, registry slots either contain NULL or a pointer to
 some object. If that object is valid, that is checked _after_ releasing
 the lock, so we are unsafe again, _nothing_ gained.
 Yes, I agree. I was focusing on _put/_get which maintain the refcount and ought
 to be locked, but solely fetching under lock makes no sense. It's probably a
 paranoid surge about having dynamically allocated slots, which won't happen
 anytime soon.
 Then you are fine with this patch?

 
 Yes, only nitpicking about the else statement, but the logic is ok for me. While
 you are at it, you may also want to move the other self-directed requests out
 of the locked section.

Updated patch below.

Another question came up here: why do we allocate hash table objects
dynamically instead of putting the holding structure (probably only a
next pointer) into xnobject_t directly?

BTW, I'm also preparing a patch to overcome hash chain registrations for
anonymous (handle-only) objects as I need more of them (one for each
thread) for fast mutexes - to avoid fiddling with kernel pointers in
userland.

Jan

---
 ChangeLog   |6 ++
 ksrc/nucleus/registry.c |   42 +-
 2 files changed, 19 insertions(+), 29 deletions(-)

Index: b/ChangeLog
===================================================================
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2008-08-26  Jan Kiszka  [EMAIL PROTECTED]
+
+   * ksrc/nucleus/registry.c (xnregistry_fetch/get/put): Remove
+   pointless locking, move XNOBJECT_SELF lookups out of critical
+   sections.
+
 2008-08-25  Jan Kiszka  [EMAIL PROTECTED]
 
* include/asm-generic/wrappers.h: Provide atomic_long wrappings.
Index: b/ksrc/nucleus/registry.c
===================================================================
--- a/ksrc/nucleus/registry.c
+++ b/ksrc/nucleus/registry.c
@@ -1024,16 +1024,14 @@ void *xnregistry_get(xnhandle_t handle)
void *objaddr;
spl_t s;
 
-   xnlock_get_irqsave(nklock, s);
-
if (handle == XNOBJECT_SELF) {
-   if (!xnpod_primary_p()) {
-   objaddr = NULL;
-   goto unlock_and_exit;
-   }
+   if (!xnpod_primary_p())
+   return NULL;
handle = xnpod_current_thread()->registry.handle;
}
 
+   xnlock_get_irqsave(nklock, s);
+
object = registry_validate(handle);
 
if (object) {
@@ -1087,17 +1085,14 @@ u_long xnregistry_put(xnhandle_t handle)
u_long newlock;
spl_t s;
 
-   xnlock_get_irqsave(nklock, s);
-
if (handle == XNOBJECT_SELF) {
-   if (!xnpod_primary_p()) {
-   newlock = 0;
-   goto unlock_and_exit;
-   }
-
+   if (!xnpod_primary_p())
+   return 0;
handle = xnpod_current_thread()->registry.handle;
}
 
+   xnlock_get_irqsave(nklock, s);
+
object = registry_validate(handle);
 
if (!object) {
@@ -1150,28 +1145,17 @@ u_long xnregistry_put(xnhandle_t handle)
 void *xnregistry_fetch(xnhandle_t handle)
 {
xnobject_t *object;
-   void *objaddr;
-   spl_t s;
 
-   xnlock_get_irqsave(nklock, s);
-
-   if (handle == XNOBJECT_SELF) {
-   objaddr = xnpod_primary_p()? xnpod_current_thread() : NULL;
-   goto unlock_and_exit;
-   }
+   if (handle == XNOBJECT_SELF)
+   return xnpod_primary_p()? xnpod_current_thread() : NULL;
 
object = registry_validate(handle);
 
-   if (object)
objaddr = object->objaddr;
-   else
-   objaddr = NULL;
+   if (!object)
+   return NULL;
 
-  unlock_and_exit:
   return object->objaddr;
 
-   xnlock_put_irqrestore(nklock, s);
-
-   return objaddr;
 }
 
 /[EMAIL PROTECTED]/


Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Philippe Gerum
Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 But then I wonder

  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)

 registry_validate() returns a pointer we want to dereference; we'd better keep
 this unpreemptable, although it's useless for the self-fetching op (which is an
 unused calling mode so far). If using xnregistry_remove() while fetching the
 object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;
 I still don't see the benefit in picking up the object pointer under
 nklock compared to the overhead of acquiring and releasing that lock.
 That's all not truly safe anyway. Even if you ensure that handle and
 object match, someone may change that pair before we can do the lookup
 under nklock.
 Indeed, this is the whole point of the discussion unless I'm mistaken.

 In my understanding, registry slots either contain NULL or a pointer to
 some object. If that object is valid, that is checked _after_ releasing
 the lock, so we are unsafe again, _nothing_ gained.
 Yes, I agree. I was focusing on _put/_get which maintain the refcount and ought
 to be locked, but solely fetching under lock makes no sense. It's probably a
 paranoid surge about having dynamically allocated slots, which won't happen
 anytime soon.
 
 Then you are fine with this patch?
 

Yes, only nitpicking about the else statement, but the logic is ok for me. While
you are at it, you may also want to move the other self-directed requests out
of the locked section.

 ---
  ChangeLog   |5 +
  ksrc/nucleus/registry.c |   20 
  2 files changed, 9 insertions(+), 16 deletions(-)
 
 Index: b/ChangeLog
 ===================================================================
 --- a/ChangeLog
 +++ b/ChangeLog
 @@ -1,3 +1,8 @@
 +2008-08-26  Jan Kiszka  [EMAIL PROTECTED]
 +
 + * ksrc/nucleus/registry.c (xnregistry_fetch): Remove pointless
 + locking.
 +
  2008-08-25  Jan Kiszka  [EMAIL PROTECTED]
  
   * include/asm-generic/wrappers.h: Provide atomic_long wrappings.
 Index: b/ksrc/nucleus/registry.c
 ===================================================================
 --- a/ksrc/nucleus/registry.c
 +++ b/ksrc/nucleus/registry.c
 @@ -1150,28 +1150,16 @@ u_long xnregistry_put(xnhandle_t handle)
  void *xnregistry_fetch(xnhandle_t handle)
  {
   xnobject_t *object;
 - void *objaddr;
 - spl_t s;
  
 - xnlock_get_irqsave(nklock, s);
 -
 - if (handle == XNOBJECT_SELF) {
 - objaddr = xnpod_primary_p()? xnpod_current_thread() : NULL;
 - goto unlock_and_exit;
 - }
 + if (handle == XNOBJECT_SELF)
 + return xnpod_primary_p()? xnpod_current_thread() : NULL;
  
   object = registry_validate(handle);
  
   if (object)
 - objaddr = object->objaddr;
 + return object->objaddr;
   else
 - objaddr = NULL;
 -
 -  unlock_and_exit:
 -
 - xnlock_put_irqrestore(nklock, s);
 -
 - return objaddr;
 + return NULL;
  }
  
  /[EMAIL PROTECTED]/
 


-- 
Philippe.



Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Philippe Gerum
Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 Philippe Gerum wrote:
 Jan Kiszka wrote:
 But then I wonder

  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)

 registry_validate() returns a pointer we want to dereference; we'd better keep
 this unpreemptable, although it's useless for the self-fetching op (which is an
 unused calling mode so far). If using xnregistry_remove() while fetching the
 object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;
 I still don't see the benefit in picking up the object pointer under
 nklock compared to the overhead of acquiring and releasing that lock.
 That's all not truly safe anyway. Even if you ensure that handle and
 object match, someone may change that pair before we can do the lookup
 under nklock.
 Indeed, this is the whole point of the discussion unless I'm mistaken.

 In my understanding, registry slots either contain NULL or a pointer to
 some object. If that object is valid, that is checked _after_ releasing
 the lock, so we are unsafe again, _nothing_ gained.
 Yes, I agree. I was focusing on _put/_get which maintain the refcount and ought
 to be locked, but solely fetching under lock makes no sense. It's probably a
 paranoid surge about having dynamically allocated slots, which won't happen
 anytime soon.
 Then you are fine with this patch?

 Yes, only nitpicking about the else statement, but the logic is ok for me. While
 you are at it, you may also want to move the other self-directed requests out
 of the locked section.
 
 Updated patch below.
 
 Another question came up here: why do we allocate hash table objects
 dynamically instead of putting the holding structure (probably only a
 next pointer) into xnobject_t directly?


A prehistoric requirement that has not made sense for many moons now; at some
point, we had to support an N:1 relationship between keys and objects, and the
key holder once belonged to objhash. This is definitely useless now and should
be simplified.

 BTW, I'm also preparing a patch to overcome hash chain registrations for
 anonymous (handle-only) objects as I need more of them (one for each
 thread) for fast mutexes - to avoid fiddling with kernel pointers in
 userland.

Ok.

 
 Jan
 
 ---
  ChangeLog   |6 ++
  ksrc/nucleus/registry.c |   42 +-
  2 files changed, 19 insertions(+), 29 deletions(-)
 
 Index: b/ChangeLog
 ===================================================================
 --- a/ChangeLog
 +++ b/ChangeLog
 @@ -1,3 +1,9 @@
 +2008-08-26  Jan Kiszka  [EMAIL PROTECTED]
 +
 + * ksrc/nucleus/registry.c (xnregistry_fetch/get/put): Remove
 + pointless locking, move XNOBJECT_SELF lookups out of critical
 + sections.
 +
  2008-08-25  Jan Kiszka  [EMAIL PROTECTED]
  
   * include/asm-generic/wrappers.h: Provide atomic_long wrappings.
 Index: b/ksrc/nucleus/registry.c
 ===================================================================
 --- a/ksrc/nucleus/registry.c
 +++ b/ksrc/nucleus/registry.c
 @@ -1024,16 +1024,14 @@ void *xnregistry_get(xnhandle_t handle)
   void *objaddr;
   spl_t s;
  
 - xnlock_get_irqsave(nklock, s);
 -
   if (handle == XNOBJECT_SELF) {
 - if (!xnpod_primary_p()) {
 - objaddr = NULL;
 - goto unlock_and_exit;
 - }
 + if (!xnpod_primary_p())
 + return NULL;
   handle = xnpod_current_thread()->registry.handle;
   }
  
 + xnlock_get_irqsave(nklock, s);
 +
   object = registry_validate(handle);
  
   if (object) {
 @@ -1087,17 +1085,14 @@ u_long xnregistry_put(xnhandle_t handle)
   u_long newlock;
   spl_t s;
  
 - xnlock_get_irqsave(nklock, s);
 -
   if (handle == XNOBJECT_SELF) {
 - if (!xnpod_primary_p()) {
 - newlock = 0;
 - goto unlock_and_exit;
 - }
 -
 + if (!xnpod_primary_p())
 + return 0;
   handle = xnpod_current_thread()->registry.handle;
   }
  
 + xnlock_get_irqsave(nklock, s);
 +
   object = registry_validate(handle);
  
   if (!object) {
 @@ -1150,28 +1145,17 @@ u_long xnregistry_put(xnhandle_t handle)
  void *xnregistry_fetch(xnhandle_t handle)
  {
   xnobject_t *object;
 - void *objaddr;
 - spl_t s;
  
 - xnlock_get_irqsave(nklock, s);
 -
 - if (handle == XNOBJECT_SELF) {
 - objaddr = xnpod_primary_p()? xnpod_current_thread() : NULL;
 - goto unlock_and_exit;
 - }
 + if (handle == XNOBJECT_SELF)
 + return xnpod_primary_p()? xnpod_current_thread() : NULL;
  
   object = registry_validate(handle);
  
 - if (object)
 - objaddr = object->objaddr;
 - 

Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 BTW, I'm also preparing a patch to overcome hash chain registrations for
 anonymous (handle-only) objects as I need more of them (one for each
 thread) for fast mutexes - to avoid fiddling with kernel pointers in
 userland.

Ok. You will have a problem mangling a registry handle with the claimed
bit. Or maybe we can assume that bit 31 is not used or something?

And by the way, I had an idea of removing the duplication of the owner
field between an xnsynch and a mutex object; this would allow saving a
syscall when mutex stealing occurs. Using a handle would prevent this.
But I am not so sure it is a good idea now (namely because it would move
some of the compare-and-swap-the-owner logic into the xnsynch API).

-- 
 Gilles.



Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 BTW, I'm also preparing a patch to overcome hash chain registrations for
 anonymous (handle-only) objects as I need more of them (one for each
 thread) for fast mutexes - to avoid fiddling with kernel pointers in
 userland.
 Ok. You will have a problem mangling a registry handle with the claimed
 bit. Or maybe we can assume that the bit 31 is not used or something ?
 That's precisely my plan, use the highest bit (2^32 registry slots are
 fairly unlikely even on 64 bit).

 And by the way, I had an idea of removing the duplication of the owner
 field between an xnsynch and a mutex object, this would allow saving a
 syscall when a situation of mutex stealing happened. Using a handle
 would prevent this. But I am not so sure it is a good idea now (namely
 because it would move some compare-and-swap the owner logic to the
 xnsynch API).
 How could you save one of the two syscalls on lock stealing? By
 introducing another bit in the fast lock word that indicates something
 like XNWAKEN? On first sight, this shouldn't require moving everything
 into xnsynch (though generic fast locks were not that bad...) nor
 interfere with handle-based lock words.
 
 The problem is that when the thread that did the stealing unlocks the
 mutex, xnsynch_wakeup_one_sleeper must be called to set the xnsynch
 owner to NULL, so that the robbed thread will succeed in getting the
 xnsynch. But for xnsynch_wakeup_one_sleeper to be called, we must issue
 the syscall. If the owner was shared between the mutex and the xnsynch,
 the owner could be set to NULL by the mutex unlock from user-space.

That is what the sleepers field in the pse51_mutex_t structure is for.

-- 
 Gilles.



Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 BTW, I'm also preparing a patch to overcome hash chain registrations for
 anonymous (handle-only) objects as I need more of them (one for each
 thread) for fast mutexes - to avoid fiddling with kernel pointers in
 userland.
 Ok. You will have a problem mangling a registry handle with the claimed
 bit. Or maybe we can assume that the bit 31 is not used or something ?
 That's precisely my plan, use the highest bit (2^32 registry slots are
 fairly unlikely even on 64 bit).

 And by the way, I had an idea of removing the duplication of the owner
 field between an xnsynch and a mutex object, this would allow saving a
 syscall when a situation of mutex stealing happened. Using a handle
 would prevent this. But I am not so sure it is a good idea now (namely
 because it would move some compare-and-swap the owner logic to the
 xnsynch API).
 How could you save one of the two syscalls on lock stealing? By
 introducing another bit in the fast lock word that indicates something
 like XNWAKEN? On first sight, this shouldn't require moving everything
 into xnsynch (though generic fast locks were not that bad...) nor
 interfere with handle-based lock words.
 The problem is that when the thread that did the stealing unlocks the
 mutex, xnsynch_wakeup_one_sleeper must be called to set the xnsynch
 owner to NULL, so that the robbed thread will succeed in getting the
 xnsynch. But for xnsynch_wakeup_one_sleeper to be called, we must issue
 the syscall. If the owner was shared between the mutex and the xnsynch,
 the owner could be set to NULL by the mutex unlock from user-space.
 
 That is what the sleepers field in the pse51_mutex_t structure is for.
 

OK, this all sounds like fast-lock awareness would have to be taught to
xnsynch first. But that would also open the chance to teach it that
handles are stored inside the lock word, not pointers.

However, that's stuff for a second round. I will have to focus on
getting the native skin straight. Well, maybe this will already
introduce thread handles to the POSIX skin (due to common
__xn_sys_current)...

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux



Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
 OK, this all sounds like fast-lock awareness would have to be taught
 to xnsynch first. But that would also open the chance to teach it
 that handles are stored inside the lock word, not pointers.

That could be done too... But that would be really a critical change,
with updates in all skins.

 However, that's stuff for a second round.

Ok. No problem, but keep the pitfall in mind.

 I will have to focus on getting the native skin straight. Well, maybe
 this will already
 introduce thread handles to the POSIX skin (due to common 
 __xn_sys_current)...

Please, yes, do it, avoid implementing an __xn_sys_current_bis that
would return handles instead of pointers.

-- 
 Gilles.



Re: [Xenomai-core] xnregistry_fetch friends

2008-08-26 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
 Jan Kiszka wrote:
 BTW, I'm also preparing a patch to overcome hash chain registrations for
 anonymous (handle-only) objects as I need more of them (one for each
 thread) for fast mutexes - to avoid fiddling with kernel pointers in
 userland.
 
 Ok. You will have a problem mangling a registry handle with the claimed
 bit. Or maybe we can assume that the bit 31 is not used or something ?

That's precisely my plan, use the highest bit (2^32 registry slots are
fairly unlikely even on 64 bit).

 
 And by the way, I had an idea of removing the duplication of the owner
 field between an xnsynch and a mutex object, this would allow saving a
 syscall when a situation of mutex stealing happened. Using a handle
 would prevent this. But I am not so sure it is a good idea now (namely
 because it would move some compare-and-swap the owner logic to the
 xnsynch API).

How could you save one of the two syscalls on lock stealing? By
introducing another bit in the fast lock word that indicates something
like XNWAKEN? On first sight, this shouldn't require moving everything
into xnsynch (though generic fast locks were not that bad...) nor
interfere with handle-based lock words.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux



[Xenomai-core] xnregistry_fetch friends

2008-08-25 Thread Jan Kiszka
Hi,

trying to select a sane kernel-side lookup scheme for fast native
mutexes, I had a closer look at the registry usage in that skin (and
many others). The typical pattern is

object = xnregistry_fetch(handle);
perform_operation(object);

There is no lock around those two; both services acquire nklock only
internally. So this is a bit racy against concurrent object destruction,
memory release, and object reconstruction. Well, I guess the rationale is:
we test against object magics, and the underlying memory normally does not
vanish (immediately) on destruction, right? That leaves just object
reconstruction. Not a real-life issue?

But then I wonder

 a) why xnregistry_fetch uses nklock at all (even for totally uncritical
XNOBJECT_SELF!)

 b) what the ideas/plans on unused xnregistry_put/get are.

Jan





Re: [Xenomai-core] xnregistry_fetch friends

2008-08-25 Thread Jan Kiszka
Philippe Gerum wrote:
 Jan Kiszka wrote:
 But then I wonder

  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)

 
 registry_validate() returns a pointer we want to dereference; we'd better keep
 this unpreemptable, although it's useless for the self-fetching op (which is an
 unused calling mode so far). If using xnregistry_remove() while fetching the
 object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;

I still don't see the benefit in picking up the object pointer under
nklock compared to the overhead of acquiring and releasing that lock.
That's all not truly safe anyway. Even if you ensure that handle and
object match, someone may change that pair before we can do the lookup
under nklock.

In my understanding, registry slots either contain NULL or a pointer to
some object. If that object is valid, that is checked _after_ releasing
the lock, so we are unsafe again, _nothing_ gained.

Jan





Re: [Xenomai-core] xnregistry_fetch friends

2008-08-25 Thread Philippe Gerum
Jan Kiszka wrote:
 Hi,
 
 trying to select a sane kernel-side lookup scheme for fast native
 mutexes, I had a closer look at the registry usage in that skin (and
 many others). The typical pattern is
 
 object = xnregistry_fetch(handle);
 perform_operation(object);
 
 There is no lock around those two, both services do nklock acquisition
 only internally. So this is a bit racy against concurrent object
 destruction and memory releasing /

Nope.

 object reconstruction.

Yes, and no.

 Well, I guess
 the rationale is: we test against object magics and the underlying memory
 is normally not vanishing (immediately) on destruction, right? 

We don't even care about that. The magic is intentionally garbled under nklock when
the object is freed, so it won't match.

 Remains just object reconstruction. Not a real-life issue?
 

Not for userland code calling syscall wrappers that fetch object addresses from
handles, since we can't lock around code in the application to always make sure
that kernel space will certainly operate on the intended object, I mean, without
explicit care taken at user-space level. What helps is that the registry does
not recycle handle values immediately, which is not 100% reliable if the slot
table is almost full, but still better than a LIFO option.

safe:

If you are paranoid or have a valid case for more safety, call xnregistry_remove_safe()
when deleting the object, along with xnregistry_get/put() to maintain safe
references on it.

 But then I wonder
 
  a) why xnregistry_fetch uses nklock at all (even for totally uncritical
 XNOBJECT_SELF!)
 

registry_validate() returns a pointer we want to dereference; we'd better keep
this unpreemptable, although it's useless for the self-fetching op (which is an
unused calling mode so far). If using xnregistry_remove() while fetching the
object, the worst case is that your action ends up acting upon an object of the
 same type, instead of the initially intended one. If that's a problem, goto safe;

  b) what the ideas/plans on unused xnregistry_put/get are.
 
 Jan
 
 
 
 
 


-- 
Philippe.
