Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Philippe Gerum wrote:
>>> Gilles Chanteperdrix wrote:
>>>> Hi Jan,
>>>>
>>>> Please do not use my address at gmail, gna does not want me to post from
>>>> this address:
>>>>
>>>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>>>
>>>> so, here is a repost of my answer:
>>>>
>>>> Jan Kiszka wrote:
>>>>>> Hi Gilles,
>>>>>>
>>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>>> races against pse51_mutex_destroy_internal?
>>>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>>>> is to catch most of invalid usages, it seems we can not catch them all.
>>>>
>>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>>> queue to find out whether a given object has been initialized and
>>>>>> registered already or not. Can't this be encoded differently?
>>>>>>
>>>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>>>> memory region? Why not using a handle here, like the native skin does?
>>>>>> Won't that allow to resolve the issue above as well?
>>>> This has been so from the beginning, and I did not change it.

>>> To get registry handles, you first need to register objects. The POSIX skin
>>> still does not use the built-in registry, that's why.
>> Well the registry is about associating objects with their name, and
>> since most posix skin objects have no name, I did not see the point of
>> using the registry. And for the named objects, the nucleus registry was
>> not compatible with the posix skin requirements, which is why I did not
>> use it...
> 
> The registry is also about providing user-safe handles for unnamed
> objects - so that you don't risk accepting broken kernel pointers from
> user space.

Yes, and from a security point of view, accepting pointers from
user-space may help an ordinary user become root by passing cleverly
crafted kernel-space addresses.
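
To make that point concrete, here is a minimal, hypothetical sketch (not the actual skin code; all names and values below are made up for illustration) contrasting the two approaches: trusting a kernel address stored in user-reachable memory versus resolving a small integer handle through a kernel-owned table:

/* Hypothetical illustration -- not the actual POSIX skin code. */

#define MUTEX_MAGIC  0x4d555458	/* arbitrary value for the sketch */
#define MAX_MUTEXES  256

struct kmutex {
	unsigned magic;
	/* ... */
};

/* Risky pattern: the shadow structure lives in user-reachable memory and
 * carries a raw kernel pointer, so a malicious caller can overwrite it
 * with a crafted kernel address before issuing the next syscall. */
struct shadow_mutex_unsafe {
	unsigned magic;
	struct kmutex *mutex;		/* kernel pointer, writable by user */
};

/* Safer pattern: user space only ever holds a small handle; the kernel
 * resolves it through a table it owns, so a bogus value can at worst
 * produce a lookup failure, never a wild dereference. */
static struct kmutex *mutex_table[MAX_MUTEXES];

static struct kmutex *lookup_mutex_by_handle(unsigned int handle)
{
	struct kmutex *m;

	if (handle >= MAX_MUTEXES)
		return NULL;

	m = mutex_table[handle];
	if (m == NULL || m->magic != MUTEX_MAGIC)
		return NULL;

	return m;
}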

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> Hi Jan,
>>>
>>> Please do not use my address at gmail, gna does not want me to post from
>>> this address:
>>>
>>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>>
>>> so, here is a repost of my answer:
>>>
>>> Jan Kiszka wrote:
>>>>> Hi Gilles,
>>>>>
>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>> races against pse51_mutex_destroy_internal?
>>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>>> is to catch most of invalid usages, it seems we can not catch them all.
>>>
>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>> queue to find out whether a given object has been initialized and
>>>>> registered already or not. Can't this be encoded differently?
>>>>>
>>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>>> memory region? Why not using a handle here, like the native skin does?
>>>>> Won't that allow to resolve the issue above as well?
>>> This has been so from the beginning, and I did not change it.
>>>
>> To get registry handles, you first need to register objects. The POSIX skin
>> still does not use the built-in registry, that's why.
> 
> Well the registry is about associating objects with their name, and
> since most posix skin objects have no name, I did not see the point of
> using the registry. And for the named objects, the nucleus registry was
> not compatible with the posix skin requirements, which is why I did not
> use it...

The registry is also about providing user-safe handles for unnamed
objects - so that you don't risk accepting broken kernel pointers from
user space.

Jan





Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> Philippe Gerum wrote:
>>> Gilles Chanteperdrix wrote:
>>>> Hi Jan,
>>>>
>>>> Please do not use my address at gmail, gna does not want me to post from
>>>> this address:
>>>>
>>>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>>>
>>>> so, here is a repost of my answer:
>>>>
>>>> Jan Kiszka wrote:
>>>>>> Hi Gilles,
>>>>>>
>>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>>> races against pse51_mutex_destroy_internal?
>>>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>>>> is to catch most of invalid usages, it seems we can not catch them all.
>>>>
>>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>>> queue to find out whether a given object has been initialized and
>>>>>> registered already or not. Can't this be encoded differently?
>>>>>>
>>>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>>>> memory region? Why not using a handle here, like the native skin does?
>>>>>> Won't that allow to resolve the issue above as well?
>>>> This has been so from the beginning, and I did not change it.

>>> To get registry handles, you first need to register objects. The POSIX skin
>>> still does not use the built-in registry, that's why.
>> Well the registry is about associating objects with their name, and
>> since most posix skin objects have no name, I did not see the point of
>> using the registry. And for the named objects, the nucleus registry was
>> not compatible with the posix skin requirements, which is why I did not
>> use it...
>>
> 
> The thing is that, without built-in registry support, you have no /proc export
> of any status data either.

Yes, I did not see the point at the time when the registry was added,
but I do see it now...

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Gilles Chanteperdrix wrote:
>>> Jan Kiszka wrote:
>>>> Gilles Chanteperdrix wrote:
>>>>> Jan Kiszka wrote:
>>>>>> Hi Gilles,
>>>>>>
>>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>>> races against pse51_mutex_destroy_internal?
>>>>>>
>>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>>> queue to find out whether a given object has been initialized and
>>>>>> registered already or not. Can't this be encoded differently?
>>>>> We actually iterate over the queue only if the magic happens to be
>>>>> correct, which is not the common case.
>>>> However, there remains a race window with other threads removing other
>>>> mutex objects in parallel, changing the queue - risking a kernel oops.
>>>> And that is what worries me. It's unlikely, but possible. It's unclean.
>>> Ok. This used to be protected by the nklock. We should add the nklock again.
>> Well I do not think that anyone is rescheduling, so we could probably
>> replace the nklock with a per-kqueue xnlock.
> 
> If nklock or per queue - both will introduce O(n) at least local
> preemption blocking. That's why I was asking for an alternative
> algorithm than iterating over the whole list.

I insist:
- the loop does almost nothing, so n would have to become very large for
it to take a long time; and n is the number of mutexes allocated so far
in one application, or the number of shared mutexes, which is probably
even smaller.
- the loop only runs if the magic happens to be valid, so probably only
if you call pthread_mutex_init twice on the same mutex. The normal
use-case is to allocate mutexes in BSS, so the magic a normal
application presents to pthread_mutex_init is always 0, and you never
enter the loop (see the sketch below).
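
As a rough illustration of that argument (a simplified sketch, not the real pse51_mutex_check_init; the magic name, its value and the helper are illustrative only), the O(n) walk is only reached when the caller-supplied shadow already carries the skin's magic, which a zero-filled BSS object never does:

#include <linux/errno.h>

#define SHADOW_MUTEX_MAGIC  0x4d555458	/* arbitrary value for the sketch */

struct pse51_mutex;			/* opaque here */

struct pse51_shadow_mutex {
	unsigned magic;
	struct pse51_mutex *mutex;
};

/* The O(n) membership walk over mutexq, elided in this sketch. */
extern int mutexq_contains(struct pse51_mutex *mutex);

static int mutex_check_init(struct pse51_shadow_mutex *shadow)
{
	/* Fast path: a shadow sitting in zero-filled BSS (the usual case
	 * for a statically allocated mutex) has magic == 0, so a normal
	 * pthread_mutex_init call returns here without touching mutexq. */
	if (shadow->magic != SHADOW_MUTEX_MAGIC)
		return 0;

	/* Slow path, reached essentially only when pthread_mutex_init is
	 * called twice on the same object: walk the queue to check whether
	 * shadow->mutex is really registered; n is the number of mutexes
	 * the application has created so far. */
	return mutexq_contains(shadow->mutex) ? -EBUSY : 0;
}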

Today, I consider it much more of a problem that I cannot call fork in a
Xenomai application with open file descriptors and then exit the parent
without those descriptors also being closed in the child. That is what I
will spend time fixing first; using the registry in the posix skin will
only come next.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> Hi Jan,
>>>
>>> Please do not use my address at gmail, gna does not want me to post from
>>> this address:
>>>
>>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>>
>>> so, here is a repost of my answer:
>>>
>>> Jan Kiszka wrote:
>>>>> Hi Gilles,
>>>>>
>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>> races against pse51_mutex_destroy_internal?
>>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>>> is to catch most of invalid usages, it seems we can not catch them all.
>>>
>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>> queue to find out whether a given object has been initialized and
>>>>> registered already or not. Can't this be encoded differently?
>>>>>
>>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>>> memory region? Why not using a handle here, like the native skin does?
>>>>> Won't that allow to resolve the issue above as well?
>>> This has been so from the beginning, and I did not change it.
>>>
>> To get registry handles, you first need to register objects. The POSIX skin
>> still does not use the built-in registry, that's why.
> 
> Well the registry is about associating objects with their name, and
> since most posix skin objects have no name, I did not see the point of
> using the registry. And for the named objects, the nucleus registry was
> not compatible with the posix skin requirements, which is why I did not
> use it...
> 

The thing is that, without built-in registry support, you have no /proc export
of any status data either.
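
For context, it is the act of entering an object into the nucleus registry that makes it exportable under /proc/xenomai; roughly along these lines (a sketch from memory -- check include/nucleus/registry.h for the exact prototype and the pnode plumbing):

#include <nucleus/registry.h>

/* Sketch only: registering an object under some key yields a handle
 * and, when a pnode descriptor is passed instead of NULL, a
 * /proc/xenomai entry as well. */
static int export_object(const char *key, void *obj, xnhandle_t *phandle)
{
	return xnregistry_enter(key, obj, phandle, NULL);
}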

-- 
Philippe.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Philippe Gerum wrote:
>> Gilles Chanteperdrix wrote:
>>> Hi Jan,
>>>
>>> Please do not use my address at gmail, gna does not want me to post from
>>> this address:
>>>
>>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>>
>>> so, here is a repost of my answer:
>>>
>>> Jan Kiszka wrote:
>>>>> Hi Gilles,
>>>>>
>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>> races against pse51_mutex_destroy_internal?
>>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>>> is to catch most of invalid usages, it seems we can not catch them all.
>>>
>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>> queue to find out whether a given object has been initialized and
>>>>> registered already or not. Can't this be encoded differently?
>>>>>
>>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>>> memory region? Why not using a handle here, like the native skin does?
>>>>> Won't that allow to resolve the issue above as well?
>>> This has been so from the beginning, and I did not change it.
>>>
>> To get registry handles, you first need to register objects. The POSIX skin
>> still does not use the built-in registry, that's why.
> 
> Well the registry is about associating objects with their name, and
> since most posix skin objects have no name, I did not see the point of
> using the registry. And for the named objects, the nucleus registry was
> not compatible with the posix skin requirements, which is why I did not
> use it...
> 

I understand that, but names could be generated internally from the object
address, or the registry updated to allow hashing bit patterns as well.
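
One way to read the first suggestion (purely illustrative, not an actual patch): derive an internal registry key from the object's kernel address, so that anonymous objects can be registered too:

#include <linux/kernel.h>	/* snprintf */

/* Illustrative only: build an internal registry key for an anonymous
 * object from its address, e.g. "mutex@c12ab3f0". */
static void make_anon_key(char *buf, size_t len,
			  const char *kind, const void *obj)
{
	snprintf(buf, len, "%s@%p", kind, obj);
}

The second suggestion would avoid the string round-trip altogether by letting the registry hash the object address, or any other bit pattern, directly.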

-- 
Philippe.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Philippe Gerum wrote:
> Gilles Chanteperdrix wrote:
>> Hi Jan,
>>
>> Please do not use my address at gmail, gna does not want me to post from
>> this address:
>>
>> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>>   because gmail.com is in a black list at dsn.rfc-ignorant.org
>>
>> so, here is a repost of my answer:
>>
>> Jan Kiszka wrote:
>>>> Hi Gilles,
>>>>
>>>> trying to understand the cb_read/write lock usage, some question came up
>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>> races against pse51_mutex_destroy_internal?
>> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
>> is to catch most of invalid usages, it seems we can not catch them all.
>>
>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>> queue to find out whether a given object has been initialized and
>>>> registered already or not. Can't this be encoded differently?
>>>>
>>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>>> memory region? Why not using a handle here, like the native skin does?
>>>> Won't that allow to resolve the issue above as well?
>> This has been so from the beginning, and I did not change it.
>>
> 
> To get registry handles, you first need to register objects. The POSIX skin
> still does not use the built-in registry, that's why.

Well, the registry is about associating objects with their names, and
since most posix skin objects have no name, I did not see the point of
using the registry. And for the named objects, the nucleus registry was
not compatible with the posix skin requirements, which is why I did not
use it...

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> Gilles Chanteperdrix wrote:
>>>> Jan Kiszka wrote:
>>>>> Hi Gilles,
>>>>>
>>>>> trying to understand the cb_read/write lock usage, some question came up
>>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>>> races against pse51_mutex_destroy_internal?
>>>>>
>>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>>> queue to find out whether a given object has been initialized and
>>>>> registered already or not. Can't this be encoded differently?
>>>> We actually iterate over the queue only if the magic happens to be
>>>> correct, which is not the common case.
>>> However, there remains a race window with other threads removing other
>>> mutex objects in parallel, changing the queue - risking a kernel oops.
>>> And that is what worries me. It's unlikely. but possible. It's unclean.
>> Ok. This used to be protected by the nklock. We should add the nklock again.
> 
> Well I do not think that anyone is rescheduling, so we could probably
> replace the nklock with a per-kqueue xnlock.

Whether nklock or a per-queue lock - both will introduce O(n) blocking of
at least local preemption. That's why I was asking for an alternative
algorithm to iterating over the whole list.

Jan





Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Gilles Chanteperdrix wrote:
>>> Jan Kiszka wrote:
>>>> Hi Gilles,
>>>>
>>>> trying to understand the cb_read/write lock usage, some question came up
>>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>>> races against pse51_mutex_destroy_internal?
>>>>
>>>> If nothing, then I wonder if we actually have to iterate over the whole
>>>> queue to find out whether a given object has been initialized and
>>>> registered already or not. Can't this be encoded differently?
>>> We actually iterate over the queue only if the magic happens to be
>>> correct, which is not the common case.
>> However, there remains a race window with other threads removing other
>> mutex objects in parallel, changing the queue - risking a kernel oops.
>> And that is what worries me. It's unlikely. but possible. It's unclean.
> 
> Ok. This used to be protected by the nklock. We should add the nklock again.

Well I do not think that anyone is rescheduling, so we could probably
replace the nklock with a per-kqueue xnlock.
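
A minimal sketch of what that could look like, assuming a per-kqueue lock field and the nucleus xnlock/xnqueue API (xnlock_get_irqsave, getheadq/nextq; spellings from memory, to be checked against the nucleus headers):

#include <nucleus/queue.h>

struct pse51_kqueues {
	xnlock_t lock;			/* assumed per-queue lock field */
	xnqueue_t mutexq;
	/* ... */
};

/* Sketch only: protect the membership walk with the per-queue lock
 * instead of the global nklock. */
static int mutex_is_queued(struct pse51_kqueues *q, xnholder_t *target)
{
	xnholder_t *holder;
	int found = 0;
	spl_t s;

	xnlock_get_irqsave(&q->lock, s);

	for (holder = getheadq(&q->mutexq); holder;
	     holder = nextq(&q->mutexq, holder))
		if (holder == target) {
			found = 1;
			break;
		}

	xnlock_put_irqrestore(&q->lock, s);

	return found;
}

Whether that is acceptable is exactly Jan's point above: the walk still blocks local preemption for its whole duration, it merely stops contending on the global nklock.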

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> Hi Gilles,
>>>
>>> trying to understand the cb_read/write lock usage, some question came up
>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>> races against pse51_mutex_destroy_internal?
>>>
>>> If nothing, then I wonder if we actually have to iterate over the whole
>>> queue to find out whether a given object has been initialized and
>>> registered already or not. Can't this be encoded differently?
>> We actually iterate over the queue only if the magic happens to be
>> correct, which is not the common case.
> 
> However, there remains a race window with other threads removing other
> mutex objects in parallel, changing the queue - risking a kernel oops.
> And that is what worries me. It's unlikely. but possible. It's unclean.

Ok. This used to be protected by the nklock. We should add the nklock again.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Philippe Gerum
Gilles Chanteperdrix wrote:
> Hi Jan,
> 
> Please do not use my address at gmail, gna does not want me to post from
> this address:
> 
> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>   because gmail.com is in a black list at dsn.rfc-ignorant.org
> 
> so, here is a repost of my answer:
> 
> Jan Kiszka wrote:
>>> Hi Gilles,
>>>
>>> trying to understand the cb_read/write lock usage, some question came up
>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>> races against pse51_mutex_destroy_internal?
> 
> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
> is to catch most of invalid usages, it seems we can not catch them all.
> 
>>> If nothing, then I wonder if we actually have to iterate over the whole
>>> queue to find out whether a given object has been initialized and
>>> registered already or not. Can't this be encoded differently?
>>>
>>> BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
>>> memory region? Why not using a handle here, like the native skin does?
>>> Won't that allow to resolve the issue above as well?
> 
> This has been so from the beginning, and I did not change it.
> 

To get registry handles, you first need to register objects. The POSIX skin
still does not use the built-in registry, that's why.

-- 
Philippe.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Hi Gilles,
>>
>> trying to understand the cb_read/write lock usage, some question came up
>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>> races against pse51_mutex_destroy_internal?
>>
>> If nothing, then I wonder if we actually have to iterate over the whole
>> queue to find out whether a given object has been initialized and
>> registered already or not. Can't this be encoded differently?
> 
> We actually iterate over the queue only if the magic happens to be
> correct, which is not the common case.

However, there remains a race window with other threads removing other
mutex objects in parallel, changing the queue - risking a kernel oops.
And that is what worries me. It's unlikely, but possible. It's unclean.

Jan





Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Hi Gilles,
> 
> trying to understand the cb_read/write lock usage, some question came up
> here: What prevents that the mutexq iteration in pse51_mutex_check_init
> races against pse51_mutex_destroy_internal?
> 
> If nothing, then I wonder if we actually have to iterate over the whole
> queue to find out whether a given object has been initialized and
> registered already or not. Can't this be encoded differently?

We actually iterate over the queue only if the magic happens to be
correct, which is not the common case.

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Gilles Chanteperdrix wrote:
> Hi Jan,
> 
> Please do not use my address at gmail, gna does not want me to post from
> this address:
> 
> 2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
>   R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
>   TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
>   because gmail.com is in a black list at dsn.rfc-ignorant.org
> 
> so, here is a repost of my answer:
> 
> Jan Kiszka wrote:
>>> Hi Gilles,
>>>
>>> trying to understand the cb_read/write lock usage, some question came up
>>> here: What prevents that the mutexq iteration in pse51_mutex_check_init
>>> races against pse51_mutex_destroy_internal?
> 
> Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
> is to catch most of invalid usages, it seems we can not catch them all.

No, it works, because pthread_mutex_destroy will not be able to get the
write lock while the lock is read-locked.
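
In other words (a schematic sketch of the locking argument only, not the skin's actual cb_* helpers): the validation path holds the lock for reading, while the destruction path needs it for writing, so a destroy cannot run underneath a checker:

#include <linux/spinlock.h>	/* rwlock_t, read_lock(), write_lock() */

struct shadow_mutex;			/* opaque here */

static DEFINE_RWLOCK(cb_lock);		/* stand-in for the skin's cb lock */

static int check_mutex(struct shadow_mutex *shadow)
{
	int ret = 0;

	read_lock(&cb_lock);
	/* ... validate the shadow, walk the queue, etc. ... */
	read_unlock(&cb_lock);

	return ret;
}

static void destroy_mutex(struct shadow_mutex *shadow)
{
	write_lock(&cb_lock);
	/* ... cannot start while any reader is inside check_mutex() ... */
	write_unlock(&cb_lock);
}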

-- 
Gilles.



Re: [Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Gilles Chanteperdrix
Hi Jan,

Please do not use my address at gmail, gna does not want me to post from
this address:

2008-08-23 12:10:19 1KWq4T-zD-9E ** xenomai-core@gna.org
  R=dnslookup T=remote_smtp: SMTP error from remote mailer after RCPT
  TO:<[EMAIL PROTECTED]>: host mail.gna.org [88.191.250.46]: 550 rejected
  because gmail.com is in a black list at dsn.rfc-ignorant.org

so, here is a repost of my answer:

Jan Kiszka wrote:
> > Hi Gilles,
> > 
> > trying to understand the cb_read/write lock usage, some question came up
> > here: What prevents that the mutexq iteration in pse51_mutex_check_init
> > races against pse51_mutex_destroy_internal?

Well, I am afraid the mechanism used is not 100% safe. Anyway, the aim
is to catch most invalid usages; it seems we cannot catch them all.

> > 
> > If nothing, then I wonder if we actually have to iterate over the whole
> > queue to find out whether a given object has been initialized and
> > registered already or not. Can't this be encoded differently?
> > 
> > BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
> > memory region? Why not using a handle here, like the native skin does?
> > Won't that allow to resolve the issue above as well?

This has been so from the beginning, and I did not change it.

-- 
Gilles.



[Xenomai-core] Racy pse51_mutex_check_init?

2008-08-23 Thread Jan Kiszka
Hi Gilles,

trying to understand the cb_read/write lock usage, some questions came up
here: What prevents the mutexq iteration in pse51_mutex_check_init from
racing against pse51_mutex_destroy_internal?

If nothing, then I wonder if we actually have to iterate over the whole
queue to find out whether a given object has been initialized and
registered already or not. Can't this be encoded differently?
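
A stripped-down illustration of the suspected problem (not the real pse51 code, just the shape of it): one path walks mutexq with no lock held while another path concurrently unlinks and frees elements, so the walker can end up following a just-freed node:

#include <linux/list.h>
#include <linux/slab.h>

struct kmutex {
	struct list_head link;
	/* ... */
};

static LIST_HEAD(mutexq);

/* Check-init style path: walks the queue looking for the object. */
static int is_registered(struct kmutex *target)
{
	struct kmutex *m;

	list_for_each_entry(m, &mutexq, link)	/* no lock held here */
		if (m == target)
			return 1;

	return 0;
}

/* Destroy style path: unlinks and frees an element.  If it runs while
 * the walker above sits on that element, the walker dereferences freed
 * memory -- the potential oops being asked about. */
static void destroy(struct kmutex *m)
{
	list_del(&m->link);
	kfree(m);
}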

BTW, shadow_mutex.mutex is a kernel pointer sitting in a user-reachable
memory region, isn't it? Why not use a handle here, like the native skin
does? Won't that allow us to resolve the issue above as well?

Jan


