Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-25 Thread Jan Kiszka
Jan Kiszka wrote:
> Hi,
>
> a few changes to the RTDM layer were committed to trunk recently. They
> make handling of RTDM file descriptors more convenient:
>
>  o rt_dev_close/POSIX close now polls as long as the underlying device
>    reports -EAGAIN, so no more looping inside the application is
>    required. This applies to the usual non-RT invocation of close; the
>    corner case of closing from RT context can still return EAGAIN.
>
>  o Automatic cleanup of open file descriptors has been implemented.
>    This is not yet the perfect design (*), but it is a straightforward
>    approach to ease the cleanup after application crashes or other
>    unexpected terminations.

 o Report the file descriptor owner via /proc:

   # cat /proc/xenomai/rtdm/open_fildes
   Index   Locked  Device   Owner [PID]
   0       0       rttest0  latency [973]
   1       0       rtser0   cross-link [981]
   2       0       rtser1   cross-link [981]

Jan





[Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Jan Kiszka
Hi,

a few changes to the RTDM layer were committed to trunk recently. They
make handling of RTDM file descriptors more convenient:

 o rt_dev_close/POSIX close now polls as long as the underlying device
   reports -EAGAIN, so no more looping inside the application is
   required (see the sketch below). This applies to the usual non-RT
   invocation of close; the corner case of closing from RT context can
   still return EAGAIN.

 o Automatic cleanup of open file descriptors has been implemented.
   This is not yet the perfect design (*), but it is a straightforward
   approach to ease the cleanup after application crashes or other
   unexpected terminations.
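
To illustrate the first point: this is roughly the loop applications
had to carry themselves so far and can now drop. A minimal user-space
sketch, not code from the tree; the back-off interval is an arbitrary
example value.

   #include <unistd.h>     /* usleep */
   #include <rtdm/rtdm.h>  /* rt_dev_close */

   /* Former application-side workaround, now handled inside close: */
   static void close_device(int fd)
   {
           int err;

           do {
                   err = rt_dev_close(fd);
                   if (err == -EAGAIN)
                           usleep(100000); /* arbitrary back-off */
           } while (err == -EAGAIN);
   }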

The code is still young, so testers are welcome.

Jan


(*) Actually, I would like to see generic per-process file descriptor
tables one day, used by both the POSIX and the RTDM skins. The FD table
should be obtained via xnshadow_ppd_get(). But first this requires a
lock-less xnshadow_ppd_get() based on ipipe_get_ptd() to keep the
overhead limited. Yet another story.





Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Hi,
>
> a few changes to the RTDM layer were committed to trunk recently. They
> make handling of RTDM file descriptors more convenient:
>
>  o rt_dev_close/POSIX close now polls as long as the underlying device
>    reports -EAGAIN, so no more looping inside the application is
>    required. This applies to the usual non-RT invocation of close; the
>    corner case of closing from RT context can still return EAGAIN.
>
>  o Automatic cleanup of open file descriptors has been implemented.
>    This is not yet the perfect design (*), but it is a straightforward
>    approach to ease the cleanup after application crashes or other
>    unexpected terminations.
>
> The code is still young, so testers are welcome.
>
> Jan
>
>
> (*) Actually, I would like to see generic per-process file descriptor
> tables one day, used by both the POSIX and the RTDM skins. The FD table
> should be obtained via xnshadow_ppd_get().

I agree about the file descriptor table, but I do not see why it should
be bound to xnshadow_ppd_get. The file descriptor table could be
implemented in an object-like fashion, where the caller is responsible
for passing the same pointer to the creation, use and destruction
routines. This would, for example, allow a descriptor table for
kernel-space threads. Another feature that would be interesting for the
posix skin would be a callback invoked at process fork time in order to
duplicate the fd table.
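
As a rough illustration of such an object-style interface (all
xnfdtable_* names below are invented for this sketch; nothing like
this exists in the tree):

   /* Hypothetical sketch. The caller owns the table object and passes
    * it to every routine, so the same type could back per-process
    * tables, one kernel-wide table, or a fork-time duplicate. */
   struct xnfdtable;

   int xnfdtable_init(struct xnfdtable *table, int nr_fildes);
   int xnfdtable_insert(struct xnfdtable *table, void *context); /* -> fd */
   void *xnfdtable_lookup(struct xnfdtable *table, int fd);
   int xnfdtable_remove(struct xnfdtable *table, int fd);
   void xnfdtable_destroy(struct xnfdtable *table);

   /* Fork-time callback duplicating the parent's table: */
   int xnfdtable_clone(struct xnfdtable *child, struct xnfdtable *parent);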


> But first this requires a lock-less xnshadow_ppd_get() based on
> ipipe_get_ptd() to keep the overhead limited. Yet another story.

xnshadow_ppd_get is already lockless; the usual callers have to hold
the nklock for other reasons anyway.

-- 
 Gilles Chanteperdrix



Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> a few changes to the RTDM layer were committed to trunk recently. They
>> make handling of RTDM file descriptors more convenient:
>>
>>  o rt_dev_close/POSIX close now polls as long as the underlying device
>>    reports -EAGAIN, so no more looping inside the application is
>>    required. This applies to the usual non-RT invocation of close; the
>>    corner case of closing from RT context can still return EAGAIN.
>>
>>  o Automatic cleanup of open file descriptors has been implemented.
>>    This is not yet the perfect design (*), but it is a straightforward
>>    approach to ease the cleanup after application crashes or other
>>    unexpected terminations.
>>
>> The code is still young, so testers are welcome.
>>
>> Jan
>>
>>
>> (*) Actually, I would like to see generic per-process file descriptor
>> tables one day, used by both the POSIX and the RTDM skins. The FD table
>> should be obtained via xnshadow_ppd_get().
>
> I agree about the file descriptor table, but I do not see why it should
> be bound to xnshadow_ppd_get. The file descriptor table could be
> implemented in an object-like fashion, where the caller is responsible
> for passing the same pointer to the creation, use and destruction
> routines.

But where do I get this pointer from when I enter, say, rtdm_ioctl on
behalf of some process? The caller just passes an integer, the file
descriptor.

> This would, for example, allow a descriptor table for
> kernel-space threads. Another feature that would be interesting for the

I don't see the need to offer kernel threads private fd tables. They can
perfectly well continue to use a common, then kernel-only table. There
are too few of those threads, and there is no clear concept of a process
boundary in kernel space.

> posix skin would be a callback invoked at process fork time in order to
> duplicate the fd table.

Ack. IIRC, this callback could also serve to solve the only consistency
issue of the ipipe_get_ptd() approach.

>> But first this requires a lock-less xnshadow_ppd_get() based on
>> ipipe_get_ptd() to keep the overhead limited. Yet another story.
>
> xnshadow_ppd_get is already lockless; the usual callers have to hold
> the nklock for other reasons anyway.

OK, that depends on the POV :). Mine is that the related RTDM services
do not hold nklock and will never have to. Moreover, there is no need
for locking design-wise, because per-process data cannot vanish under
the caller unless the caller vanishes. The need currently only comes
from the hashing-based lookup (which reminds me of the WCET issues
kernel futexes have...).
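
For the record, the lookup I have in mind would boil down to something
like this. A sketch only: it assumes the I-pipe ptd-key interface
(ipipe_alloc_ptdkey()/ipipe_set_ptd()/ipipe_get_ptd()) and is not the
actual nucleus code.

   /* O(1) per-process data lookup via an I-pipe per-task key:
    * no hash walk, no nklock. */
   static int ppd_key = -1;

   static int ppd_key_init(void)
   {
           ppd_key = ipipe_alloc_ptdkey();
           return ppd_key < 0 ? -EBUSY : 0;
   }

   /* At skin binding time, on behalf of current:
    *         ipipe_set_ptd(ppd_key, ppd);
    */

   static inline xnshadow_ppd_t *ppd_get_lockless(void)
   {
           return (xnshadow_ppd_t *)ipipe_get_ptd(ppd_key);
   }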

Jan





Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> Jan Kiszka wrote:
>>> [...]
>>>
>>> (*) Actually, I would like to see generic per-process file descriptor
>>> tables one day, used by both the POSIX and the RTDM skins. The FD table
>>> should be obtained via xnshadow_ppd_get().
>>
>> I agree about the file descriptor table, but I do not see why it should
>> be bound to xnshadow_ppd_get. The file descriptor table could be
>> implemented in an object-like fashion, where the caller is responsible
>> for passing the same pointer to the creation, use and destruction
>> routines.
>
> But where do I get this pointer from when I enter, say, rtdm_ioctl on
> behalf of some process? The caller just passes an integer, the file
> descriptor.

Yes, the pointer would be obtained via xnshadow_ppd_get, but it does not
have to be built into the nucleus; this can be done by the skins.

>> This would, for example, allow a descriptor table for
>> kernel-space threads. Another feature that would be interesting for the
>
> I don't see the need to offer kernel threads private fd tables. They can
> perfectly well continue to use a common, then kernel-only table. There
> are too few of those threads, and there is no clear concept of a process
> boundary in kernel space.

I mean having one descriptor table for the kernel space as a whole, but
the kernel-space descriptor table does not have to be of a different
type than the user-space descriptor tables.

>> posix skin would be a callback invoked at process fork time in order to
>> duplicate the fd table.
>
> Ack. IIRC, this callback could also serve to solve the only consistency
> issue of the ipipe_get_ptd() approach.
>
>>> But first this requires a lock-less xnshadow_ppd_get() based on
>>> ipipe_get_ptd() to keep the overhead limited. Yet another story.
>>
>> xnshadow_ppd_get is already lockless; the usual callers have to hold
>> the nklock for other reasons anyway.
>
> OK, that depends on the POV :). Mine is that the related RTDM services
> do not hold nklock and will never have to. Moreover, there is no need
> for locking design-wise, because per-process data cannot vanish under
> the caller unless the caller vanishes. The need currently only comes
> from the hashing-based lookup (which reminds me of the WCET issues
> kernel futexes have...).

I have to take a closer look at the code. But you are right: since the
ppd cannot vanish under our feet, maybe it is possible to call
xnshadow_ppd_get without holding the nklock at all. We only have to
assume that the list manipulation routines never leave the list in an
inconsistent state.

Something else I would like is for the fd table to be bound to the
nucleus registry. This would allow factoring out the registry
implementation.

-- 
 Gilles Chanteperdrix



Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Gilles Chanteperdrix
Jan Kiszka wrote:
> Gilles Chanteperdrix wrote:
>> I have to take a closer look at the code. But you are right: since the
>> ppd cannot vanish under our feet, maybe it is possible to call
>> xnshadow_ppd_get without holding the nklock at all. We only have to
>> assume that the list manipulation routines never leave the list in an
>> inconsistent state.
>
> As long as process A's ppd can take a place in the same list as process
> B's, you need locking (or RCU :-/). That's my point about the hash
> chain approach.
>
> I can only advertise the idea again of maintaining the ppd pointers as
> an I-pipe task_struct key. On fork/clone, you just have to make sure
> that the child either gets a copy of the parent's pointer when it will
> share the mm, or its key is NULL'ified, or automatic Xenomai skin
> binding is triggered to generate a new ppd.

I agree with the idea of the ptd. Nevertheless, I think it is possible
to access an xnqueue in a lockless fashion. Concurrent insertions and
deletions only matter if they take place before (in list order) the
target. When we are walking the list, only the next pointers matter.
Now, if we look at the next pointers in the insertion routine, we see:

   holder->next = head->next;
   head->next = holder;

So maybe we just need to add a compiler barrier, but it looks like we
can never see a wrong pointer when walking the list.
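
Something along these lines, as a hypothetical variant of the
insertion path (smp_wmb() added for illustration; this is not the
current xnqueue code):

   /* Publication-safe insertion: initialize the new holder completely,
    * then issue a write barrier before the single store that makes it
    * reachable, so a lockless reader following next pointers never
    * sees a half-initialized element. */
   static inline void insertq_publish(xnholder_t *head, xnholder_t *holder)
   {
           holder->next = head->next;
           smp_wmb();           /* order initialization before publication */
           head->next = holder; /* now visible to lockless walkers */
   }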

-- 
 Gilles Chanteperdrix



Re: [Xenomai-core] Enhanced RTDM device closure

2007-02-21 Thread Jan Kiszka
Gilles Chanteperdrix wrote:
> Jan Kiszka wrote:
>> Gilles Chanteperdrix wrote:
>>> I have to take a closer look at the code. But you are right: since the
>>> ppd cannot vanish under our feet, maybe it is possible to call
>>> xnshadow_ppd_get without holding the nklock at all. We only have to
>>> assume that the list manipulation routines never leave the list in an
>>> inconsistent state.
>>
>> As long as process A's ppd can take a place in the same list as process
>> B's, you need locking (or RCU :-/). That's my point about the hash
>> chain approach.
>>
>> I can only advertise the idea again of maintaining the ppd pointers as
>> an I-pipe task_struct key. On fork/clone, you just have to make sure
>> that the child either gets a copy of the parent's pointer when it will
>> share the mm, or its key is NULL'ified, or automatic Xenomai skin
>> binding is triggered to generate a new ppd.
>
> I agree with the idea of the ptd. Nevertheless, I think it is possible
> to access an xnqueue in a lockless fashion. Concurrent insertions and
> deletions only matter if they take place before (in list order) the
> target. When we are walking the list, only the next pointers matter.
> Now, if we look at the next pointers in the insertion routine, we see:
>
>    holder->next = head->next;
>    head->next = holder;
>
> So maybe we just need to add a compiler barrier, but it looks like we
> can never see a wrong pointer when walking the list.
 

But not having to walk any chain at all, even a lock-less one, can also
save us from potential cache misses on accessing those memory chunks... :)


