Re: [Xenomai] shared memory compatibility - advice sought

2013-04-16 Thread Michael Haberler

I've gotten the char shm driver to work flawlessly as suggested, and am now 
looking into dumping the rest of the SysV IPC legacy code in the linuxcnc code 
base and replacing it with shm_open/mmap
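For reference, the shm_open/mmap pattern that replaces the SysV calls boils down to something like this minimal user-space sketch (the segment name is made up for illustration):

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create (or attach to) a named POSIX shared memory segment and map it.
 * Returns the mapped address, or MAP_FAILED on error. */
static void *shm_attach(const char *name, size_t size)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return MAP_FAILED;
    if (ftruncate(fd, (off_t)size) < 0) {   /* size the segment */
        close(fd);
        return MAP_FAILED;
    }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping keeps the segment alive */
    return p;
}

/* Round-trip check: write a magic value through one mapping and read it
 * back through a second, independent mapping of the same segment. */
int shm_roundtrip(void)
{
    const char *name = "/shm-compat-demo";  /* illustrative name */
    uint32_t *a = shm_attach(name, 4096);
    uint32_t *b = shm_attach(name, 4096);
    if (a == MAP_FAILED || b == MAP_FAILED)
        return -1;
    a[0] = 0xdeadbeef;
    int ok = (b[0] == 0xdeadbeef);
    munmap(a, 4096);
    munmap(b, 4096);
    shm_unlink(name);
    return ok;
}
```

Unlike shmget/shmat, the same three calls (shm_open/ftruncate/mmap) work from either side regardless of which process created the segment first.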

reading up on the Xenomai Posix skin a bit late (ahem), it occurs to me the best 
solution would have been to use the Xenomai Posix skin throughout, kernel and 
userland, dump my little driver, and be done with it

the catch, of course, is that the Xenomai Posix skin is available on, well, 
Xenomai only, which leaves out the RTAI and vanilla kernel cases, and any 
other flavor downstream

reading up on ksrc/skins/posix/shm.c, it occurs to me it isn't exactly a copy & 
paste job getting that to run without the Xenomai environment

so I'm really just asking so I don't write off an option prematurely... this is 
wacky, right?

- Michael


Am 11.04.2013 um 21:28 schrieb Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org:

 Of course, this is where the down-side is, you will have to maintain
 your little kernel module with kernel version changes.
..
 
 
 -- 
Gilles.


___
Xenomai mailing list
Xenomai@xenomai.org
http://www.xenomai.org/mailman/listinfo/xenomai


Re: [Xenomai] shared memory compatibility - advice sought

2013-04-16 Thread Michael Haberler
Gilles,

Am 16.04.2013 um 20:08 schrieb Gilles Chanteperdrix 
gilles.chanteperd...@xenomai.org:

 On 04/16/2013 02:59 PM, Michael Haberler wrote:
...
 reading up on ksrc/skins/posix/shm.c, it occurs to me it isn't exactly
 a copy & paste job getting that to run without the Xenomai
 environment
 
 
 The reason is that the posix shm interface is not exactly simple. By
 defining your own API, you can do something much simpler. You then
 implement the API directly in kernel-space by returning the memory,
 and in user-space by using the ioctls provided by the driver.

I felt so... writing off a wild fantasy.

The current approach works perfectly fine and already supports clean regression 
test runs on RTAI kernel threads, as well as Xenomai kernel and userland RT 
threads. The rest is harmless.

Thanks a lot for the excellent advice!

- Michael


 
 
 -- 
Gilles.




Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Gilles Chanteperdrix
On 04/11/2013 02:42 AM, Gilles Chanteperdrix wrote:

 Let us talk concretely. If I assume you want to share the same piece of
 memory in all the cases (which I originally did not). You would have a
 common kernel module able to allocate a piece of memory and associate it
 with an identifier (a string, à la shm_open/sem_open, for instance).
 
 Then an rtdm module, with an ioctl allowing to retrieve that piece of
 memory or allocate it given the id, and if called from user-space use
 rtdm_mmap_to_user to put it in the process address-space, if called from
 kernel-space, return the memory directly. The same RTDM code can be
 compiled both for RTAI and Xenomai and covers 4 cases. And RTDM drivers
 can be called from kernel space as well as from user-space, if I
 remember correctly.
 
 Then another linux module, with an ioctl and an mmap call allowing to
 retrieve the same piece of memory with the same ID and map it in the
 process user-space.


That is still way too complicated. The only thing you need is a plain
Linux character device driver, with an ioctl and mmap method allowing to
map some kernel memory in user-space, and a kernel-space API giving access
to the kernel memory.

The kernel API would be used in the module initialization/cleanup
routines of your kernel mode applications. The user API would be used by
the non real-time part initialization of user applications in any mode.

As a starting point, you can have a look at the linux device drivers
book examples.
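For illustration, the heart of such a driver might look roughly like this (a kernel-side sketch in the spirit of the LDD examples; untested, with the ioctl plumbing and error handling elided, and all names invented):

```c
/* Sketch only: mmap method of a plain Linux character device that
 * exports a page-aligned kmalloc'd buffer to user-space.
 * (Needs <linux/fs.h>, <linux/mm.h>, <asm/io.h>.) */
static void *shmem;            /* allocated at module init, page-aligned */
static size_t shmem_size;

static int shm_dev_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long len = vma->vm_end - vma->vm_start;

    if (len > shmem_size)
        return -EINVAL;
    /* map the physical pages backing the buffer into the caller */
    return remap_pfn_range(vma, vma->vm_start,
                           virt_to_phys(shmem) >> PAGE_SHIFT,
                           len, vma->vm_page_prot);
}
```

Kernel-mode applications would reach shmem through an exported function; user-space applications open the device, select a segment via ioctl, and mmap() it.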

-- 
Gilles.



Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Michael Haberler
Gilles,

I think I see the light although I'm not totally clear on all details yet:

Am 11.04.2013 um 02:42 schrieb Gilles Chanteperdrix:

 On 04/11/2013 01:39 AM, Michael Haberler wrote:
 
 Gilles,
 
 thank you for your detailed answer.
 
 I'll concentrate on the RTDM suggestion because I'm after a long-term
 stable solution, and also because I feel RTAI needs a similar
 approach
 
 please let me make sure I fully understand your suggestions, see
 below inline:
 
 Am 11.04.2013 um 00:31 schrieb Gilles Chanteperdrix:
 
 On 04/10/2013 11:24 PM, Michael Haberler wrote:
 
 I am building an RT application which is portable across RTAI, 
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and 
 vanilla kernels (modulo timing restrictions). The xenomai kernel 
 threads build is on a deprecation path but built for coverage
 reasons atm.
 
 The application already supports several instances on one 
 machine; for instance, one instance could be Xenomai kernel
 threads, a second one Xenomai user threads, a third one Posix
 threads (that's an example and doesn't make sense, just pointing
 out what's possible).
 
 The userland threads instances use sysvipc shm; the RTAI instance
 uses rtai_malloc/rtai_kmalloc; the Xenomai kernel instance uses 
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access to shared memory
 between instances and that's where I don't know how to proceed -
 the issues I have are:
 
 - incompatible shared memory models between RTAI, Xenomai and 
 shmctl() - sequencing imposed by kernel threads models - shared 
 memory must be created in-kernel and can be attached to in userland
 but not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared
 memory created in userland by shmctl(2) or mmap for that matter
 
 
 RTDM skin (the future-proof way): rtdm_mmap_to_user will allow you
 to map a piece of memory (obtained for instance with kmalloc or
 even vmalloc in kernel-space), in a process user-space. You have to
 devise the interactions between user and kernel-space through a
 driver if you want the user-space to seem to do the allocation
 first.
 
 I understand this to mean:
 
 - in case the application runs on Xenomai, either thread style
 (xenomai user, xenomai kernel, posix): - in this case shared memory
 creation and attaching would go through a xenomai-dependent layer
 handled by an RTDM device driver - this driver can allocate memory
 (or return a reference to an existing memory area) and return a
 handle which can be used in userland similar to a sysvipc segment
 
 
 I would forget about sysv ipcs, and think more POSIX.

that's the legacy code I inherited, but not a big change to adapt.

 
 - it
 would provide the same function to kernel threads modules wishing to
 attach a shared memory segment - whoever initiated its creation
 
 Does this sound about right?
 
 
 Let us talk concretely. If I assume you want to share the same piece of
 memory in all the cases (which I originally did not).

yes, that is the requirement

 You would have a
 common kernel module able to allocate a piece of memory and associate it
 with an identifier (a string, à la shm_open/sem_open, for instance).
 
fine (we're using instance id/shm id tuples but conceptually similar)
 
 Then an rtdm module, with an ioctl allowing to retrieve that piece of
 memory or allocate it given the id, and if called from user-space use
 rtdm_mmap_to_user to put it in the process address-space, if called from
 kernel-space, return the memory directly. The same RTDM code can be
 compiled both for RTAI and Xenomai and covers 4 cases. And RTDM drivers
 can be called from kernel space as well as from user-space, if I
 remember correctly.

I understand this is more or less the old captain.at RTDM driver modified to 
use the module above instead of directly doing a kmalloc() (it seems to have 
vanished from the Internet-accessible earth but I think I have collected the 
pieces needed to resurrect it)

are the workarounds mentioned here still needed? 
http://www.xenomai.org/pipermail/xenomai/2008-October/014958.html


 Then another linux module, with an ioctl and an mmap call allowing to
 retrieve the same piece of memory with the same ID and map it in the
 process user-space.

the point where I am lost is this one, and the suggestion you state at the 
bottom:

 the common API already exists and is POSIX, simply use POSIX, and you do not 
 need a useless abstraction layer.

are you suggesting this array of modules will plug underneath the userland 
Posix shared memory routines unchanged?

 
 
 I can imagine funneling all shm-type calls through say a shared
 object which is dlopen'd by the using layer after autodetection of
 the running environment; that's a vehicle we've been using successfully
 so far
 
 
 I am not sure I understand why you need that level of complication. What
 you can dlopen is simply something which defines wrappers for posix
 services, ioctl, 

Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Gilles Chanteperdrix
On 04/11/2013 09:00 AM, Michael Haberler wrote:

 Gilles,
 
 I think I see the light although I'm not totally clear on all details yet:


There is a very simple solution, see my other mail:
http://www.xenomai.org/pipermail/xenomai/2013-April/028169.html

-- 
Gilles.



Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Michael Haberler

Am 11.04.2013 um 09:05 schrieb Gilles Chanteperdrix:

 On 04/11/2013 09:00 AM, Michael Haberler wrote:
 
 Gilles,
 
 I think I see the light although I'm not totally clear on all details yet:
 
 
 There is a very simple solution, see my other mail:
 http://www.xenomai.org/pipermail/xenomai/2013-April/028169.html

Well, 'simple' is just my kind of keyword, that one I'd manage 

I understand there are no dumb questions but rather inquisitive idiots, so let 
me rattle off my remaining topics in advance:

- I read this to mean: forget about rt_heap* and rtai_(k)malloc altogether: if 
that is the case - any downsides in the crystal ball?
- if it is that simple, why isn't everybody using this scheme over rt_heap_* and 
rtai_(k)malloc and friends, which have the sequencing restriction?


The overall approach looks a bit like this project: 
http://sourceforge.net/projects/mbuff/ and the author mentions an interesting 
point in the FAQ file:

 WARNING
 
 All versions of mbuff have a known bug occurring when a program with mapped
 areas forks. Do not do it for now. Attach to shared memory areas after 
 the fork in parent and child if necessary.

I do for now take that as a restriction which can be dealt with if it is known 
to exist, even if I don't fully understand where it comes from; an old code 
remark in the RTAI userland workaround in LinuxCNC hints that RTAI 
rtai_malloc()'d memory suffers, or suffered, from a similar issue


- Michael


 
 -- 
Gilles.




Re: [Xenomai] shared memory compatibility - advice sought

2013-04-11 Thread Gilles Chanteperdrix
On 04/11/2013 10:35 AM, Michael Haberler wrote:

 
 Am 11.04.2013 um 09:05 schrieb Gilles Chanteperdrix:
 
 On 04/11/2013 09:00 AM, Michael Haberler wrote:

 Gilles,

 I think I see the light although I'm not totally clear on all details yet:


 There is a very simple solution, see my other mail:
 http://www.xenomai.org/pipermail/xenomai/2013-April/028169.html
 
 Well, 'simple' is just my kind of keyword, that one I'd manage 
 
 I understand there are no dumb questions but rather inquisitive idiots, so 
 let me rattle off my remaining topics in advance:
 
 - I read this to mean: forget about rt_heap* and rtai_(k)malloc altogether: 
 if that is the case - any downsides in the crystal ball?
 - if it is that simple, why isn't everybody using this scheme over rt_heap_* 
 and rtai_(k)malloc and friends, which have the sequencing restriction?


Well, that is the way Xenomai shared memories are implemented, and RTAI
too I guess. The Linux native shared memories are implemented
differently, but rely on mmap as well. The point is that instead of
using three different implementations of the same thing which cannot
communicate with each other, and putting an abstraction layer on top to
try to get them to communicate, it may be simpler to provide a fourth
implementation which you know will always be available in all the cases
you describe. What makes the Xenomai implementation a little more
complicated is the part about supporting many kernel API revisions; the
current version of Xenomai still compiles with 2.4 kernels.
Obviously if you do this now you will not have this issue, and the
kernel API for mmap implementation is unlikely to change as much now as
it has changed in the past.

Of course, this is where the down-side is, you will have to maintain
your little kernel module with kernel version changes.

 
 
 The overall approach looks a bit like this project: 
 http://sourceforge.net/projects/mbuff/ and the author mentions an interesting 
 point in the FAQ file:
 
 WARNING

 All versions of mbuff have a known bug occurring when a program with mapped
 areas forks. Do not do it for now. Attach to shared memory areas after 
 the fork in parent and child if necessary.
 
 I do for now take that as a restriction which can be dealt with if it is 
 known to exist, even if I don't fully understand where it comes from; an old 
 code remark in the RTAI userland workaround in LinuxCNC hints that RTAI 
 rtai_malloc()'d memory suffers, or suffered, from a similar issue


I would say have a look at the Linux Device Drivers book's chapter about
mmap; maybe the answer is there.


-- 
Gilles.



Re: [Xenomai] shared memory compatibility - advice sought

2013-04-10 Thread Gilles Chanteperdrix
On 04/10/2013 11:24 PM, Michael Haberler wrote:

 I am building an RT application which is portable across RTAI,
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and
 vanilla kernels (modulo timing restrictions). The xenomai kernel
 threads build is on a deprecation path but built for coverage reasons
 atm.
 
 The application already supports several instances on one
 machine; for instance, one instance could be Xenomai kernel threads, a
 second one Xenomai user threads, a third one Posix threads (that's an
 example and doesn't make sense, just pointing out what's possible).
 
 The userland threads instances use sysvipc shm; the RTAI instance uses
 rtai_malloc/rtai_kmalloc; the Xenomai kernel instance uses
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access to shared memory between
 instances and that's where I don't know how to proceed - the issues I
 have are:
 
 - incompatible shared memory models between RTAI, Xenomai and
 shmctl() - sequencing imposed by kernel threads models - shared
 memory must be created in-kernel and can be attached to in userland but
 not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared memory
 created in userland by shmctl(2) or mmap for that matter


RTDM skin (the future-proof way):
rtdm_mmap_to_user will allow you to map a piece of memory (obtained for
instance with kmalloc or even vmalloc in kernel-space), in a process
user-space. You have to devise the interactions between user and
kernel-space through a driver if you want the user-space to seem to do
the allocation first.
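In code, the RTDM path might look roughly like this (a sketch against the Xenomai 2.x-era RTDM API; SHM_IOC_MAP, shmem, and shmem_size are invented names, and the exact rtdm_mmap_to_user signature should be checked against the RTDM documentation):

```c
/* Sketch of an RTDM ioctl handler that hands a kernel buffer to the
 * caller: mapped via rtdm_mmap_to_user when called from user-space,
 * returned directly when called from kernel-space.  Untested. */
static int shm_ioctl(struct rtdm_dev_context *context,
                     rtdm_user_info_t *user_info,
                     unsigned int request, void *arg)
{
    void *mapped;

    switch (request) {
    case SHM_IOC_MAP:              /* hypothetical ioctl number */
        if (user_info) {           /* user-space caller: map the buffer */
            int err = rtdm_mmap_to_user(user_info, shmem, shmem_size,
                                        PROT_READ | PROT_WRITE,
                                        &mapped, NULL, NULL);
            if (err < 0)
                return err;
            return rtdm_safe_copy_to_user(user_info, arg, &mapped,
                                          sizeof(mapped));
        }
        /* kernel-space caller: hand back the buffer directly */
        *(void **)arg = shmem;
        return 0;
    }
    return -ENOTTY;
}
```

The user_info argument being non-NULL is how an RTDM handler distinguishes a user-space caller from a kernel-space one, which is what lets one driver cover both cases.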


Posix skin (probably the easiest but deprecated way):
The Xenomai posix skin shared memories are useful expressly for that
(corner) case, which is the reason you will find them usually disabled
in your kernel configuration. If you enable them, the API is the POSIX
shared memory API, that is shm_open/ftruncate/mmap. The first shm_open
with O_CREAT creates the shared memory, whether in kernel-space or
user-space.

Note that if you need to share mutexes or semaphores between kernel and
user-space, the anonymous sem_t and pthread_mutex_t you put on the
shared memory can be shared too. You can also use named semaphores
(sem_open).

At the time when you drop kernel-space applications (as opposed to
drivers), you disable the posix skin shared memory option in kernel
configuration, and Xenomai posix skin user-space threads will use Linux
regular shared memory, without even needing a recompilation of the
application.


Native skin (another a bit less easy deprecated way):
In the same vein, rt_heap_create can be used both from kernel and
user-space, and will create a shared memory if you pass the H_SHARED
parameter. rt_heap_bind can only be called from user-space, so, if you
want to seem to create the heap in user-space, you have to devise
interactions between kernel and user most probably through an RTDM driver.
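A sketch of that kernel-creates/user-binds sequencing with the native skin (untested; error handling trimmed, and the "whole heap as a single block" behavior of H_SHARED mode should be checked against the native skin documentation):

```c
#include <native/heap.h>   /* Xenomai 2.x native skin */

static RT_HEAP heap;

/* kernel side: create the shared heap and get at its memory */
int kernel_side_init(void)
{
    void *mem;
    int err = rt_heap_create(&heap, "app-shm", 65536, H_SHARED);
    if (err)
        return err;
    /* in shared (single-block) mode, allocating returns the whole area */
    return rt_heap_alloc(&heap, 0, TM_INFINITE, &mem);
}

/* user side: bind to the same heap by name and map it */
static RT_HEAP uheap;

int user_side_attach(void **mem)
{
    int err = rt_heap_bind(&uheap, "app-shm", TM_INFINITE);
    if (err)
        return err;
    return rt_heap_alloc(&uheap, 0, TM_INFINITE, mem);
}
```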


Note 1: the POSIX API is available in all the cases you want to cover,
maybe the shm_open/ftruncate/mmap API is missing in RTAI case, but you
would probably be better off implementing this missing support by wrapping
existing RTAI shared memory services, rather than inventing an
abstraction layer above all the cases. But I may be biased because I
wrote part of the POSIX API in Xenomai.

Note 2: Xenomai takes care of the architecture dependent troubles you
get by sharing memory between kernel-space and user-space on an ARM
processor with VIVT cache.

-- 
Gilles.



Re: [Xenomai] shared memory compatibility - advice sought

2013-04-10 Thread Michael Haberler
Gilles,

thank you for your detailed answer.

I'll concentrate on the RTDM suggestion because I'm after a long-term stable 
solution, and also because I feel RTAI needs a similar approach

please let me make sure I fully understand your suggestions, see below inline:

Am 11.04.2013 um 00:31 schrieb Gilles Chanteperdrix:

 On 04/10/2013 11:24 PM, Michael Haberler wrote:
 
 I am building an RT application which is portable across RTAI,
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and
 vanilla kernels (modulo timing restrictions). The xenomai kernel
 threads build is on a deprecation path but built for coverage reasons
 atm.
 
 The application already supports several instances on one
 machine; for instance, one instance could be Xenomai kernel threads, a
 second one Xenomai user threads, a third one Posix threads (that's an
 example and doesn't make sense, just pointing out what's possible).
 
 The userland threads instances use sysvipc shm; the RTAI instance uses
 rtai_malloc/rtai_kmalloc; the Xenomai kernel instance uses
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access to shared memory between
 instances and that's where I don't know how to proceed - the issues I
 have are:
 
 - incompatible shared memory models between RTAI, Xenomai and
 shmctl() - sequencing imposed by kernel threads models - shared
 memory must be created in-kernel and can be attached to in userland but
 not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared memory
 created in userland by shmctl(2) or mmap for that matter
 
 
 RTDM skin (the future-proof way):
 rtdm_mmap_to_user will allow you to map a piece of memory (obtained for
 instance with kmalloc or even vmalloc in kernel-space), in a process
 user-space. You have to devise the interactions between user and
 kernel-space through a driver if you want the user-space to seem to do
 the allocation first.

I understand this to mean:

- in case the application runs on Xenomai, either thread style (xenomai user, 
xenomai kernel, posix):
- in this case shared memory creation and attaching would go through a 
xenomai-dependent layer handled by an RTDM device driver 
- this driver can allocate memory (or return a reference to an existing memory 
area) and return a handle which can be used in userland similar to a sysvipc 
segment
- it would provide the same function to kernel threads modules wishing to 
attach a shared memory segment - whoever initiated its creation

Does this sound about right?

I can imagine funneling all shm-type calls through say a shared object which is 
dlopen'd by the using layer after autodetection of the running environment; 
that's a vehicle we've been using successfully so far

in the xenomai case the shm functions in this object would go through the steps 
outlined above; in the userland/sysvipc/vanilla kernel case they would just use 
sysvipc shm and no kernel driver

I know I'm barking up the wrong tree here but you mentioned wrapping RTAI shm 
services:

do you suggest making RTAI shm work with the RTDM model (I'm fuzzy on how that 
would work) or is it a vanilla device driver approach you're suggesting?

I assume in the RTAI case there would need to be a similar driver to do 
rtai_kmalloc() in-kernel and eventually rtai_malloc() when returned on behalf 
of the using layer

I might be overlooking something obvious, but atm I see this panning out to 
virtualizing the shared memory creation/attachment layer across platforms - 
that's fine, Private Haberler just needs to understand the General's commands ;)


- Michael


 Posix skin (probably the easiest but deprecated way):
 The Xenomai posix skin shared memories are useful expressly for that
 (corner) case, which is the reason you will find them usually disabled
 in your kernel configuration. If you enable them, the API is the POSIX
 shared memory API, that is shm_open/ftruncate/mmap. The first shm_open
 with O_CREAT creates the shared memory, whether in kernel-space or
 user-space.
 
 Note that if you need to share mutexes or semaphores between kernel and
 user-space, the anonymous sem_t and pthread_mutex_t you put on the
 shared memory can be shared too. You can also use named semaphores
 (sem_open).
 
 At the time when you drop kernel-space applications (as opposed to
 drivers), you disable the posix skin shared memory option in kernel
 configuration, and Xenomai posix skin user-space threads will use Linux
 regular shared memory, without even needing a recompilation of the
 application.
 
 
 Native skin (another a bit less easy deprecated way):
 In the same vein, rt_heap_create can be used both from kernel and
 user-space, and will create a shared memory if you pass the H_SHARED
 parameter. rt_heap_bind can only be called from user-space, so, if you
 want to seem to create the heap in user-space, you have to devise
 interactions between kernel and user most probably through an RTDM driver.
 
 
 Note 1: the 

Re: [Xenomai] shared memory compatibility - advice sought

2013-04-10 Thread Gilles Chanteperdrix
On 04/11/2013 01:39 AM, Michael Haberler wrote:

 Gilles,
 
 thank you for your detailed answer.
 
 I'll concentrate on the RTDM suggestion because I'm after a long-term
 stable solution, and also because I feel RTAI needs a similar
 approach
 
 please let me make sure I fully understand your suggestions, see
 below inline:
 
 Am 11.04.2013 um 00:31 schrieb Gilles Chanteperdrix:
 
 On 04/10/2013 11:24 PM, Michael Haberler wrote:
 
 I am building an RT application which is portable across RTAI, 
 Xenomai/userland threads, Xenomai/kernel threads, RT-preempt and 
 vanilla kernels (modulo timing restrictions). The xenomai kernel 
 threads build is on a deprecation path but built for coverage
 reasons atm.
 
 The application already supports several instances on one 
 machine; for instance, one instance could be Xenomai kernel
 threads, a second one Xenomai user threads, a third one Posix
 threads (that's an example and doesn't make sense, just pointing
 out what's possible).
 
 The userland threads instances use sysvipc shm; the RTAI instance
 uses rtai_malloc/rtai_kmalloc; the Xenomai kernel instance uses 
 rt_heap_create/rt_heap_alloc.
 
 --
 
 A requirement has come up to enable access to shared memory
 between instances and that's where I don't know how to proceed -
 the issues I have are:
 
 - incompatible shared memory models between RTAI, Xenomai and 
 shmctl() - sequencing imposed by kernel threads models - shared 
 memory must be created in-kernel and can be attached to in userland
 but not vice versa
 
 I know it is a faint hope, let me try nevertheless:
 
 - is there a way to make Xenomai kernel threads use shared
 memory created in userland by shmctl(2) or mmap for that matter
 
 
 RTDM skin (the future-proof way): rtdm_mmap_to_user will allow you
 to map a piece of memory (obtained for instance with kmalloc or
 even vmalloc in kernel-space), in a process user-space. You have to
 devise the interactions between user and kernel-space through a
 driver if you want the user-space to seem to do the allocation
 first.
 
 I understand this to mean:
 
 - in case the application runs on Xenomai, either thread style
 (xenomai user, xenomai kernel, posix): - in this case shared memory
 creation and attaching would go through a xenomai-dependent layer
 handled by an RTDM device driver - this driver can allocate memory
 (or return a reference to an existing memory area) and return a
 handle which can be used in userland similar to a sysvipc segment


I would forget about sysv ipcs, and think more POSIX.

 - it
 would provide the same function to kernel threads modules wishing to
 attach a shared memory segment - whoever initiated its creation
 
 Does this sound about right?


Let us talk concretely. If I assume you want to share the same piece of
memory in all the cases (which I originally did not). You would have a
common kernel module able to allocate a piece of memory and associate it
with an identifier (a string, à la shm_open/sem_open, for instance).

Then an rtdm module, with an ioctl allowing to retrieve that piece of
memory or allocate it given the id, and if called from user-space use
rtdm_mmap_to_user to put it in the process address-space, if called from
kernel-space, return the memory directly. The same RTDM code can be
compiled both for RTAI and Xenomai and covers 4 cases. And RTDM drivers
can be called from kernel space as well as from user-space, if I
remember correctly.

Then another linux module, with an ioctl and an mmap call allowing to
retrieve the same piece of memory with the same ID and map it in the
process user-space.
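The glue between the three modules would be a small ID-based interface shared by the plain-Linux and RTDM front-ends; a hypothetical common header might look like this (all names and numbers invented for illustration):

```c
/* Hypothetical shared header: one create-or-attach-by-name API exposed
 * by both the plain Linux char device and the RTDM device. */
#include <stddef.h>
#include <sys/ioctl.h>

#define SHMDRV_MAX_NAME 64

struct shmdrv_ioc {
    char   name[SHMDRV_MAX_NAME]; /* identifier, a la shm_open */
    size_t size;                  /* in: create size, out: actual size */
};

#define SHMDRV_MAGIC  'S'
/* create-or-attach a segment by name; follow up with mmap() */
#define SHMDRV_ATTACH _IOWR(SHMDRV_MAGIC, 0, struct shmdrv_ioc)
/* drop a reference to a segment */
#define SHMDRV_DETACH _IOW(SHMDRV_MAGIC, 1, struct shmdrv_ioc)
```

Both front-ends resolve the name through the common allocator module, so a segment created from kernel space on one skin is reachable from user space on another.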

 
 I can imagine funneling all shm-type calls through say a shared
 object which is dlopen'd by the using layer after autodetection of
 the running environment; that's a vehicle we've been using successfully
 so far


I am not sure I understand why you need that level of complication. What
you can dlopen is simply something which defines wrappers for posix
services, ioctl, mmap, and simply use libpthread_rt.so when a xenomai
user-space application is wanted.
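The dlopen'd-wrapper idea in miniature: resolve the implementation at run time instead of linking it in. The sketch below demonstrates the mechanism with a libm symbol; a real build would dlopen a platform-specific wrapper library instead (e.g. one binding the posix calls to libpthread_rt.so on Xenomai):

```c
#include <dlfcn.h>

typedef double (*unary_fn)(double);

/* Load a library at run time and call one of its functions by name.
 * Returns -1.0 on any lookup failure (fine for this demo; real code
 * would report dlerror()). */
double call_via_dlopen(const char *lib, const char *sym, double x)
{
    void *h = dlopen(lib, RTLD_NOW);
    if (!h)
        return -1.0;
    unary_fn fn = (unary_fn)dlsym(h, sym);
    double r = fn ? fn(x) : -1.0;
    dlclose(h);
    return r;
}
```

The application calls one fixed set of wrapper symbols; which backend those symbols come from is decided once, after environment autodetection.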

 
 in the xenomai case the shm functions in this object would go through
 the steps outlined above; in the userland/sysvipc/vanilla kernel case
 they would just use sysvipc shm and no kernel driver
 
 I know I'm barking up the wrong tree here but you mentioned wrapping
 RTAI shm services:
 
 do you suggest making RTAI shm work with the RTDM model (I'm fuzzy
 on how that would work) or is it a vanilla device driver approach you're
 suggesting?


RTAI definitely has RTDM, but I do not know how far the integration goes
and if rtdm_mmap_to_user is available. If it is not available it seems
simple to add it.

 
 I assume in the RTAI case there would need to be a similar driver to
 do rtai_kmalloc() in-kernel and eventually rtai_malloc() when
 returned on behalf of the using layer
 
 I might be overlooking something obvious, but atm I see this panning
 out to virtualizing the shared memory creation/attachment layer
 across platforms - that's fine,