Re: [Xenomai-core] [Xenomai-help] RTIPC protocol driver set

2009-09-11 Thread Philippe Gerum
On Fri, 2009-09-11 at 17:25 +0200, Philippe Gerum wrote:
> * IDDP stands for "intra-domain datagram protocol", i.e. a
> Xenomai-to-Xenomai real-time datagram channel. This protocol may not be
> as flexible as POSIX message queues (does not support message priority
> but does out-of-bound sending though)

Oops, I did it again. Please read "out-of-band".

-- 
Philippe.





Re: [Xenomai-core] [PATCH] rtcan: Add support for CAN PCI cards from ESD

2009-09-11 Thread Philippe Gerum
On Tue, 2009-09-08 at 16:39 +0200, Sebastian Smolorz wrote:
> This patch adds support for SJA1000 based PCI CAN interface cards from 
> electronic system design gmbh.
> 
> The following list of boards are supported:
> 
> CAN-PCI/200 (tested)
> CAN-PCI/266
> CAN-PMC266
> CAN-PCIe/2000
> CAN-CPCI/200
> CAN-PCI104
> 
> The patch is based on the Socket-CAN driver for those boards by Matthias 
> Fuchs.
> 
> Signed-off-by: Sebastian Smolorz 
> Acked-by: Wolfgang Grandegger 

Merged, thanks.

-- 
Philippe.





Re: [Xenomai-core] [PATCH] rtcan: Add support for CAN PCI cards from ESD

2009-09-11 Thread Wolfgang Grandegger
Sebastian Smolorz wrote:
> This patch adds support for SJA1000 based PCI CAN interface cards from 
> electronic system design gmbh.
> 
> The following list of boards are supported:
> 
> CAN-PCI/200 (tested)
> CAN-PCI/266
> CAN-PMC266
> CAN-PCIe/2000
> CAN-CPCI/200
> CAN-PCI104
> 
> The patch is based on the Socket-CAN driver for those boards by Matthias 
> Fuchs.
> 
> Signed-off-by: Sebastian Smolorz 
Acked-by: Wolfgang Grandegger 

Sorry for the delay.

Wolfgang.





[Xenomai-core] RTIPC protocol driver set

2009-09-11 Thread Philippe Gerum

In the wake of a recent discussion about Xenomai 3, the requirement to
find a substitute for the native message pipes interface (i.e. RT_PIPE)
was pointed out.

The real-time side of this new interface would have to be available from
kernel space to RTDM drivers as well, so that people adopting a clean
split model like RTDM drivers <-> userland applications would not be
left in the cold with no replacement for the legacy RT_PIPE API in
kernel space, which will be phased out in Xenomai 3.

This question, and a few others, may have found an answer with the
recent merging of the so-called RTIPC framework for Xenomai 2.5.x.
RTIPC is an RTDM-based "meta-driver" on top of which one may stack
protocol drivers, exporting a socket interface to real-time users
running in primary mode within the Xenomai domain. The whole point of
RTIPC is that such users do not want to leave real-time mode for
sending/receiving data to/from other destinations/sources.

So far, I have merged three protocols along with the RTIPC framework,
namely XDDP, IDDP and BUFP.

* XDDP stands for "cross-domain datagram protocol", i.e. a protocol to
exchange datagrams between the Xenomai (primary) real-time domain and
the Linux realm. This is what the message pipe fans may want to have a
look at. Basically, it connects a real-time RTDM socket to one of the
/dev/rtp* pseudo-devices. The network port used on the socket side
matches the minor device number used on the non-RT side. The added
bonus of XDDP is that people relying on the POSIX skin may now have
access to the message pipe feature, without dragging in bits of the
native skin API for that purpose.
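
To make this concrete, here is a minimal, untested sketch of the
real-time side; it assumes the AF_RTIPC family, the IPCPROTO_XDDP
protocol and struct sockaddr_ipc as declared in <rtdm/rtipc.h>, and a
port value of 0 pairing with /dev/rtp0 (see the examples referenced at
the end of this post for the real thing):

#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

#define XDDP_PORT 0    /* paired with /dev/rtp0 in the Linux realm */

static int xddp_send_hello(void)
{
    struct sockaddr_ipc saddr;
    int s, ret;

    /* Datagram socket bound to the RTIPC/XDDP port. */
    s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_XDDP);
    if (s < 0)
        return -errno;

    memset(&saddr, 0, sizeof(saddr));
    saddr.sipc_family = AF_RTIPC;
    saddr.sipc_port = XDDP_PORT;
    ret = bind(s, (struct sockaddr *)&saddr, sizeof(saddr));
    if (ret)
        return -errno;

    /*
     * The peer is implicitly the /dev/rtp0 pseudo-device; a plain
     * read() on that device from a regular Linux process receives
     * this datagram.
     */
    ret = sendto(s, "hello", 5, 0, NULL, 0);

    return ret < 0 ? -errno : 0;
}

On the non-RT side, a plain open("/dev/rtp0", O_RDWR) followed by
read()/write() is all it takes, which is what makes this a drop-in
replacement for the message pipe pattern.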

* IDDP stands for "intra-domain datagram protocol", i.e. a
Xenomai-to-Xenomai real-time datagram channel. This protocol may not be
as flexible as POSIX message queues (does not support message priority
but does out-of-bound sending though), but it exports a socket
interface, which is surely better for your brain than mq_*() (ask
Gilles). The basic idea behind it is that anything you could do based
on AF_UNIX sockets in the Linux realm should be (mostly) doable with
AF_RTIPC+IDDP in the Xenomai domain. However, we use numeric port
numbers or label strings, not socket paths, to bind sockets in the
Xenomai namespace.
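
For illustration, a rough, untested sketch of the pattern, under the
same assumptions about <rtdm/rtipc.h> as above and with an arbitrary
port number:

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

#define IDDP_PORT 12    /* arbitrary port in the Xenomai namespace */

/* Receiver: bind to the port, then block in primary mode on recvfrom(). */
static ssize_t iddp_receive(void *buf, size_t len)
{
    struct sockaddr_ipc saddr = { .sipc_family = AF_RTIPC,
                                  .sipc_port = IDDP_PORT };
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_IDDP);

    if (s < 0 || bind(s, (struct sockaddr *)&saddr, sizeof(saddr)))
        return -errno;

    return recvfrom(s, buf, len, 0, NULL, 0);
}

/* Sender: address the peer port directly, AF_UNIX-style but with numbers. */
static ssize_t iddp_send(const void *msg, size_t len)
{
    struct sockaddr_ipc daddr = { .sipc_family = AF_RTIPC,
                                  .sipc_port = IDDP_PORT };
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_IDDP);

    if (s < 0)
        return -errno;

    return sendto(s, msg, len, 0,
                  (struct sockaddr *)&daddr, sizeof(daddr));
}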

* BUFP stands for "buffer protocol", probably the most naive of all,
but likely the best fit when you don't care about message boundaries
and just want an efficient IPC to send a byte stream from a producer to
a consumer thread, without leaving the Xenomai domain. This protocol is
the exact equivalent of the RT_BUFFER API that came to light earlier in
the 2.5.x series, but again exports a socket interface to the
real-time application.
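
Again as an untested sketch under the same assumptions (IPCPROTO_BUFP
from <rtdm/rtipc.h>); note that the ring buffer size normally has to be
configured through a socket option before bind(), which is omitted
here, so please refer to the examples below for the exact call:

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <rtdm/rtipc.h>

#define BUFP_PORT 17    /* arbitrary port shared by producer and consumer */

/* Consumer: owns the buffer; bind() to the port, then drain the stream. */
static ssize_t bufp_pull(void *buf, size_t len)
{
    struct sockaddr_ipc saddr = { .sipc_family = AF_RTIPC,
                                  .sipc_port = BUFP_PORT };
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_BUFP);

    if (s < 0 || bind(s, (struct sockaddr *)&saddr, sizeof(saddr)))
        return -errno;

    /* No message boundaries: this returns whatever bytes are available. */
    return recvfrom(s, buf, len, 0, NULL, 0);
}

/* Producer: connect() to the consumer's port, then stream bytes out. */
static ssize_t bufp_push(const void *data, size_t len)
{
    struct sockaddr_ipc daddr = { .sipc_family = AF_RTIPC,
                                  .sipc_port = BUFP_PORT };
    int s = socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_BUFP);

    if (s < 0 || connect(s, (struct sockaddr *)&daddr, sizeof(daddr)))
        return -errno;

    return send(s, data, len, 0);
}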

The fact that all RTIPC protocols are RTDM-based means that one can
reach the socket API from kernel space as well, using the inter-driver
RTDM interface; see:
http://www.xenomai.org/documentation/xenomai-head/html/api/index.html
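
In other words, an RTDM driver could do something along these lines.
This is an untested sketch; it assumes the rt_dev_*() inter-driver
calls and the <rtdm/rtipc.h> constants used above are reachable from
kernel context:

#include <rtdm/rtdm.h>
#include <rtdm/rtipc.h>

/* Push one IDDP datagram to a peer socket bound to 'port'. */
static ssize_t kdrv_iddp_send(int port, const void *data, size_t len)
{
    struct sockaddr_ipc daddr = { .sipc_family = AF_RTIPC,
                                  .sipc_port = port };
    ssize_t ret;
    int s;

    s = rt_dev_socket(AF_RTIPC, SOCK_DGRAM, IPCPROTO_IDDP);
    if (s < 0)
        return s;

    ret = rt_dev_sendto(s, data, len, 0,
                        (struct sockaddr *)&daddr, sizeof(daddr));
    rt_dev_close(s);

    return ret;
}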

A few examples illustrating the usage of those protocols from user space
are available from my GIT tree; see:
http://git.xenomai.org/?p=xenomai-head.git;a=tree;f=examples/rtdm/profiles/ipc;h=a352a07067bfbad44d4d13cc386d567ca1911bb0;hb=HEAD

So far, reasonable testing has taken place for all protocols using the
RTDM socket interface in user space, although the RTIPC framework
itself should still be considered work in progress. In-kernel calls via
RTDM's inter-driver interface, however, remain to be validated; RTDM
does most of the job here, though, so the remaining issues, if any,
should not be complex ones.

-- 
Philippe.





Re: [Xenomai-core] nucleus/pipe.c patch: check for lingering close at xnpipe_connect()

2009-09-11 Thread Philippe Gerum
On Thu, 2009-09-10 at 12:49 -0400, Andreas Glatz wrote:
> Hi,
> 
> 
> > 
> > Whenever possible, please post patches inline (I had to manually copy it
> > to allow commenting).
> 
> Sure, no problem.
> 
> 
> > 
> > You cannot bluntly call release() here. You may not run in Linux context
> > while that handler demands this. 
> 
> It's also called in xnpipe_disconnect() (if I follow 'goto cleanup' and
> the user-space isn't connected). So is there a different policy here?
> I mean should xnpipe_connect() just be called from rt context whereas
> xnpipe_disconnect() has to be called from non-rt context?
> 
> 
> > Moreover, releasing the state is the
> > job of the NRT user that opened it and obviously still has a hand on it.
> 
> Yeah, that's the tricky part. I have left that part out for now ...
> 
> > 
> > (And the locking is imbalanced here, but that is shadowed by the other
> > flaws.)
> 
> Agree.
> 
> If we had that new pipe behaviour, it could significantly 
> speed up debugging for us, because we wouldn't have to stop (and
> eventually start) all NRT applications before restarting the RT
> application.
> 
> Thanks for commenting on my quick and dirty patch. I think, now
> I know where to continue.
> 

Allowing the RT side to be re-opened multiple times won't fly, I'm
afraid. The reason we have the linger-on-close behavior is to prevent
the NRT side from referring to stale memory whenever the RT side
preempts and disconnects. Excluding RT from preempting the write/read
calls could cause high latencies (NRT may sleep anyway, so locking
would not even be a safe option there).

Maybe you should move the RT_PIPE descriptor into a separate,
standalone module outside of your driver to keep the connection alive,
and have your driver ignore the -EBUSY status upon a failed attempt to
connect to the same minor? That descriptor is shareable in kernel
space, so there is no restriction on using it from another module, and
this would allow your driver module to be unloaded without actually
tearing down the data channel with your userland app.
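
For the record, something along these lines is what I mean by a
standalone holder module. This is an untested sketch; the symbol name
and the minor value are made up for illustration, and the calls are the
native skin kernel API:

#include <linux/module.h>
#include <native/pipe.h>

RT_PIPE shared_pipe;            /* referenced by the actual driver module */
EXPORT_SYMBOL_GPL(shared_pipe);

static int __init pipe_holder_init(void)
{
    /* Minor 0 <-> /dev/rtp0; pool size 0 means the system heap is used. */
    return rt_pipe_create(&shared_pipe, "shared_pipe", 0, 0);
}

static void __exit pipe_holder_exit(void)
{
    rt_pipe_delete(&shared_pipe);
}

module_init(pipe_holder_init);
module_exit(pipe_holder_exit);
MODULE_LICENSE("GPL");

Your driver would then simply declare 'extern RT_PIPE shared_pipe;' and
call rt_pipe_write()/rt_pipe_read() on it, so unloading and reloading
the driver never closes the channel underneath the NRT reader.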

> Andreas
> 
-- 
Philippe.


