On 8/13/07, Jan Kiszka <[EMAIL PROTECTED]> wrote:
>
> juanba romance wrote:
> > Hello, all,
> > I am currently developing an RTDM/Xenomai driver for the CANbus chipset
> > 82527 that I think could be of some interest.
> > It has the following features:
>
> Thanks for moving our private thread here! See, now we know that
> Wolfgang is already working on 82527 support for RT-Socket-CAN -
> something I wasn't aware of as well.
>
> >
> >    1. Specific management of the CANbus remote-frame capability: it
> >    couples the real-time bus data flow with user-software feedback to
> >    handshake remote frames and a mailbox update callback for the
> >    auto-replied messages
>
> Mind to elaborate what you precisely gain here compared to "open-coded"
> designs (loop closed over the application)? Can you quantify the
> improvements?


After reviewing your current user interface I cannot see how an RF cycle
flows through the user application while keeping the latency at the
receiver side as low as possible. Maybe it's my own misunderstanding.

The point is: one node requests information from another one by issuing an
RF (remote frame); the CAN specification says that the RF receptor shall
acknowledge the cycle by issuing the corresponding DF (data frame), and
right here is where I am fuzzy. We use this capability in real time as much
as possible, relying only on the CANbus network load: the RF handshake is
performed by the RF receptor's mailbox auto-reply capability, and the user
software is fed back only once the DF handshake is decoded on the network.
That event triggers the user actions, i.e. updating the message data with
the new local variable state. This behaviour is requested at the
configuration stage; such messages are labeled as "quick.ack" responses,
because no software is involved in the reply at all. The RF requester thus
has the guarantee that the information is sampled without any
software-coupled jitter. The typical approach found in other stacks is
labelled "slow.ack": the RF request is not answered until it reaches some
software layer (kernel or user space) that explicitly issues the data frame
as usual; this is how can-festival currently works.

Both operations are included in the proposal.
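
To make the point more concrete, here is a minimal sketch of how such a
per-mailbox choice could be expressed; the structure, field and flag names
below are purely illustrative assumptions, not the driver's actual
interface:

/* Illustrative only: hypothetical configuration of one i82527 mailbox for
 * "quick.ack" remote-frame handling (hardware auto-reply, software notified
 * only after the data frame has been observed on the bus). */
#include <stdint.h>

enum rf_ack_mode {
        RF_SLOW_ACK,    /* RF is answered later, by software issuing the DF */
        RF_QUICK_ACK,   /* RF is answered at once by the mailbox auto-reply */
};

struct i82527_mbox_cfg {
        uint8_t          mbox;        /* mailbox / message object number */
        uint32_t         can_id;      /* identifier served by this mailbox */
        enum rf_ack_mode ack_mode;    /* quick.ack vs. slow.ack behaviour */
        uint8_t          data[8];     /* initial auto-reply payload */
        uint8_t          len;
        /* callback invoked once the DF handshake is decoded on the network,
         * so the application can refresh data[] for the next RF */
        void           (*df_sent_cb)(struct i82527_mbox_cfg *cfg);
};

Whether a given identifier answers in hardware (quick.ack) or defers to
software (slow.ack) is then decided once, at configuration time, instead of
per request.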

> >    2. Transparent use to push/pull data from the driver using a common
> >    data format
> >    3. Capability to push a bunch of CANbus messages in a single system
> >    call. The bunch is copied to a kernel-domain ring buffer to guarantee
> >    low latencies at the user side. A specific kernel thread drains the
> >    ring, pushing the user requests into the chipset
>
> That was discussed before in the context of Socket-CAN. My feeling is
> that it /could/ be useful in case you have to issue longer streams of
> CAN frames at high rates, and specifically if your CAN hardware can
> handle these streams autonomously. Is the 82527 able to do so?
>
> In any case, this would complicate the existing stack and driver and
> would first require careful evaluation of the achievable improvement
> (lower latency, lower system load?).

The i82527 has 15 mailboxes with fixed priority; the lowest-priority one is
hardwired to the RX operation. So theoretically you can pipeline up to 14
TX messages. When they are all busy we call it a "pileup", because the
hardware handler has to wait until a mailbox becomes free; in our case this
is handled either through the mailbox-alarm mechanism or on the
transmission side of the ISR. I mentioned the "low latency" term because I
have decoupled the TX-loopback feedback from the ISR into a kernel RT
thread/task, so the ISR only cleans/stops the mailbox software/hardware
resources. The user call is blocked only for the time required to push the
message bunch into the transmission ring. The physical transmission then
runs in open loop as long as no error/alarm is sampled..
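
As a rough illustration of that path (all names, sizes and helpers below
are assumptions for the sketch, not the driver's real code), the write side
only copies frames into a ring, and a separate RT task later drains the
ring into free mailboxes:

/* Sketch only: a user "bunch" write copies frames into a TX ring; an RT
 * drain task feeds them into free i82527 mailboxes when the hardware
 * allows. */
#include <stddef.h>
#include <stdint.h>

#define TX_RING_SLOTS 64                /* power of two, for the index mask */

struct can_frame_buf {
        uint32_t id;
        uint8_t  len;
        uint8_t  data[8];
};

struct tx_ring {
        struct can_frame_buf slot[TX_RING_SLOTS];
        unsigned int head;      /* next slot written by the user call */
        unsigned int tail;      /* next slot consumed by the drain task */
};

/* Called from the write handler: blocks only as long as the copy takes. */
static int tx_ring_push_bunch(struct tx_ring *r,
                              const struct can_frame_buf *bunch, size_t n)
{
        size_t i;

        for (i = 0; i < n; i++) {
                unsigned int next = (r->head + 1) & (TX_RING_SLOTS - 1);

                if (next == r->tail)
                        return (int)i;  /* ring full: partial push */
                r->slot[r->head] = bunch[i];
                r->head = next;
        }
        return (int)n;
}

/* Hypothetical hardware helpers, not shown here. */
extern int  i82527_mbox_get_free(void);         /* < 0 means "pileup" */
extern void i82527_mbox_load(int mbox, const struct can_frame_buf *f);

/* Body of the RT drain task: runs until the ring is empty or no mailbox is
 * free; it is re-armed by the TX ISR / mailbox alarm once resources return. */
static void tx_ring_drain(struct tx_ring *r)
{
        while (r->tail != r->head) {
                int mbox = i82527_mbox_get_free();

                if (mbox < 0)
                        break;          /* pileup: wait for the alarm */
                i82527_mbox_load(mbox, &r->slot[r->tail]);
                r->tail = (r->tail + 1) & (TX_RING_SLOTS - 1);
        }
}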


> >    4. Driver readout using a native RT message queue where the control
> >    and data flow is published
>
> And this way you make your driver unportable, e.g. to move it over the
> RTDM layer Wolfgang wrote for the -rt kernel. RTDM drivers ought to
> use RTDM services (or Linux ones), not other skins. If a generally
> useful service is lacking, we need to think about adding it - to RTDM.
>
Fully deliberate; this is one of the reasons why I labelled the stuff
"xenomai-RTDM" instead of plain "RTDM". I assume that the native layer is
available to be used anyway. My first intention is not to build something
fully compliant with the RTDM layer; that is a second step from my point of
view. I need the driver ready ASAP to be used in the Xenomai framework
where our applications are running..

> >    5. Multichipset capabilities; right now a commercial PC104 board with
> >    two devices is used. The on-board CPU is an SBC VIA C3 1GHz processor
> >    running the stack xenomai-2.3.1/vanilla-2.6.20-15/Adeos-ipipe-1.7-03
> >    6. board monitoring through the /proc file system entry
> >    7. Local Data Transfers controlled with RT-alarms
>
> Another violation - but this one is easily avoidable with RTDM timers
> that come with API revision 6 (upcoming Xenomai 2.4).

Same as above
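
For reference, and assuming the RTDM timer services that come with API
revision 6 look roughly as sketched below (signatures recalled from the
RTDM driver API, not verified against a built tree), the RT-alarm driving
the local data transfers could later be mapped onto rtdm_timer_* calls to
stay inside RTDM:

/* Sketch: periodic local data transfer driven by an RTDM timer instead of
 * a native RT-alarm. Period and names are illustrative. */
#include <rtdm/rtdm_driver.h>

static rtdm_timer_t xfer_timer;

static void xfer_timer_handler(rtdm_timer_t *timer)
{
        /* kick the periodic local data transfer here */
}

static int xfer_timer_setup(void)
{
        int err;

        err = rtdm_timer_init(&xfer_timer, xfer_timer_handler, "i82527-xfer");
        if (err)
                return err;

        /* first expiry in 1 ms, then every 1 ms, relative to now */
        return rtdm_timer_start(&xfer_timer, 1000000ULL, 1000000ULL,
                                RTDM_TIMERMODE_RELATIVE);
}

static void xfer_timer_cleanup(void)
{
        rtdm_timer_stop(&xfer_timer);
        rtdm_timer_destroy(&xfer_timer);
}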


> >    8. Virtual support to check applications/driver usage/design; right
> >    now only the chipset is virtualised, but plans to have network
> >    transactions are ongoing
> >    9. ISR hardware optimizations focused on the network readout to
> >    guarantee low latencies
>
> Any numbers?

Right now I am on holidays and I cannot run any scope test, but I remember
that the worst case was around 100 usec to fully read a mailbox holding 8
bytes. It is entirely bound by the hardware ISA mapping: every chipset
register read cycle requires three IO operations, i.e. write the addressed
register, perform a dummy read, then do the valid read. This killer costs
500 nsec per chip-select activation, but the biggest burner is the 1000
nsec between consecutive in/out IO address-space instructions, so roughly
4 usec per byte read.
We have implemented the chipset clearing and data readout in ~20 IO cycles,
so the numbers fit quite well with the xenomai-i386 latencies.
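
For anyone not familiar with the access scheme, the per-register cost comes
from an indexed read sequence along the following lines; the port offsets
and the helper name are assumptions made for the illustration, not the
driver's actual code:

/* Sketch of the indexed ISA access pattern described above: one register
 * read takes three IO cycles (address write, dummy read, valid read), and
 * with ~1 usec per in/out instruction that is several usec per byte read. */
#include <sys/io.h>     /* inb()/outb(); a kernel driver would use <asm/io.h> */
#include <stdint.h>

#define I82527_ADDR_PORT(base)  ((base) + 0)    /* assumed index register */
#define I82527_DATA_PORT(base)  ((base) + 1)    /* assumed data register */

static uint8_t i82527_read_reg(unsigned int base, uint8_t reg)
{
        outb(reg, I82527_ADDR_PORT(base));      /* 1: select the register */
        (void)inb(I82527_DATA_PORT(base));      /* 2: dummy read */
        return inb(I82527_DATA_PORT(base));     /* 3: the valid read */
}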

> >    10. Easy porting to other i82527-based boards
> >    11. Full transmission operation handling the 16 message object set
> >
> > We also have in plan:
> >
> >    1. Capabilities for filtering/masking the incoming flow at the driver
> >    stage, allowing the same context (in the "xenomai nomenclature") to
> >    feed specific threads through some kind of binding/configuration
> >    process. This is an open issue because I don't have a clear approach
> >    to follow..
> >    2. can-festival coupling
>
> Look, with Socket-CAN, you would now already have CAN-Festival binding. :)

Yes, I know, it's a clear motivation to use it ;-)


> But maybe this library scenario can be used to explain why you need to
> do things in a special way and what you can gain that way. Looking
> forward!

From my point of view the RTDM layout is ideal for porting to Linux.
The facts that the chipset support was missing and that we had already
gained experience developing standard Linux drivers for this chipset
biased my approach a lot. For sure, if the chipset support had been
available in time we would reconsider the stuff and reuse/patch the
official stack, provided the latencies are similar..

>
> > I think this is the full picture, i look forward..
> >
> > Best regards..
> >
> Jan
>

IHTH..
Best regards..

Juanba
_______________________________________________
Xenomai-core mailing list
Xenomai-core@gna.org
https://mail.gna.org/listinfo/xenomai-core
