[EMAIL PROTECTED] wrote:
> 
>>>> Has anyone achieved TDMA cycles below 200us so far? In short, I need to
>>>
>>> I don't think so.
>>>
> 
> That's what I suspected :(
> 
>>>> serve 8 slaves within 165us (~6kHz) and I was wondering if it's
>>>> possible
>>>> using RTnet. The amount of data is very small, let's say 16bits per
>>>> slot
>>>> (what a waste using GBit Ethernet ;) but it's got to be _fast_
>>>
>>> You would be surprised about how much 0.1 vs. 1 GBit/s actually matters
>>> with minimal packet sizes - compared to the remaining latencies and
>>> jitters of your whole system...
>>>
>>>> Those also reading the RTAI list might have seen my latency/jitter
>>>> benchmarking with a very powerful Core 2 Quadcore machine. This machine
>>>> shall serve as the RTnet master, equipped with an Intel Pro/1000 NIC
>>>> for
>>>> RT communication.
>>>
> 
> I tried the more sophisticated ones from the showroom but had some
> problems interpreting the results. Will go back and do more research there.
> 
>>> [ Reading the mail. ] You've done the standard timer jitter test, maybe
>>> also not yet with optimal load (see [1]). Things become far uglier when
>>> you start using peripherals, e.g. the PCI bus.
>>>
> 
> Yes, I've seen that happening...
> 
>>>> Jan once mentioned freqs somewhere below 10kHz should be possible using
>>>> a decent machine. well... is it doable?
>>>
>>> Maybe, but likely not without careful tweaking of the involved
>>> subsystems. Specifically, your application should perform 1-to-n
>>> communication where the server collects all states via unicasts from the
>>> slaves and distributes updates via a single broadcast. And don't put other
>>> traffic on the line (no non-RT tunneling, no RTmac heartbeats after
>>> startup).
> 
> The final setup foresees a pure RT network w/o tunneling.
> 
>>> That should scale quite well. So a simple 2-nodes test may
>>> already give you an impression of what is possible with your hardware
>>> and what is not.
> 
> I've set up that scenario (two machines, one cable) and started playing
> with it a few days ago, but this will need more time. The spread of the
> synchronisation timestamp deviations was quite large, 200 ~ 4000 ns,
> but still adequate IMO(?)

4 us worst-case jitter? If that is actually true (hard to believe;
PCI-related jitters alone can reach several tens of microseconds), then
it would be fairly decent.

> 
>> And, of course, if you have some SMP box, tuning IRQ and task affinities
>> for both the RT side as well as Linux is highly recommended. You gain a
>> lot if your RT-NIC IRQ can only be disturbed by the unavoidable RT-timer
>> IRQ.
> 
> Can you point me where I can find more information about that?

Regarding Linux:
 - linux/Documentation/cpusets.txt
 - /proc/irq/*/smp_affinity
And there have been a lot of postings on LKML at the beginning of this
year about how to improve the isolation.
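As a rough sketch of what such IRQ pinning can look like (the IRQ number
19, the interface name eth1, and the CPU masks below are placeholders;
check /proc/interrupts for your actual RT NIC, and note that writing to
/proc/irq requires root):

```shell
#!/bin/sh
# Sketch only: IRQ number and CPU masks are assumptions for a 4-core box.

# 1. Find the IRQ line of the RT NIC (here assumed to be eth1):
grep eth1 /proc/interrupts

# 2. Keep ordinary device IRQs on cores 0-2 (bitmask 0x7):
for irq in /proc/irq/[0-9]*; do
    echo 7 > "$irq/smp_affinity" 2>/dev/null
done

# 3. Pin the RT-NIC IRQ (assumed IRQ 19) to core 3 alone (bitmask 0x8),
#    so only the RT timer IRQ can disturb it on that core:
echo 8 > /proc/irq/19/smp_affinity
```

The masks are plain hex CPU bitmasks (bit n = CPU n), so 0x8 means "CPU3
only". Whether a given IRQ actually honours the mask depends on the
interrupt controller and kernel version.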

Regarding the RT domain: RTnet so far has no explicit CPU affinity
controls built in; it relies instead on the management mechanisms the
underlying RTOS provides. I can't help with information on RTAI in this
respect, though. The Xenomai use case I know of sets
/proc/xenomai/affinity before starting up RTnet.
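For Xenomai that could look like the following (a sketch; the mask value
0x8, meaning CPU3 on a 4-core box, is an assumption, as is using the
rtnet start script to bring the stack up):

```shell
#!/bin/sh
# Sketch: restrict Xenomai RT threads to core 3 (bitmask 0x8)
# *before* RTnet and its threads are started:
echo 8 > /proc/xenomai/affinity

# Then start RTnet as usual, e.g. via its helper script:
# rtnet start
```

Doing this before module load matters because affinity is picked up when
the RT threads are created, not retroactively.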

> 
>> Just don't try running some RT task as a busy loop on an "isolated" core
>> - you will lock up your non-RT side sooner or later as Linux is not yet
>> prepared to do complete CPU isolation.
> 
> This is a good hint! Thx!
> 
> 

Jan

