On Thu, Apr 23, 2009 at 2:06 PM, Jan Kiszka <jan.kis...@web.de> wrote:

> Hi Rob,
>
> Rob Gubler wrote:
> > Hey Guys,
> >
> > After reading the device driver porting guide, the task of porting
> > the device driver looks to be fairly straightforward.  That being
> > said, though, the recent discovery I made that my Intel chipset
> > (82575) is not currently supported by RTnet prompted a few questions
> > for me.  I spent some time reading up on the Xenomai 3 plans, how
> > they relate to the CONFIG_PREEMPT_RT patch, and also RTDM.  It is
> > easy to see the advantages of using RTDM and RTnet when I'm not
> > running with PREEMPT_RT, but I am left wondering: if I were to use
> > this patch with an unmodified Linux NIC device driver, would I be
> > any worse off than if I were to port the driver to RTDM / RTnet?
>
> Yes. PREEMPT_RT does not improve the predictability of the networking
> stack. Major issues are:
>  - no resource guarantees (packet buffers are allocated on demand, no
>   reservations possible)
>  - no protection against priority inversion when using the stack in
>   parallel with "normal" users
>
> You can mitigate the second issue a bit by enforcing strict CPU
> isolation. But that means no non-RT application or kernel service should
> send or receive packets on the same CPUs your RT application is bound to.
>
> Additionally, you may face latency problems with mainline NIC drivers as
> they are typically designed for throughput.
>
> >
> > Is there special consideration in RTDM or the RTnet stack that
> > benefits from ported device drivers?  I read in the document Jan
> > wrote that one benefit of RTDM was that already existing drivers
> > will be easily moved over to new incarnations of PREEMPT_RT.  I like
> > the idea of a uniform real-time driver API, but I'm curious about
> > performance advantages.
>
> The benefit of porting Linux drivers to RTnet is that you may spot
> latency issues in the process and drop features that are unneeded or
> problematic with respect to latency.
>
> Basically, the only API-related benefit comes from RTnet's pool-based
> buffer management. The RTDM interface itself has no inherent advantage
> over other interfaces. It provides separation from Linux in order to
> allow a different underlying implementation, specifically as long as
> Linux cannot offer sufficient guarantees. On the other hand, there is
> already a Xenomai sub-project that ported an RTDM subset over Linux
> (preferably PREEMPT_RT). This will allow us to migrate drivers like
> RTnet to a native real-time Linux as well.
>
> >
> > The Xenomai site makes some statements about future RTDM plans.
> > They sound like improvements in the area of scalability and
> > performance, but I don't know how (or if) it compares to native
> > Linux running PREEMPT_RT.  Specifically, these were the plans:
> >
> >    - RTDM: timer API
>
> done
>
> >    - RTDM: handle unsynched timestamps from different CPUs
>
> dropped, we only support synchronized per-CPU clocks
>
> >    - RTDM: CPU affinity for driver threads (likely part of a broader
> >      SMP API redesign)
>
> can be controlled indirectly via /proc/xenomai/affinity ATM, but
> otherwise still a to-do
>
> >    - RTDM: threaded IRQs
>
> to-do (urgent use cases may accelerate such a feature...)
>
> >    - RTDM: user-space driver environment ----- This sounds great for
> >      developing new device drivers in an environment where gdb is
> >      easily used.
>
> Alexis posted a first proposal a few months ago (which I still have to
> look at in detail - mmpf...), see the mailing list
>
> >
> > Some quick background information about my project:  I am executing
> > on an x86 (multi-core) platform from within a Xenomai userspace
> > thread, using the POSIX skin.  And I am sending and receiving raw
> > Ethernet frames (PF_PACKET, SOCK_RAW).
>
> At what rates (packets/s, size of packets)? What are your worst-case
> latency requirements?
>
> >
> > I am curious to hear your input if you guys have time.  Thanks!
> >
> > -Rob
> >
>
> Hope I was able to clarify some aspects.
>
> Jan

Yep!  That helps a lot.

Regarding your question...

I have a network of nodes all sending traffic to a central machine that is
running RTnet.  All of the nodes involved are marching along on the same
TDMA schedule.  The goal is to scale to ~ 100 nodes, on a 5 millisecond TDMA
slot schedule.  In a (very) worst case scenario all nodes would transmit 2
messages per 5 millisecond slot, where each message is 1500 bytes, directed
to the central machine.

So that being said:

packets/s = (2 messages) * (100 nodes) * (1000 ms / 5 ms) = 40,000
size of packets = 1500 bytes

That's approximately 480 Mbps of payload, closer to 490 Mbps on the wire
once Ethernet framing overhead is included.
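Put another way, that's 200 frames arriving per 5 ms slot.  Assuming
gigabit Ethernet (the 82575 is a GigE controller), a full-size frame
occupies about 1538 bytes * 8 / 1 Gbps ~= 12.3 us on the wire including
framing and inter-frame gap, so the worst case is roughly 2.5 ms of each
5 ms slot spent just receiving, before any processing or replies.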

Well, as far as latency requirements go: the central machine needs to
read the packets, do some processing, and send a much smaller number of
packets back out, all within that same TDMA slot.
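
To make that concrete, the per-slot loop on the central machine would
look something like the sketch below (just an outline of the PF_PACKET /
SOCK_RAW usage; the EtherType 0x88b5 and the processing step are only
placeholders):

    /* Rough sketch of the per-slot receive loop on the central machine;
     * the EtherType 0x88b5 is only a placeholder. */
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <linux/if_ether.h>

    int main(void)
    {
        int sock = socket(PF_PACKET, SOCK_RAW, htons(0x88b5));
        unsigned char frame[ETH_FRAME_LEN];

        for (;;) {
            /* Block until the next raw frame from a node arrives. */
            ssize_t len = recvfrom(sock, frame, sizeof(frame), 0,
                                   NULL, NULL);
            if (len <= 0)
                continue;

            /* Process the message here, then send the (much smaller)
             * reply with sendto() on the same socket before the slot
             * ends. */
        }
        return 0;
    }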

On a related note, I was looking into the Intel 82575 driver, and there
is a variable I can configure via modprobe that will rate-limit the
number of interrupts per second that are triggered.  That might be
something interesting for me to look at once I get the driver ported.
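
If it turns out to be the usual Intel InterruptThrottleRate parameter (I
still need to verify the exact name and units against the out-of-tree
igb driver that covers the 82575), loading it would look something like:

    modprobe igb InterruptThrottleRate=8000

i.e. capping the adapter at roughly 8000 interrupts per second.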

Thanks again for the info.

-Rob