Hi Mike,

I'm attaching the makefile I used. It's the one provided by Synapticon: ...
cool - many thanks; this will certainly be helpful!!


Can you answer any of my questions? :)

I wanted to thank you all for your valuable advice. I'm now achieving nearly a 3.5 kHz loop rate without a single datagram loss. For reference, my system is: Ubuntu 16.04 with kernel 4.8.15-rt10 (PREEMPT_RT patched) and the native driver. I would like to ask another question: in the dc_user example, I don't understand why the latency is measured the way it is. Could someone clarify? Is this latency the latency of our application (the time the scheduler takes to put our process into the running state, so that our application can process the newly arrived datagram)? What quantity exactly is being measured? Bonus: can we accurately measure, with Wireshark, the time a datagram takes to leave the master and come back?
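
For context, my rough understanding of the measurement in the cyclic examples is something like the sketch below. This is only my own illustration, not the actual dc_user source; the names cycle_task and PERIOD_NS are made up. The idea is that "latency" would be how much later than the requested absolute wakeup time the task actually runs, i.e. scheduler wakeup latency:

    /* Hypothetical sketch of measuring scheduler wakeup latency in a
     * cyclic real-time task. cycle_task and PERIOD_NS are invented
     * names, not taken from the dc_user example. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000L
    #define PERIOD_NS       1000000L   /* 1 ms cycle -> 1 kHz */

    static int64_t timespec_diff_ns(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
    }

    void cycle_task(void)
    {
        struct timespec wakeup, now;

        clock_gettime(CLOCK_MONOTONIC, &wakeup);

        while (1) {
            /* Schedule the next cycle at an absolute point in time. */
            wakeup.tv_nsec += PERIOD_NS;
            while (wakeup.tv_nsec >= NSEC_PER_SEC) {
                wakeup.tv_nsec -= NSEC_PER_SEC;
                wakeup.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &wakeup, NULL);

            /* The latency is how late the scheduler actually woke us up
             * compared to the requested wakeup time. */
            clock_gettime(CLOCK_MONOTONIC, &now);
            int64_t latency_ns = timespec_diff_ns(now, wakeup);
            printf("wakeup latency: %ld ns\n", (long) latency_ns);

            /* ... receive datagrams / process data / send datagrams ... */
        }
    }

Is that roughly what the example is measuring, or is it something else entirely?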

I would love to; however, I'm only a user myself and not really deep into the protocol stack (I've never even tried debugging the datagrams on the wire with the debug interface or Wireshark, though I want to!).

Once I get it all working to good satisfaction, I will also share my setup/steps to epiphany. FYI, I am not sure it is really worth doing the entire Xenomai dance; shouldn't latency (on a modern and fast enough system, such as your laptop) mostly be determined by the quality of the Ethernet driver and, of course, the chipset? I'm just saying this because the PREEMPT_RT patch seems pretty much standard and is really simple to patch and compile.
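
For what it's worth, on a PREEMPT_RT kernel the application-side setup I have in mind is just the usual real-time boilerplate; a rough sketch (the priority value 80 is only an example, pick what suits your system):

    /* Sketch of typical PREEMPT_RT application setup: lock memory and
     * run the cyclic task with a real-time FIFO priority. */
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int setup_realtime(void)
    {
        /* Avoid page faults during the cycle by locking all memory. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
            perror("mlockall");
            return -1;
        }

        /* Give the process a real-time FIFO priority. */
        struct sched_param param;
        memset(&param, 0, sizeof(param));
        param.sched_priority = 80;   /* example value */
        if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }

Beyond that, my impression is that the remaining jitter is dominated by the NIC driver and chipset rather than by the choice of RT framework.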

Thanks!! Jürgen

_______________________________________________
etherlab-users mailing list
[email protected]
http://lists.etherlab.org/mailman/listinfo/etherlab-users
