Hi,

Some more interesting findings (no I-pipe trace yet, though).
> Hmm, this doesn't convince me yet. Such skews during startup may as well
> be triggered by unusual load during runtime (non-RT activity or new RT
> components). Did you put your system under adequate non-RT load as well
> while measuring the outputs?

Running latencytest with my application shows an average latency of about
40 ns and a max of 200 ns. This was rather shocking, so I turned off rtcan
in my application: now the max latency is 60 ns. Turning off EML and
turning rtcan back on gives a max latency of 230 ns. How is that for
strange?

Since I can see the scope output bobbing by 200 ns during the latency
test, I can also see that when I run my application without the latency
test, the huge max latency disappears entirely. Maybe it is time for the
trace, but then again I am still using CAN over the parallel port, so I
will first see what it does on a machine with a PCI CAN adapter. I think
I know what happens: due to the external load, the CAN receive interrupt
triggers the Rx ISR briefly before the 1 ms task period ends. Because of
the priority of the ISR (huge debate over this) and its atomicity (if I
remember correctly), reading out the slow hardware delays the start of
the new task period.

Just thought it was interesting to mention. Btw, when the latency appears
there are no overflow messages or anything like that, which supports my
theory about the cause. Btw2, the 200 ns latency spikes do not cause the
scope to lose lock on the saw-tooth, so whatever causes that problem is
of a different nature still.

Regards,
Roland.

>> I will keep the check disabled, but for the EML chaps I do think this
>> is a point of interest. I would be very interested in how this index
>> shift occurs and why it is persistent after occurring once.
>>
>> Sorry for the pragmatic qualifications here, but in the end it's the
>> stability of the outputs that will determine the behaviour of the
>> machine, so it's not a bad way to assess the software. :)
>
> A problem isn't solved until it is also understood.
>
>>> If the problem persists (or you _really_ want to understand what
>>> happens), you could try to put an xntrace_user_freeze(0, 1) before
>>> the line which emits that EML warning, turn on the I-pipe tracer, set
>>> a large back_trace_points value (a few thousand), enable verbose
>>> mode, and grab what /proc/ipipe/trace/frozen reports after the
>>> hiccup. See [1] for more howtos.
>>
>> Done this before so it should not be a problem. I don't think it is
>> necessary quite yet, as the behaviour at the moment looks good.
>
> In that case, I would suggest all the more to collect the data, maybe
> now about the fragile startup case.
>
> Jan
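PS: To put numbers on the period lateness rather than reading it off the
scope, here is a minimal sketch of a 1 ms periodic task that records how
late each period actually starts, written against the Xenomai 2 native
skin. The task name, priority, and loop count are made-up illustration
values (not my real application), and it assumes the timer runs in
oneshot mode so rt_timer_read() returns nanoseconds:

#include <stdio.h>
#include <sys/mman.h>
#include <native/task.h>
#include <native/timer.h>

#define PERIOD_NS 1000000ULL    /* the 1 ms task period */
#define LOOPS     10000

static RT_TASK task;

static void periodic_loop(void *arg)
{
        RTIME expected, now;
        long late, max_late = 0;
        int i;

        /* Start the periodic timeline now, with a 1 ms period. */
        rt_task_set_periodic(NULL, TM_NOW, PERIOD_NS);
        expected = rt_timer_read();

        for (i = 0; i < LOOPS; i++) {
                rt_task_wait_period(NULL);
                now = rt_timer_read();
                expected += PERIOD_NS;

                /* If the CAN Rx ISR fires just before the period
                 * boundary and spends its time reading out the slow
                 * parallel-port hardware, the lateness measured here
                 * should spike in the same range as the scope shows. */
                late = (long)(now - expected);
                if (late > max_late)
                        max_late = late;
        }

        printf("max lateness over %d periods: %ld ns\n", LOOPS, max_late);
}

int main(void)
{
        /* Lock memory to avoid page faults in the RT path. */
        mlockall(MCL_CURRENT | MCL_FUTURE);

        rt_task_create(&task, "period_check", 0, 90, T_JOINABLE);
        rt_task_start(&task, periodic_loop, NULL);
        rt_task_join(&task);
        return 0;
}

Built against libnative and run next to the CAN load, this should show
the spikes numerically if the ISR theory is right.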
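PS2: Jan's tracer recipe, spelled out as a sketch. The function, flag,
and warning text below are hypothetical stand-ins for the real EML code
(which I don't have in front of me); only xntrace_user_freeze() itself
and the /proc entries are the real interface, and the header location is
an assumption:

#include <linux/kernel.h>
#include <nucleus/trace.h>      /* xntrace helpers, location assumed */

static int index_shift_detected;        /* hypothetical flag */

static void eml_check_index(void)       /* hypothetical function */
{
        if (index_shift_detected) {
                /* Freeze the I-pipe trace right here, so that
                 * /proc/ipipe/trace/frozen ends exactly where the
                 * warning fires. */
                xntrace_user_freeze(0, 1);
                printk(KERN_WARNING "EML: index shift detected\n");
        }
}

/* Tracer setup from a shell before reproducing the problem (needs a
 * kernel built with CONFIG_IPIPE_TRACE):
 *
 *   echo 4000 > /proc/ipipe/trace/back_trace_points
 *   echo 1    > /proc/ipipe/trace/verbose
 *   echo 1    > /proc/ipipe/trace/enable
 *
 * and after the hiccup:
 *
 *   cat /proc/ipipe/trace/frozen > frozen.txt
 */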