On 11/11/2012 1:22 PM, Kent A. Reed wrote:
> On 11/11/2012 1:38 PM, Michael Haberler wrote:
>> On 11.11.2012 at 18:43, Kent A. Reed wrote: <...>
>>> Curiously, I get better jitter results on the quadcore if I run
>>> both latency-test threads on cpu 0, e.g., a core that is
>>> exposed to the Linux scheduler, than on isolated cpus 2 and 3.
>>> On the other hand, latency-test running entirely on cpu 0 is
>>> somewhat more sensitive to me starting other processes, like
>>> glxgears, but the effect seems to be restricted to a few
>>> microseconds accumulated the first time I start up glxgears
>>> (maybe due to fetching glxgears from the disk?---I should try 
>>> again with a solid-state disk).
>> 
>> these results puzzle me as well.
>> 
>> one thing I can't make sense of yet is:
>> 
>> isolcpus=1 on an atom
>> 
>> bind base-thread in latency-test to cpu1 (the isolated one) and
>> servo-thread to cpu0 (where linux runs too) and I get
>> consistently lower latencies on servo-thread than on base-thread
>> 
>> making sense? none I can see..
> 
> I could explain both your observation and mine if I could come up
> with a mechanism that causes the jitter to be higher on a lightly
> loaded cpu than on a heavily loaded one. In both our situations,
> cpu0 is running other processes besides our RT threads.

When I see results like these, I think of power-saving modes and CPU
frequency switches.  Make sure you disable *ALL* available
power-saving features in the BIOS and everywhere else possible.  "Back
in the day", I would see 100 µs 'blackouts' (unable to even move data
around with a hardware bus-mastering PCIe controller) when the CPU
switched frequencies to speed up or slow down based on load.

The newer CPUs are quicker at this now (20-50 µs IIRC), but you still
want to make sure you're not triggering any of the fancy
speed/power-saving features if you want the best possible latency.
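
For reference, the generic cpufreq sysfs interface gives a quick way
to check (and pin) the CPU clock from userspace; whether these files
exist, and whether a "performance" governor is offered, depends on the
kernel and hardware, so treat this as a sketch only:

  # Show the current governor and clock for every core
  grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  grep . /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

  # Force the "performance" governor so the clock never ramps down
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do
      echo performance > $g
  done

C-states themselves are usually a BIOS setting or a kernel boot
parameter (e.g. processor.max_cstate=1, intel_idle.max_cstate=0); the
exact knobs vary by platform.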

My typical test for this is to run enough dummy processes to peg the
CPU load (something like "while : ; do echo KeepBusy > /dev/null ;
done").  If the latency numbers are consistently better under load
than without it, some 'helpful' background power-saving routines are
likely causing you problems.  That's how I determined I needed to
disable the C-states on my test platform (jitter went from 442708 ns
to 3159 ns under RTAI).
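
To make that test reproducible, something like the following runs one
busy loop per core alongside latency-test (the LinuxCNC tool discussed
above), then cleans up afterwards; a sketch only, assuming a bash-like
shell:

  # One tight shell loop per core to keep every CPU pegged
  NCPU=$(grep -c ^processor /proc/cpuinfo)
  for i in $(seq 1 $NCPU) ; do
      while : ; do echo KeepBusy > /dev/null ; done &
  done

  latency-test        # compare the jitter numbers against an idle run

  kill $(jobs -p)     # stop the busy loops when finished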

> In the early days of EMC2, "the cache" was proposed as the problem
> with a lightly loaded cpu, where "other processes had enough time
> to replace the RT code in cache between invocations of the thread."
> Improved real-time performance with cpu hogs running was taken as
> proof of the conjecture. It made sense to me at the time.
> 
> I don't know what to make of this conjecture in the case of
> isolated cpus. I thought if they were isolated from the Linux
> scheduler, then no processes would be loaded to them unless we
> loaded them explicitly. Is there something going on in Xenomai that
> could be causing this? (Note that I haven't tried this with the
> RT-PREEMPT alternative). Could it be that my naive picture of the
> multicore chip architecture in which each core has its own cache is
> wrong and the cores share common cache so that activity on one core
> could cause another's idle process to be flushed? (Note to self: go
> find out!)

Modern multi-core processors have complex multi-level caches: the
largest cache is typically shared among all on-chip CPU cores, with
smaller, faster per-core caches closer to each CPU (the levels trade
size against access speed).  With an isolated CPU it may be possible
to fit all the HAL code (particularly for software stepping) into the
per-CPU cache, but this is very system- and configuration-dependent.
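
To see how the caches are actually laid out on a given machine, the
sysfs cache topology is the easiest place to look ("lscpu" from
util-linux gives a quicker summary if it's installed); again, just a
sketch:

  lscpu | grep -i cache

  # Level, type, size and sharing for each cache attached to cpu0.
  # A shared_cpu_list spanning several cores means activity on one
  # core can evict data cached for another.
  for d in /sys/devices/system/cpu/cpu0/cache/index* ; do
      echo "$d: L$(cat $d/level) $(cat $d/type) $(cat $d/size)" \
           "shared with CPUs $(cat $d/shared_cpu_list)"
  done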

Given the bandwidth available on most memory controllers today, I
think it's probably much more important to avoid entering any
power-saving modes than to ensure the real-time code stays in the
cache (you can read a *LOT* of data from memory in the 20 µs the CPU
will be doing nothing if you let it switch operating frequencies on
the fly).
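
As a rough sanity check: assuming, say, 10 GB/s of sustained memory
bandwidth, a 20 µs stall is long enough to move on the order of 200 KB
from main memory, which should dwarf the working set of a typical HAL
base or servo thread.  Refilling a cold cache is cheap next to a
single frequency transition.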

-- 
Charles Steinkuehler
[email protected]