This was not the culprit. Same results.

Does Xenomai replace the memcpy() call with its own implementation? (I don't 
think so.)

What about thrashing of cache lines through context switches? But if we run it 
on Linux alone, the cache lines should be thrashed just as much; there should 
not be any difference.
Could the mere presence of a Xenomai POSIX thread cause a lot of context 
switches, even if only a memcpy is executed inside the thread? Shouldn't 
Xenomai threads run totally uninterrupted if they have the highest priority?

Could somebody please run this test on their hardware and check whether these 
differences between the Xenomai POSIX skin and native Linux show up there as 
well?


Best regards,

Daniel Schnell


-----Original Message-----
From: Gilles Chanteperdrix [mailto:[EMAIL PROTECTED] 
Sent: 15 May 2007 12:16
To: Daniel Schnell
Cc: [email protected]
Subject: Re: [Xenomai-help] memcpy performance on Xenomai


Improving clock_gettime overhead by reading the tsc directly is my very next 
task. If you want to check whether the effect you measure is the result of 
clock_gettime overhead, you can measure the duration of memcpy with the native 
API service rt_timer_tsc, and convert the tsc difference with rt_timer_tsc2ns.
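A sketch of that suggestion, assuming the Xenomai 2 native skin (buffer size
and variable names are illustrative; this needs a Xenomai-patched kernel and
is linked against the native skin library, so it cannot run on plain Linux):
bracket the memcpy() with rt_timer_tsc() and convert the tick difference with
rt_timer_tsc2ns(), bypassing the clock_gettime path entirely.

```c
#include <stdio.h>
#include <string.h>
#include <native/timer.h>   /* rt_timer_tsc(), rt_timer_tsc2ns() */

#define BUF_SIZE (64 * 1024)
static char src[BUF_SIZE], dst[BUF_SIZE];

int main(void)
{
    RTIME start = rt_timer_tsc();   /* raw timestamp-counter reading */
    memcpy(dst, src, BUF_SIZE);
    RTIME end = rt_timer_tsc();

    /* Convert the tsc tick difference to nanoseconds. */
    printf("memcpy of 64 KiB took %lld ns\n",
           (long long)rt_timer_tsc2ns((SRTIME)(end - start)));
    return 0;
}
```

This way the measurement itself is just two tsc reads, so any remaining
difference between the POSIX skin and native Linux is more likely in memcpy
itself than in the timing calls.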

-- 
                                                 Gilles Chanteperdrix

_______________________________________________
Xenomai-help mailing list
[email protected]
https://mail.gna.org/listinfo/xenomai-help
