Some guy wrote his own "measurement OS" for studying CPU behavior in light of Spectre-type information-leak vulnerabilities. I forget the name of it at the moment, and I do not think he open-sourced it anyway.
Anyway, if you don't have something like that handy and you don't want to chase performance ghosts, what I would recommend is an L-average of the M-min of the N-max (basically like my `memlat` code), where you tune the loop sizes to get small standard deviations in the outermost loop, and N is probably adjusted based on L1/L2/L3/DIMM levels. If you use the smallest max loop that can measure what you're after, that "multiplies": smaller min and averaging loops will still get you stable numbers.

Even in single-user mode with no network cable, **_and_** with low standard deviations in the outermost loop, depending on how many milliseconds the max loop takes, you might still be measuring "the time you want" + "a fairly constant scheduling/L3-sharing impact from however many Linux kernel threads can line up against your program at once during the max loop". You may be able to use `chrt -r 99 myprogram` or `taskset` to minimize that. Chances are that if the inner max loop is pretty short-lived, such "steady competitors" are unlikely, and so a small standard deviation is unlikely to be a false flag.

Past that step, there is also a way to boot a Linux kernel with some CPU cores taken out of scheduler availability (`isolcpus`); when you then use `taskset` to assign a process to one of those isolated cores, it is the only thing that runs on that core, period. That probably reduces the "what am I measuring?" uncertainty to virtual memory system and L3-sharing impacts, and I doubt many (or any) Linux kernel threads are very VM/L3-intensive in single-user mode. So that's probably about as close as you can get, and actually probably pretty close.

It's admittedly all a bit of a PITA, but probably less so than buying a real-time OS or writing your own. :-) Anyway, maybe everyone knew all this already, but it seemed worth saying in light of that code @miran linked to.
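In case the nested-loop scheme is unclear, here is a minimal sketch (in Python rather than my actual `memlat` code; the function name and the default loop sizes are just placeholders you would tune per the above):

```python
import statistics
import time

def avg_min_max(op, N=1000, M=10, L=30):
    """L-average of the M-min of an N-iteration timed ("max") loop.

    Innermost: time N back-to-back calls of `op` as one sample.
    Middle:    take the min of M such samples (strips interference).
    Outermost: average L of those mins; the stdev of the L mins is
               what you tune N/M/L to make small.
    Returns (mean, stdev) in seconds per call of `op`."""
    mins = []
    for _ in range(L):
        samples = []
        for _ in range(M):
            t0 = time.perf_counter()
            for _ in range(N):
                op()
            samples.append(time.perf_counter() - t0)
        mins.append(min(samples) / N)  # normalize to per-op time
    return statistics.mean(mins), statistics.stdev(mins)
```

You would then shrink N to the smallest value whose max loop still resolves what you're measuring, and grow M/L only as needed until the reported stdev is acceptably small relative to the mean.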

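For the `chrt`/`taskset`/`isolcpus` steps, the command shapes look roughly like this (core number 3 and `myprogram` are just example values; the real-time priority bits need root or `CAP_SYS_NICE`):

```shell
# Real-time round-robin priority, pinned to one core:
sudo chrt -r 99 taskset -c 3 ./myprogram

# Stronger isolation: add to the kernel boot command line (e.g. via grub),
# then reboot; core 3 is removed from normal scheduler availability:
#   isolcpus=3
# Afterward, only explicitly pinned processes run there:
taskset -c 3 ./myprogram
```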