There’s also a graphical view, along with a predictor of where the numbers
will converge in future years:
https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html
> On 25 Sep 2017, at 14:18, Richard Warburton wrote:
>
> Hi,
>
> On 19 Nov 2017, at 16:32, John Hening wrote:
>
> I am analyzing the following Java code:
>
> class Main {
>
>     private static class C {
>         private C ref;
>     }
>
>     private static void foo(C c, C c2) {
>         c.ref = c2;
>     }
> }
>
> On 3 Feb 2018, at 23:31, John Hening wrote:
>
> public static int dontOptimize;
>
> public static int simple() {
>     return dontOptimize;
> }
>
> And the JITed version of that:
>
>
> 0x7fdc75112aa0: mov %eax,-0x14000(%rsp)
> 0x7fdc75112aa7: push %rbp
>
Every native operating system has some form of thread identifier, which can be
acquired by calling e.g. pthread_self(). The thread data structure stored in
the thread’s local area holds more information (e.g. the limits of the current
TLAB) and a reference to the associated Java thread object. For efficiency in
There are a couple of overlapping questions here. I hope I can answer them, if
not necessarily in the right order.
Reads are used rather than writes because reads don’t incur cross-CPU cache
invalidations. If every thread were writing to the same page, cache
invalidation traffic would be sent between cores on each poll.
> On 29 May 2020, at 17:46, Steven Stewart-Gallus wrote:
>
> Okay I have an idea.
> I can't shake the idea you could do fun tricks with thread local executable
> pages.
>
> The theoretically fastest way of safepoint polling is inserting a trap
> instruction.
On what basis are you