Hi,

From my observations there appear to be some race conditions in the HotSpot 
compilations that can affect hot/cold path decisions during warmup. If the 
race goes in your favor, all is well; if not…. Also, the memory layout of the 
JVM will have some impact on which optimizations are applied: if you're 
running with low RAM you'll get different optimizations than if you're running 
with high RAM. I'd suggest you run your benches in a highly controlled 
environment to start with, and then afterwards you can experiment to 
understand which environmental conditions your bench may be sensitive to.
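For example, a first step could be to make compilation more deterministic and 
observable while you experiment. These are standard HotSpot flags; the fixed 
heap size addresses the low/high RAM point, and the jar name/classpath is a 
placeholder for your own build (the main class is the one from the linked 
benchmark):

```shell
# Sketch, not a definitive recipe; adjust to your own build/setup.
# -XX:+PrintCompilation logs JIT compilation events, making hot/cold
#   path decisions visible across runs.
# -Xbatch compiles on the foreground thread, removing one source of
#   warmup nondeterminism.
# -XX:-TieredCompilation skips the tiered levels so each run ends up
#   in the final compiler the same way.
# -Xms/-Xmx pinned equal fixes the heap size, so memory layout doesn't
#   vary between runs.
java -Xbatch \
     -XX:-TieredCompilation \
     -XX:+PrintCompilation \
     -Xms4g -Xmx4g \
     -cp examples.jar actor.proto.examples.inprocessbenchmark.InProcessBenchmark
```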

Kind regards,
Kirk


> On Aug 1, 2017, at 7:26 PM, Roger Alsing <[email protected]> wrote:
> 
> Some context: I'm building an actor framework, similar to Akka but 
> polyglot/cross-platform.
> For each platform we have the same benchmarks, where one of them is an in 
> process ping-pong benchmark.
> 
> On .NET and Go, we can spin up pairs of ping-pong actors equal to the number 
> of cores in the CPU and no matter if we spin up more pairs, the total 
> throughput remains roughly the same.
> But on the JVM, if we do this, I can see how we max out at 100% CPU, as 
> expected; yet if I instead spin up a lot more pairs, e.g. 20 * core_count, 
> the total throughput triples.
> 
> I suspect this is due to the system running in a more steady-state kind of 
> fashion in the latter case: mailboxes are never completely drained, and actors 
> don't have to switch between processing and idle.
> Would this be fair to assume?
> This is the reason why I believe this is a question for this specific forum.
> 
> Now to the real question: it's roughly a 60-40 split when the benchmark is 
> started. It runs steadily at 250 million msg/sec about 60% of the time, and 
> the other times it runs at 350 million msg/sec.
> The reason I find this strange is that the behavior is stable over time: if I 
> don't stop the benchmark, it will continue at the same pace.
> 
> If anyone is bored and would like to try it out, the repo is here:
> https://github.com/AsynkronIT/protoactor-kotlin
> and the actual benchmark here: 
> https://github.com/AsynkronIT/protoactor-kotlin/blob/master/examples/src/main/kotlin/actor/proto/examples/inprocessbenchmark/InProcessBenchmark.kt
> 
> This behavior is also consistent with or without various VM arguments.
> 
> I'm very interested to hear if anyone has any theories what could cause this 
> behavior.
> 
> One factor that seems to be involved is GC, but not in the obvious way; 
> rather the reverse.
> In the beginning, when the framework allocated more memory, it more often ran 
> at the high speed.
> And the fewer allocations I've managed to make without touching the hot path, 
> the more the benchmark has started to toggle between these two numbers.
> 
> Thoughts?
> 

-- 
You received this message because you are subscribed to the Google Groups 
"mechanical-sympathy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
