> > > > Also another one (https://lkml.org/lkml/2005/11/4/173):
> > > >
> > > > Current AMD Opteron(tm) and Athlon(tm)64 processors provide power
> > > > management mechanisms that independently adjust the performance state
> > > > ("P-state") and power state ("C-state") of the processor[1][2]; these
> > > > state changes can affect a processor core's Time Stamp Counter (TSC),
> > > > which some operating systems may use as a part of their time keeping
> > > > algorithms.
> > > >
> > > > *Most modern operating systems are well aware of the effect of these
> > > > state changes on the TSC and the potential for TSC drift[3] across
> > > > multiple processor cores, and properly account for it.*
>
> I don't think these apply for current systems though (the above posts are
> really old), and I don't even see a definitive answer for old systems --
> unfortunately Björn is not here to ask him.

All my machines have power saving turned off. The Intel boxes run with
hyperthreading disabled.
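On Linux, whether the TSC is actually immune to P-/C-state changes can be read from the CPU feature flags (`constant_tsc`, and `nonstop_tsc` for C-states) in `/proc/cpuinfo`. A minimal sketch of such a check — the class and method names are illustrative, not from this thread:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.LinkedHashSet;
import java.util.Set;

public class CpuFlags {
    // Sketch (assumes the Linux /proc/cpuinfo format): extract the CPU
    // feature flags so callers can check for "constant_tsc"/"nonstop_tsc",
    // which indicate a TSC unaffected by P-/C-state transitions.
    static Set<String> cpuFlags(String cpuinfo) {
        for (String line : cpuinfo.split("\n")) {
            if (line.startsWith("flags")) {
                String list = line.substring(line.indexOf(':') + 1).trim();
                return new LinkedHashSet<>(Arrays.asList(list.split("\\s+")));
            }
        }
        return Collections.emptySet();
    }

    public static void main(String[] args) throws Exception {
        // On a real Linux box one would pass the actual file contents:
        // String cpuinfo = new String(java.nio.file.Files.readAllBytes(
        //     java.nio.file.Paths.get("/proc/cpuinfo")));
        String sample = "flags\t\t: fpu tsc constant_tsc nonstop_tsc";
        System.out.println(cpuFlags(sample).contains("constant_tsc")); // prints true
    }
}
```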
> Tuning the dispatcher is not skewing the benchmarks. The whole idea of
> dispatchers is that you can tune subsystems of your actor system to
> particular load characteristics. The default throughput setting hits a
> particular point in the fairness-throughput tradeoff spectrum, which is
> not the best for batch workloads.

I agree. I just wanted to state that I am not interested in presenting
"Bad Akka", as some of the comments looked like you felt offended ;-).

>>> | Single-machine performance is only interesting if you are after
>>> single points of failure.
>>>
>>> Both things are important: single machine performance AND remote
>>> messaging throughput + latency.
>>
>> Yep, my argument was that without remote you have a spof.
>>
>>> Regarding remoting/failover there are much faster options than
>>> actors/Akka today.
>>
> As for failover, if you are limited to a software implementation, the
> speed of remote failover is bounded by a timeout period; it does not
> matter what software framework (Akka or other) is used. If you have any
> side-channel information, maybe hardware solutions e.g. link failure
> notifications or hardware watchdogs, the game is different -- but that is
> apples to oranges.

Disagree. You can run systems redundantly with total message ordering and
always get the fastest response. This is zero-latency failover. Needs a
decent reliable UDP messaging stack ofc.

>>> I appreciate your vision of making this transparent to the application.
>>> It's a great idea, but I think you are still not there for the very
>>> high-end kind of application, no offence. I have built large
>>> high-performance distributed systems, so I know what I am talking about.
>>
> It is a bit of a strawman. For any kind of *particular* use-case the
> fastest implementation is a custom hand-tuned one designed by an expert,
> and I don't doubt that you can beat Akka in many particular scenarios.
> In fact, for every system there is always one more benchmark that you
> cannot beat. It all depends on how many resources you have to throw at
> your problem (and on maintaining it over time).

Mostly agree. However, there is no excuse for not using the fastest
possible option in basic mechanics like queued message dispatch. I have a
reasonable suspicion this is the case (will have to investigate).

>>> However, regarding concurrent programming, actors can improve
>>> performance and maintainability today; that's why I am currently
>>> investigating/benchmarking local performance only.
>>> I will incorporate your proposals into the test.
>>
> We could in theory play around with the example and fine-tune (I am very
> tempted to try it now), but the problem is that we are preparing a
> release and we cannot really allocate any time to this particular
> benchmark. Play around with the dispatcher settings a bit and see how it
> works out -- try tuning the throughput setting in particular.

As long as the benchmark processes 1 million independent Pi computation
slices concurrently, any tuning would be fair (and welcome). I am not so
sure regarding "batching" optimizations, as this actually reduces the
number of messages processed. However, an adaptive batching dispatcher
could boost a lot (I know this from my network-related work), but at the
cost of increased latency. This test is not about batching but about
processing many tiny units of work, e.g. market data ;-)

regards,
rüdiger

--
>>>>>>>>>> Read the docs: http://akka.io/docs/
>>>>>>>>>> Check the FAQ: http://akka.io/faq/
>>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups
"Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/akka-user. For more options, visit https://groups.google.com/groups/opt_out.
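For readers following the dispatcher-tuning suggestion above: in Akka, the throughput setting lives in the dispatcher's HOCON configuration. A minimal sketch — the dispatcher name `batch-dispatcher` and the value 1000 are illustrative, not taken from this thread:

```hocon
# application.conf -- hypothetical dispatcher biased toward batch throughput
batch-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  # messages an actor may process in one go before the thread moves on to
  # the next actor; higher values trade fairness/latency for raw throughput
  throughput = 1000
}
```

An actor is then bound to it with `Props[Worker].withDispatcher("batch-dispatcher")`, where `Worker` stands for whatever actor runs the benchmark's Pi slices.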
