On Fri, Aug 11, 2017 at 2:22 PM Chris Hopkins <cbehopk...@gmail.com> wrote:
> .... The microsecond or so of cost you see I understood was *not* due to
> there being thousands of operations needed to run the channel, but the
> latency added by the stall, and scheduler overhead.
A common pitfall in benchmarks is to push a single operation through the
system, so that one operation pays the full context-switching cost by
itself. But channels pipeline. If you run a million operations through,
the switching overhead amortizes across all of them, provided your system
is properly asynchronous and tuned.
I think most message-passing languages provide some kind of atomics for
tracking counters and similar shared state, rather than resorting to
sending microscopic messages around all the time.