On Monday, 4 August 2014 at 16:38:24 UTC, Russel Winder via Digitalmars-d-learn wrote:
The modern default approach is to have the number of "worker" threads equal or close to the number of CPU cores, and to handle internal scheduling manually via fibers or a similar solution.

I have no current data, but it used to be that for a single system it was best to have one or two more threads than the number of cores. Processor architectures and caching have changed, so new data is required. I am sure someone somewhere has it though.

This is why I added the "or close" remark :) The exact number almost always depends on the exact deployment layout - i.e. what other processes are running on the system, how hardware interrupts are handled and so on. It is something to decide for each specific application. Sometimes it is even best to have _fewer_ worker threads than CPU cores, for example if CPU affinity is to be used for some other background service.
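As a rough illustration of sizing a pool from the core count (in Python rather than D, and with all names - handle_request, N_RESERVED - purely hypothetical), one might do something like:

```python
# Sketch: size a worker pool near the CPU core count, optionally
# reserving a core for some other pinned background service.
import os
from concurrent.futures import ThreadPoolExecutor

N_RESERVED = 1  # cores left free for other services (assumption, tune per deployment)

def handle_request(n):
    # placeholder for real per-request work
    return n * n

cores = os.cpu_count() or 1
workers = max(1, cores - N_RESERVED)

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)
```

The real answer still has to come from measuring on the actual deployment, as said above.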

If you are totally new to the topic of concurrent services, getting familiar with http://en.wikipedia.org/wiki/C10k_problem may be useful :)

I thought they'd moved on to the 100k problem.

True, C10K is a solved problem, but it is the best thing to start with to understand why people even bother with all the concurrency complexity - all the details can be a bit overwhelming if one starts completely from scratch.

There is an issue here: I/O-bound concurrency and CPU-bound concurrency/parallelism are very different beasties. Clearly tools and techniques can apply to either or both.

Actually, with the CSP / actor model one can simply treat a long-running CPU computation as a form of I/O and apply the same asynchronous design techniques. For example, have a separate dedicated thread running the computation and send input to it via message passing - the response message will act much like an I/O notification from the OS.
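The pattern above can be sketched in a few lines (Python rather than D, using plain queues in place of std.concurrency; all names here are illustrative):

```python
# A dedicated thread runs the CPU-bound work; its result comes back as a
# message, analogous to an I/O completion notification from the OS.
import threading
import queue

requests = queue.Queue()   # input messages to the compute thread
responses = queue.Queue()  # "I/O-style" completion notifications

def compute_worker():
    while True:
        msg = requests.get()
        if msg is None:            # shutdown sentinel
            break
        # the long-running CPU computation stands in for "I/O"
        responses.put(sum(i * i for i in range(msg)))

t = threading.Thread(target=compute_worker)
t.start()

requests.put(1000)                 # send input via message passing
result = responses.get()           # block until the "notification" arrives
requests.put(None)                 # ask the worker to shut down
t.join()

print(result)
```

In a real event-driven service the `responses.get()` would of course be folded into the same event loop that handles socket readiness, so the computation looks exactly like any other asynchronous completion.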

Choosing the optimal concurrency architecture for an application is probably an even harder problem than naming identifiers.
