On Saturday, 28 March 2015 at 19:16:32 UTC, Russel Winder wrote:
> If you write your software to fit a particular platform, including hardware features, then you are writing an operating system dedicated to that one specific platform and no other.

Yes, and I believe writing dedicated, specialized operating systems for certain network services ("OS as a library") is the future of high-load server programming - at least in domains where you can afford the investment.

> If the idea is to write portable applications then:

"portable" and "fast network service" aren't really best friends :( I have to encounter a single project that even tries to achieve portability of server code..

> The very fact that people are writing in D (or C++, Java, C#,…) means you have accepted some abstraction – otherwise you would be writing in assembly language. Having made the jump to high-level languages why baulk at a small overhead to abstract concurrency and parallelism?

1) "some abstractions" != "any abstractions". One of reasons to use D as opposed to Java or C# is exactly because latter force you into overly expensive abstractions. D in its current state is closer to C++ in this context and this is huge selling point.

2) So far my experience has shown that the overhead is not small at all. It depends on the type of application, of course.

> Making tasks lightweight processes rather than maintaining shared memory, and using channels to move data around rather than using shared memory and locks, makes applications' concurrency and parallelism easier to construct and manage (*).

This comment makes me wonder if we are really talking about the same thing. A concurrency model based on pinned fibers/threads is not the same thing as going back to the 90s shared-memory multi-threading madness.

"lightweight processes" - yes, pinned fibers are very lightweight
"channels to move data around" - message passing between worker threads

At no point have I proposed using shared memory and locks; there is no objection here.
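To make it concrete, here is a minimal sketch of the model I mean, using plain std.concurrency (names and message types are illustrative only): worker threads that own their data and exchange immutable messages, rather than touching shared state.

import std.concurrency;
import std.stdio;

// The worker owns its state; the only way to interact with it is messages.
void worker()
{
    bool running = true;
    while (running)
    {
        receive(
            (string request) { ownerTid.send("handled: " ~ request); },
            (bool stop)      { running = false; }
        );
    }
}

void main()
{
    auto tid = spawn(&worker);
    tid.send("GET /index");
    writeln(receiveOnly!string());  // prints "handled: GET /index"
    tid.send(true);                 // tell the worker to stop
}

In a real service the workers would be pinned and drive fibers on top, but the point stands: data moves through messages, not through locks.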

> If we are prepared to accept overhead for stack and heap management, we must accept the overhead of processor management via thread pooling with work stealing.

Custom allocators exist for the very reason that in certain cases heap-management overhead cannot be accepted. For concurrency primitives the stakes are even higher.
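As an illustration of the kind of thing I mean (a deliberately naive sketch, not production code): a per-request bump-pointer region sidesteps general heap management entirely - allocation is a pointer increment, and releasing everything is a single reset.

struct Region
{
    ubyte[] buffer;  // preallocated backing storage
    size_t  used;    // bump pointer into the buffer

    // Hand out n bytes at a 16-byte-rounded offset; null when the region is full.
    void[] allocate(size_t n)
    {
        immutable offset = (used + 15) & ~cast(size_t) 15;
        if (offset + n > buffer.length)
            return null;
        used = offset + n;
        return buffer[offset .. used];
    }

    // Release every allocation from this region at once (e.g. per request).
    void reset() { used = 0; }
}

unittest
{
    auto region = Region(new ubyte[4096]);
    auto a = region.allocate(100);
    auto b = region.allocate(200);
    assert(a.ptr !is b.ptr);
    region.reset();  // O(1), no per-object bookkeeping
}

A real design would compose such regions with fallback allocators, but this level of control is exactly what heavier runtime abstractions take away.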
