On Sunday, 24 September 2017 at 08:08:35 UTC, Petar Kirov [ZombineDev] wrote:
On Saturday, 23 September 2017 at 22:07:58 UTC, bitwise wrote:

[...]

Can you give a bit more details? What kind of architectures do you mean (hardware, software, ..)? What was your use case? IO-multiplexing, coarse-grained high-level cooperative multi-tasking, or range-like data processing state-machines?

Probably not without embarrassing myself ;)
Even now, server tech isn't really my domain, and it was a long time ago.

The effort was mainly a learning experience to better understand what's going on under the hood when working on multiplayer games, web-apps, etc..

All of my implementations were built around the same code though, except for request scheduling, where I first tried some kind of state machine and then switched to fibers, which totally destroyed its already mediocre performance.

Very interesting, I would like to hear more about your approach. I have kind of the opposite experience with C# v7.1. When writing synchronous pull-style code I constantly miss the power of D's ranges

I'm not talking about using IEnumerators as generators. I mean using them for user-level threads. So basically, an IEnumerator method being "ticked" regularly by a scheduler/engine could yield a scalar or predicate that specifies when execution should resume, or it could yield another IEnumerator method that the scheduler would run to completion before resuming the first one.

Simplified pseudo-example:

`
IEnumerator Attack(enemy) {
    // attack until enemy dead or escaped
    // maybe the enemy parameter should be 'scope' XD
}

IEnumerator Idle() {
    enemy = null;

    while(true) {
        position += wanderOffset();

        if((enemy = getEnemiesInRange()) != null) {
            yield Attack(enemy); // run nested coroutine to completion
            enemy = null;
            yield 2.seconds;     // suspend for 2 seconds
        }

        yield; // resume next tick
    }
}

engine.runCoroutine(Idle());

while(true) { // 60 FPS+
    engine.tickCoroutines();
    // ...
    engine.draw();
}
`

I think it's easy to imagine how using fibers would be much more expensive in the above example as the number of AI characters and the complexity of their behavior scale.
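To make the "ticked coroutine" idea above concrete, here is a minimal sketch using Python generators in place of C# IEnumerator methods. The `Engine` class and its `run_coroutine`/`tick` methods are hypothetical names chosen to mirror the pseudo-example, not any real engine API: yielding nothing resumes next tick, yielding an integer sleeps that many ticks, and yielding another generator runs it to completion first.

```python
class Engine:
    def __init__(self):
        # each task is [stack_of_generators, ticks_left_to_wait]
        self.tasks = []

    def run_coroutine(self, gen):
        self.tasks.append([[gen], 0])

    def tick(self):
        still_alive = []
        for task in self.tasks:
            stack, wait = task
            if wait > 0:                  # sleeping: just count down
                task[1] = wait - 1
                still_alive.append(task)
                continue
            try:
                result = next(stack[-1])  # resume innermost coroutine
            except StopIteration:
                stack.pop()               # finished: resume caller next tick
                if stack:
                    still_alive.append(task)
                continue
            if isinstance(result, int):   # yield N -> sleep N ticks
                task[1] = result
            elif result is not None:      # yield generator -> nest it
                stack.append(result)
            still_alive.append(task)
        self.tasks = still_alive

# Tiny usage trace: an "attack" sub-coroutine nested inside an "idle" loop.
log = []

def attack():
    log.append("attack-1"); yield
    log.append("attack-2"); yield

def idle():
    log.append("wander"); yield attack()
    log.append("rest");   yield 2         # sleep two ticks
    log.append("wander-again"); yield

engine = Engine()
engine.run_coroutine(idle())
for _ in range(8):
    engine.tick()
```

Note that a task here is just a list of suspended generators plus a counter, which is why this approach stays cheap as the number of AI characters grows: there is no per-task stack to allocate or switch, unlike with fibers.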

About push-style asynchronous code, I find async/await unusable for anything more than the absolute basics. For 95% of my code in this area I use Rx.NET (http://reactivex.io/). They probably use async/await somewhere down in their implementation, but I find that it scales very poorly as complexity increases. My time "lost" by going from Task&lt;T&gt; to IObservable&lt;T&gt; (yes, even for single items) is immediately regained by using a few powerful operators like Buffer, CombineLatest and Switch.
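For readers unfamiliar with those operators, here is a rough from-scratch sketch of the CombineLatest semantics for two push-based sources (plain Python, not the actual Rx.NET/RxPY API): once every source has produced at least one value, each new value emits a tuple pairing it with the latest value from the other source.

```python
class CombineLatest:
    def __init__(self, on_emit):
        self.latest = [None, None]    # most recent value per source
        self.seen = [False, False]    # has each source emitted yet?
        self.on_emit = on_emit

    def push(self, index, value):
        self.latest[index] = value
        self.seen[index] = True
        if all(self.seen):            # emit only once both sides have a value
            self.on_emit(tuple(self.latest))

out = []
combined = CombineLatest(out.append)
combined.push(0, "mouse-pos")   # only source 0 seen: nothing emitted yet
combined.push(1, 1)             # both seen: emits ("mouse-pos", 1)
combined.push(1, 2)             # emits ("mouse-pos", 2)
combined.push(0, "click")       # emits ("click", 2)
```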

I'm not sure I understand exactly what you're talking about here, but I know that basic applications get A LOT easier with async/await. For GUI-based applications with network connectivity, the whole concept of threading basically disappears. No need for locks, no need to dispatch UI updates to the main thread. Just code naturally and expect functions that make network calls to take as long as they need without ever blocking the UI thread.
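A sketch of that "never block the UI thread" pattern, using Python's asyncio rather than C# (and `fetch_user` is a made-up stand-in for a real network call): the UI loop keeps producing frames while the slow call is awaited, with no explicit threads or locks anywhere.

```python
import asyncio

async def fetch_user(name):
    await asyncio.sleep(0.05)      # pretend network latency; yields to the event loop
    return f"user:{name}"

async def ui_loop(frames, log):
    for i in range(frames):
        log.append(f"frame {i}")   # UI stays responsive during the fetch
        await asyncio.sleep(0.01)

async def main():
    log = []
    # run the "network call" and the UI loop concurrently on one thread
    result, _ = await asyncio.gather(fetch_user("bitwise"), ui_loop(5, log))
    return result, log

result, frames = asyncio.run(main())
```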

As far as using async/await at scale, I would blame C# and its standard library before the underlying concept itself. I don't think there's anything inherently slow about an async/await type framework, but if you look at Microsoft's reference source for C# online, you can see that it's designed with productivity in mind, and that performance takes a back seat. I'm sure the situation in C++ will be very different if stackless resumables are accepted.
