On Friday, 27 March 2015 at 16:06:55 UTC, Dicebot wrote:
> On Friday, 27 March 2015 at 15:28:31 UTC, Ola Fosheim Grøstad wrote:
>> No... E.g.:
>>
>> On the same thread:
>> 1. fiber A receives request and queries DB (async)
>> 2. fiber B computes for 1 second
>> 3. fiber A sends response.
>>
>> Latency: 1 second even if all the other threads are free.

> This is a problem of having a blocking 1-second computation in the same fiber pool as the request handlers -> broken application design. Hiding that issue by moving fibers between threads just makes things worse.

Not a broken design. If I have to run multiple servers just to handle an image upload or generate a PDF, then you are driving up the cost of the project, and developers would be better off with a different platform.

You can construct more complicated setups where several 200 ms computations cause the same latency while the CPU is 90% idle. That is simply not good enough; if fibers carry this cost, it is better to just use an event-driven design.
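
To make the scenario above concrete, here is a minimal sketch using core.thread.Fiber with a hand-rolled, single-threaded scheduler. Everything in it (the fiber bodies, the loop count standing in for one second of computation, the resume order) is an illustrative assumption, not code from this thread. Because both fibers live on the one thread running main, fiber A's response cannot go out until fiber B's computation finishes, however idle the other cores are.

import core.thread : Fiber;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

void main()
{
    auto sw = StopWatch(AutoStart.yes);

    // Fiber A: its async DB query is assumed to have completed already;
    // it only needs one more resume to send the response.
    auto fiberA = new Fiber({
        Fiber.yield();                    // parked, "waiting" on the DB
        writeln("A: response sent after ", sw.peek);
    });

    // Fiber B: a CPU-bound job (think PDF generation) that never yields.
    auto fiberB = new Fiber({
        long x = 0;
        foreach (i; 0 .. 300_000_000)     // roughly a second of busy work
            x += i;
        writeln("B: computation (", x, ") done after ", sw.peek);
    });

    // A naive scheduler pinned to this one thread:
    fiberA.call();   // A starts and parks on its "DB query"
    fiberB.call();   // B hogs the thread for the entire computation
    fiberA.call();   // only now can A send its response
}

A scheduler that could hand the already-runnable fiber A to an idle thread would send the response almost immediately; keeping it pinned means the latency is bounded by whatever CPU-bound work happens to share the thread.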
