On 28.03.2015 at 13:32, "Ola Fosheim Grøstad" <> wrote:
> On Saturday, 28 March 2015 at 11:52:34 UTC, Sönke Ludwig wrote:
>> You can access TLS from an event callback just as easily as from a fiber.

> Yes, but it is much easier to verify that you don't hold onto references to TLS if you get rid of arbitrary call stacks when moving to a new thread.

It's not mainly about holding references to TLS data, but about program correctness: you store something in a TLS variable, and the next time you read it, it contains something different. This is an issue not only for your own code, but also for external libraries that you have no control over or even insight into.
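The hazard is language-agnostic, so here is a minimal Python sketch of it (a generator stands in for a fiber, `threading.local` stands in for TLS; the names are illustrative, not any real scheduler's API). The first half of the task writes to TLS on one thread, the second half resumes on another thread and no longer sees the value:

```python
import threading

tls = threading.local()   # stand-in for thread-local storage
results = []

def task():
    # First half of the task: store request state in TLS.
    tls.request_id = 42
    yield                  # suspension point (e.g. awaiting I/O)
    # Second half: read the state back. This is only safe if we
    # resumed on the same thread that ran the first half.
    results.append(getattr(tls, "request_id", None))

gen = task()

t1 = threading.Thread(target=lambda: next(gen))   # runs the first half
t1.start(); t1.join()

def finish():
    try:
        next(gen)                                 # resumes on another thread
    except StopIteration:
        pass

t2 = threading.Thread(target=finish)
t2.start(); t2.join()

print(results)   # [None] -- the TLS value written on t1 is invisible on t2
```

Any library that caches state in TLS between the two halves breaks the same way, whether the suspension point is a fiber yield or a callback boundary that lands on a different thread.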

Apart from that, what is stopping you from holding such references implicitly in a callback closure?

>> And why can't you do the same with fibers and schedule the fibers

> You could, but that's even more work, since you then need to encode progress in a way the scheduler can use to estimate when the task can complete and when it must complete.

The fiber part is purely additive. Anything you can do to schedule events in an event-based programming model, you can do in a fiber-backed one, too. You just have the additional state of the fiber that gets carried around, nothing more.
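One way to see this is a toy run queue sketch in Python (all names here, `schedule`, `drive`, etc., are made up for the example, assuming nothing beyond a FIFO of callbacks). The same scheduler runs both a callback chain and a generator-backed "fiber"; the only extra thing the fiber variant carries is the suspended generator:

```python
from collections import deque

ready = deque()          # toy run queue of zero-argument callbacks
log = []

def schedule(cb):
    ready.append(cb)

def run():
    while ready:
        ready.popleft()()

# Plain callback style: each step schedules the next one explicitly.
def step2():
    log.append("cb-2")

def step1():
    log.append("cb-1")
    schedule(step2)

# Fiber-backed style on the *same* scheduler; the only addition is the
# suspended generator (the fiber state) carried between steps.
def fiber():
    log.append("fib-1")
    yield                # suspension point, e.g. waiting for I/O
    log.append("fib-2")

def drive(gen):
    try:
        next(gen)
        schedule(lambda: drive(gen))   # resume the fiber on a later turn
    except StopIteration:
        pass

schedule(step1)
schedule(lambda: drive(fiber()))
run()
print(log)               # ['cb-1', 'fib-1', 'cb-2', 'fib-2']
```

Both styles interleave through the identical run queue; nothing the callback version can express is lost.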

>> It's you who brought up the randomization argument. Tasks are assigned to a more or less random thread that is currently in the scheduling phase, so that your constructed situation is simply *highly* unlikely.

> I don't understand how randomization can help you here.

Your constructed case will simply not happen in practice.

>> They *do* get balanced over threads, just like requests get balanced over instances by the load balancer, even if requests

> A good load balancer measures back-pressure (load information from the instance) and fires up new instances.

That depends a lot on your infrastructure, but it is irrelevant to the point: tasks get distributed among threads. With a sufficiently large number of tasks (as for a busy web service), you'll have a high load on all threads, so it simply doesn't matter whether you move tasks between threads.
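The "many tasks even out" claim is just the law of large numbers; a quick Python simulation (purely illustrative numbers, 8 worker threads and 100,000 tasks assigned uniformly at random) shows how small the per-thread imbalance gets:

```python
import random

random.seed(1)                        # fixed seed for reproducibility
NTHREADS, NTASKS = 8, 100_000
load = [0] * NTHREADS

# Assign each incoming task to a random worker thread,
# as a random-assignment scheduler would.
for _ in range(NTASKS):
    load[random.randrange(NTHREADS)] += 1

spread = max(load) / min(load)        # busiest thread vs. idlest thread
print(load, round(spread, 3))         # loads end up within a few percent
```

With that many tasks, the busiest and idlest threads differ by only a few percent, so all threads are saturated regardless of whether individual tasks migrate.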

If you have a low number of requests, you may be able to avoid some bad corner cases, but only if you did something stupid in the first place, like mixing long CPU computations without any yield() calls with I/O processing tasks in the same thread (since you seem like a smart person, I'll leave it up to you to construct cases where moving between threads doesn't help either).
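To make the yield() point concrete, here is a hypothetical single-threaded run queue in Python (again with invented names, not any real framework's API) where a long CPU computation is split by explicit yield points, so queued I/O tasks get a turn between chunks instead of being starved:

```python
from collections import deque

ready = deque()
log = []

def io_task(name):
    log.append(name)

def cpu_task_chunked(chunks):
    # Long computation split by explicit yield points; without them it
    # would run to completion and starve everything queued behind it.
    for i in range(chunks):
        log.append(f"cpu{i}")
        yield

def drive(gen):
    try:
        next(gen)
        ready.append(lambda: drive(gen))   # requeue after each chunk
    except StopIteration:
        pass

ready.append(lambda: drive(cpu_task_chunked(3)))
ready.append(lambda: io_task("io-a"))
ready.append(lambda: io_task("io-b"))
while ready:
    ready.popleft()()
print(log)   # ['cpu0', 'io-a', 'io-b', 'cpu1', 'cpu2']
```

Remove the yields and the order becomes cpu0, cpu1, cpu2, io-a, io-b: the I/O tasks wait for the entire computation, which is the corner case described above.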

>> are not moved between instances. But IMO it doesn't make sense to go further with this argument without some actual benchmarks. It's not at all as clear as you'd like what the effects on overall performance and on average/~maximum latency are in practice for different applications.

> This is something you can do on paper. A language feature should support a wide set of applications.

*I* can't do that on paper. I invite you to do it and then we can verify your claims with actual measurements. If you don't, this is nothing more than hot air.
