On Mon, Jul 16, 2018 at 10:31 AM, Nathaniel Smith <n...@pobox.com> wrote:
> On Sun, Jul 8, 2018 at 11:27 AM, David Foster <davidf...@gmail.com> wrote:
>> * The Actor model can be used with some effort via the “multiprocessing”
>> module, but it doesn’t seem that streamlined and forces there to be a
>> separate OS process per line of execution, which is relatively expensive.
>
> What do you mean by "the Actor model"? Just shared-nothing
> concurrency? (My understanding is that in academia it means
> shared-nothing + every thread/process/whatever gets an associated
> queue + queues are globally addressable + queues have unbounded
> buffering + every thread/process/whatever is implemented as a loop
> that reads messages from its queue and responds to them, with no
> internal concurrency. I don't know why this particular bundle of
> features is considered special. Lots of people seem to use it in
> looser sense though.)
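
For concreteness, that bundle spelled out on top of multiprocessing
looks roughly like the sketch below. It's only illustrative - the
names and message shapes are made up - but it shows the pattern:
shared nothing, an inbox queue per actor, and each actor being a loop
that reads messages and responds to them.

from multiprocessing import Process, Queue

def actor(inbox, outbox):
    # One actor = one loop that reads messages from its own queue and
    # responds to them, with no internal concurrency.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        outbox.put(("echoed", msg))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    worker = Process(target=actor, args=(inbox, outbox))
    worker.start()
    inbox.put("hello")
    print(outbox.get())   # ('echoed', 'hello')
    inbox.put("stop")
    worker.join()

Writing it out also makes David's complaint concrete: built this way,
every actor costs a whole OS process.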

Shared-nothing concurrency is, of course, the very easiest way to
parallelize. But let's suppose you're trying to create an online
multiplayer game. Since it's a popular genre at the moment, I'll go
for a battle royale game (think PUBG, H1Z1, Fortnite, etc). A hundred
people enter; one leaves. The game has to let those hundred people
interact, which means that all hundred people have to be connected to
the same server. And you have to process everyone's movements,
gunshots, projectiles, etc, etc, etc, fast enough to be able to run a
server "tick" enough times per second - I would say 32 ticks per
second is an absolute minimum, 64 is definitely better. So what
happens when one tick's processing needs more than a single CPU core
can deliver in 1/32 of a second? At that point a shared-nothing model
is either fundamentally impossible, or a meaningless abstraction (if
you interpret it to mean "explicit queues/pipes for everything").
What would the "Actor" model do here?
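
To put a number on it: 32 ticks per second leaves roughly 31 ms of
wall-clock time per tick. The obvious shared-nothing answer is to
shard the players across worker processes every tick, something like
the sketch below (simulate_players and the world dict are made-up
stand-ins). The catch is that the whole world state gets pickled out
to every worker and the results merged back on one core, every single
tick:

from concurrent.futures import ProcessPoolExecutor

def simulate_players(world, player_ids):
    # Stand-in for per-player physics; every worker receives its own
    # pickled copy of the entire world.
    return {pid: world[pid] + 1 for pid in player_ids}

def run_tick(pool, world, chunks):
    parts = pool.map(simulate_players, [world] * len(chunks), chunks)
    merged = {}
    for part in parts:
        merged.update(part)   # merge runs serially, back on one core
    return merged

if __name__ == "__main__":
    world = {pid: 0 for pid in range(100)}               # 100 players
    chunks = [range(i, i + 25) for i in range(0, 100, 25)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        for _ in range(3):                               # three "ticks"
            world = run_tick(pool, world, chunks)
    print(world[0])   # 3

Whether shipping a hundred players' worth of state back and forth
fits inside that 31 ms budget is exactly the problem.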

Ideally, I would like to be able to write my code as a set of
functions, then easily spin them off as separate threads, and have
them magically run across separate CPUs. Unicorns not being a thing,
I'm okay with warping my code a bit around the need for parallelism,
but I'm not sure how best to do that. Assume here that we can't cheat
by getting most of the processing work done with the GIL released
(e.g. in NumPy), and that it actually does require Python-level
parallelism of CPU-heavy work.
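
For reference, the closest thing to that today is probably
concurrent.futures - the sketch below isn't a recommendation, just an
illustration of the trade-off: the same CPU-bound function mapped
over a thread pool stays GIL-bound on roughly one core, while the
process pool genuinely spreads across cores, at the price of pickling
and separate interpreters.

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def crunch(n):
    # Stand-in for the CPU-heavy, pure-Python work.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [2_000_000] * 4
    with ThreadPoolExecutor(max_workers=4) as tp:
        print(sum(tp.map(crunch, jobs)))   # runs, but the GIL serializes it
    with ProcessPoolExecutor(max_workers=4) as pp:
        print(sum(pp.map(crunch, jobs)))   # uses multiple cores, via pickling

That's roughly the kind of warping I mean: the functions have to be
top-level and picklable, and all the data has to cross a process
boundary.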

ChrisA
