From: "Paul Rubin" <no.email@nospam.invalid> I don't think Python threads are the answer. You want a separate interpreter per user, which is annoying to do with CPython. Do you have concrete performance expectations about the sharing of data between interpreter sessions? Those old mainframes were very slow compared to today's machines, even counting Python's interpretation overhead.
Without a separate context per thread, it sounds like Python threads are moderately useless except in special circumstances, and this certainly isn't one of those circumstances.
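As I understand it, the GIL is the main reason: only one thread runs Python bytecode at a time, so CPU-bound work gains nothing from extra threads. A quick sketch of the effect (burn() is just a made-up busy-loop workload for illustration):

    import threading, time

    def burn(n=10_000_000):
        # hypothetical CPU-bound busy loop, purely for illustration
        while n:
            n -= 1

    # the same work done twice sequentially...
    t0 = time.perf_counter()
    burn()
    burn()
    seq = time.perf_counter() - t0

    # ...and done in two threads at once
    t0 = time.perf_counter()
    threads = [threading.Thread(target=burn) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    par = time.perf_counter() - t0

    # under the GIL the threaded run is about as slow as the sequential
    # one (sometimes slower, from lock contention), despite two cores
    print(f"sequential: {seq:.2f}s  threaded: {par:.2f}s")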
As for performance, especially with shared data, it is an interesting question, but not one easily answered at the moment. The VM does a lot of interesting JIT-type stuff, so it gets performance that easily exceeds the most recent hardware implementation of the machine, which was made only a few years ago. Yes, a raw PC can do things faster (obviously). But people like the mainframe environment for the same reason people like Python over C: it does what they want, more easily. They are willing to pay some performance for this -- but only some. They still have network response times to meet, and they need to get the month-end batch processing done before midnight.
In our case, shared data is no slower to access than process-unique data. Well, maybe a little slower if you have to acquire and release a lock, since that takes only a few nanoseconds or so. So I'd prefer that shared-data access in Python not be a lot slower than non-shared access. My impression is that all of a CPython process's data lives in one common heap, so a lock (the GIL) is required to access anything; I'd think that makes all data access equally slow, or equally fast.
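For comparison, the per-lock cost in CPython is easy to measure yourself -- a rough sketch using only the standard library (counter and locked_increment are just illustrative names):

    import threading, timeit

    lock = threading.Lock()
    counter = [0]

    def locked_increment():
        # one acquire/release pair plus a trivial shared-data update
        with lock:
            counter[0] += 1

    n = 1_000_000
    total = timeit.timeit(locked_increment, number=n)
    print(f"{total / n * 1e9:.0f} ns per lock-protected increment")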
I haven't found much data on multiple separate interpreters within a single process space.
What exactly do you get that is unique to each interpreter? Separate heaps? Separate GC locks? Separate process threads? I would guess maybe separate builtins.
Do multiple interpreters make it harder to share data between the interpreters?
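Partial answers can be had by experiment: CPython does expose sub-interpreters to Python code through a private module. A minimal sketch, assuming CPython 3.8+ with the provisional _xxsubinterpreters module (the name and API are unofficial and may change):

    # each sub-interpreter gets its own imported modules, builtins, and
    # sys state, but in current CPython they all share one GIL and one
    # process heap
    import _xxsubinterpreters as interpreters

    interp = interpreters.create()
    interpreters.run_string(interp, "x = 40 + 2; print('inside:', x)")

    # 'x' is not visible out here: objects are not shared by reference
    # across interpreters, so data moves as strings (or channels, in
    # newer builds)
    print("interpreters alive:", interpreters.list_all())
    interpreters.destroy(interp)

As far as I can tell, that gives separate module state and builtins per interpreter, but a shared heap and a shared GIL -- so sharing data directly between interpreters is not just harder, it is deliberately not offered.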
Loren