On Tue, 18 May 2010 11:39:04 +0100, Daniel Ruoso <dan...@ruoso.com> wrote:

This is the point I was trying to address, actually. Having *only*
explicitly shared variables makes it very cumbersome to write threaded
code, especially because explicitly shared variables have a lot of
restrictions on what they can be (this is from my experience in Perl 5
and SDL, which was what brought me to the message-passing idea).

Well, do not base anything upon the restrictions and limitations of the Perl 5 threads/shared modules. They are broken-by-design in so many ways that they are not a good reference point. That particular restriction--what a :shared var can and cannot hold--is in some cases entirely arbitrary, for no good reason that I can see.

For example, the rule that file handles cannot be assigned to :shared vars is totally arbitrary. This can be demonstrated in two ways:

1) If you pass the fileno of the filehandle to a thread and have it dup(2) a copy, then it can use it concurrently with the originating thread without problems--subject to the obvious locking requirements.

2) I've previously hacked the sources to bypass this restriction by adding SVt_PVGV to the switch in the following function:

SV *
Perl_sharedsv_find(pTHX_ SV *sv)
{
    MAGIC *mg;
    if (SvTYPE(sv) >= SVt_PVMG) {
        switch(SvTYPE(sv)) {
        case SVt_PVAV:
        case SVt_PVHV:
        case SVt_PVGV: // !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
            if ((mg = mg_find(sv, PERL_MAGIC_tied))
                && mg->mg_virtual == &sharedsv_array_vtbl) {
                return ((SV *)mg->mg_ptr);
            }
            break;
        default:
            /* This should work for elements as well as they
             * have scalar magic as well as their element magic */
            if ((mg = mg_find(sv, PERL_MAGIC_shared_scalar))
                && mg->mg_virtual == &sharedsv_scalar_vtbl) {
                return ((SV *)mg->mg_ptr);
            }
            break;
        }
    }
    /* Just for tidyness of API also handle tie objects */
    if (SvROK(sv) && sv_derived_from(sv, "threads::shared::tie")) {
        return (S_sharedsv_from_obj(aTHX_ sv));
    }
    return (NULL);
}

And with that one change, sharing file/directory handles in Perl 5 became possible and worked.
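The dup(2) route from point 1 can be sketched in ordinary C, independently of Perl. This is a minimal illustration only; the function names and the structure are mine, not from any Perl source:

```c
/* Sketch: a thread dup(2)s a file descriptor passed to it by number and
 * writes through its own private copy; both descriptors refer to the
 * same open file. POSIX only; error handling elided for brevity. */
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static void *writer(void *arg) {
    int fd = dup(*(int *)arg);      /* private descriptor, same file */
    const char *msg = "from thread\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return NULL;
}

/* Spawn a thread that writes via a dup of fd, then write from the
 * calling thread too. Joining first serialises the two writes, which
 * stands in for the "obvious locking requirements". */
static void demo(int fd) {
    pthread_t t;
    pthread_create(&t, NULL, writer, &fd);
    pthread_join(t, NULL);
    const char *msg = "from main\n";
    write(fd, msg, strlen(msg));
}
```

Both descriptors share one file table entry, so offsets and appends interleave correctly at the kernel level; only application-level ordering needs a lock.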

The problem is, GVs can hold far more than just those handles. And many of the glob-based modules utilise the other slots in a GV (array, hash, scalar etc.) for storing state, and bless them as objects. At that point--when I tried the change--there was a conflict between the blessing that Shared.XS uses to make sharing work and any other type of blessing. The net result was that whilst the change lifted the restriction upon simple globs, it still didn't work with many of the most useful glob-based modules--IO::Socket::*; HTTP::Daemon; etc. I guess that now the sharing of blessed objects has been made possible, I should try the hack again and see if it would allow those blessed globs to work.

Anyway, the point is that the limitations and restrictions of the Perl 5 implementation of the iThreads model should not be considered fundamental problems with the iThreads model itself. They aren't.

However, interpreters already have to detect closed-over variables in
order to 'lift' them and extend their lifetimes beyond their natural
scope.

Actually, the interpreter might choose to implement the closed-over
variables by keeping the entire associated scope alive while it is still
referenced by another value, i.e.:

 { my $a;
   { my $b = 1;
     { $a = sub { $b++ } } } }

this would happen by having every lexical scope hold a reference
to its outer scope, so that when a scope in the middle exits, but some
coderef was returned keeping it as its lexical outer, the entire scope
chain would be kept.

This means two things:

1) the interpreter doesn't need to detect the closed-over variables, so
even string-eval'ed access to such variables would work (which is, imho,
a good thing)

You'd have to explain further for me to understand why it is necessary to keep whole scopes around:
- in order to make closures accessible from string-eval;
- and why that is desirable?

2) all the values in that lexical scope are also preserved with the
closure, even if they won't be used (which is a bad thing).

Please no! :)

This is essentially the biggest problem with the Perl 5 iThreads implementation. It is the *need* (though I have serious doubts that it is actually a need, even for Perl 5) to CLONE entire scope stacks every time you spawn a thread that makes them costly to use: both because of the time it takes to perform the clone at spawn time, and because of the memory used to keep copies of all that stuff that simply isn't wanted, and in many cases isn't even accessible. AFAIK, going by what I can find about the history of iThreads development, this was only done in Perl 5 in order to provide the Windows fork emulation.

But as a predominantly Windows user I can vouch that that emulation is almost completely useless. It doesn't allow for portability of forking code, because almost every forking program also makes use of other POSIX concepts--like signals, exec, etc.--that Windows does not support, and for which the Perl 5 emulations are entirely inadequate to allow portability. Far better to simply accept that fork, exec, signals etc. do not work on Windows and move on.

Removing the emulation code from the core would simplify everyone's life. And if kernel threads are available within the core, without all the emulation wrappers and sharing restrictions, external modules can provide POSIX-like capabilities without hamstringing the native use of kernel threading.

It doesn't seem it would be any harder to lift them to shared
variable status, moving them out of the thread-local lexical pads and into
the same data-space as process globals and explicitly shared data.

It is still possible to do the detection on the moment of the runtime
lookup, tho...

I realise that CPS gives the ability to keep/maintain entire scope frames alive after their natural end, but:
- at what cost in terms of memory? If Perl 5's thread-cloning is anything to go by: expensive!
- is there any guarantee that *the* Perl interpreter (if there ever is such a thing) will be CPS-based?

Looking at the history of Parrot it seems to impose huge development costs.

Whereas, detecting--at runtime--that a sub references one or more variables from earlier scopes seems almost trivial. And lifting those into the "global scope" seems both relatively trivial and natural.

Better surely that one or two closed-over vars persist (inaccessibly) slightly beyond their strictly required lifetimes, than that whole rafts of unneeded, inaccessible scopes (and all the variables, stack frames, lexpads etc. they contain) persist for the life of entire threads, just because one or two of the vars they contain were closed over?

My currently favoured mechanism for handling shared data, is via
message-passing, but passing references to the shared data, rather than
the data itself. This seems to give the reason-ability, compose-ability
and controlled access of message passing whilst retaining the efficiency
of direct, shared-state mutability.

That was part of my idea too, I wasn't trying to address remote
processes or anything like that, I was considering doing the queues in
shared memory for its efficiency.

There are only two ways I am aware of to implement inter-thread (kernel or user-space) queues (whether wrapped up as message passing or some other abstraction or not):

- shared memory.
- serialised streams: pipes or sockets.

And the latter are both simply shared memory in disguise. The difference is that the shared memory in these cases is *kernel* shared memory. Which, in addition to the overhead of serialising & deserialising transmissions, means that every access has to involve a ring 3->ring 0->ring 3 transition cycle. To see what that does for efficiency, take a look at benchmarks of the SysV shared memory APIs. They're horrible!

Process shared memory queues are almost trivial to implement and extremely efficient. If (as I suggest above) they are constrained to holding references--which are all the same size--then a simple C-style array of memory (the size of the queue) plus two pointers to the head and tail is all that is required. Organised as a simple ring buffer, no blocking is required unless the head meets the tail. With judicious use of CAS, they can even be made lock-free.
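A single-producer/single-consumer version of that ring buffer can be sketched in a few lines of C. All names here are illustrative, and the indices are plain loads and stores; a multi-producer variant would need the CAS (or C11 atomics) mentioned above:

```c
/* Sketch: a fixed array of reference slots plus head and tail indices,
 * organised as a ring. One empty slot distinguishes full from empty. */
#include <stddef.h>

#define QSIZE 8  /* capacity is QSIZE - 1 */

typedef struct {
    void *slots[QSIZE];
    size_t head;  /* next slot to read  (touched only by consumer) */
    size_t tail;  /* next slot to write (touched only by producer) */
} ringq;

/* Producer side: returns 0 on success, -1 if the queue is full. */
static int ringq_push(ringq *q, void *ref) {
    size_t next = (q->tail + 1) % QSIZE;
    if (next == q->head)
        return -1;            /* tail would meet head: full */
    q->slots[q->tail] = ref;
    q->tail = next;           /* atomic release store in a real impl */
    return 0;
}

/* Consumer side: returns the oldest reference, or NULL if empty. */
static void *ringq_pop(ringq *q) {
    if (q->head == q->tail)
        return NULL;          /* head met tail: empty */
    void *ref = q->slots[q->head];
    q->head = (q->head + 1) % QSIZE;
    return ref;
}
```

Because only references cross the queue, a push or pop is a couple of loads and stores in user space; no kernel transition is involved at all.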

Only the code that declares the shared
data, plus any other thread it chooses to send a handle to, has any
knowledge of, and therefore access to, the shared state.

If we can overcome the limitations we have in Perl 5 shared values, I'm
entirely in agreement with the above statement (assuming closed-over
values become shared transparently)

We can! :) And I see no reason why lifting closed-overs to a global-scope pad shouldn't be done. And if the global-scope pad is built on that wait-free hash table implementation I linked to earlier, then another source of locking and syncing bites the dust.

Effectively, allocating a shared entity returns a handle to the underlying
state, and only the holder of that handle can access it. Such handles
would be indirect references and only usable from the thread that creates
them. When a handle is passed as a message to another thread, it is
transformed into a handle usable by the recipient thread during the
transfer, and the old handle becomes invalid. Attempts to use an old handle
after it has been sent result in a runtime exception.

This is exactly what I meant by RemoteValue, RemoteInvocation and
InvocationQueue in my original idea.

I apologise for misconstruing the meaning of "Remote" in those method names.
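For what it's worth, the handle-transfer rule described above can be sketched in C. The struct and function names are mine, purely illustrative; a real runtime would raise an exception on an invalid access rather than return a flag:

```c
/* Sketch: a handle is an indirect reference stamped with its owning
 * thread; sending it mints a fresh handle for the recipient and
 * invalidates the old one. */
#include <stdlib.h>

typedef struct handle {
    void *state;          /* the underlying shared entity */
    unsigned long owner;  /* thread id allowed to use this handle */
    int valid;            /* cleared once the handle has been sent */
} handle;

/* Transfer ownership: invalidate the sender's handle and return a
 * fresh one usable only by `recipient`. */
static handle *handle_send(handle *h, unsigned long recipient) {
    handle *nh = malloc(sizeof *nh);
    nh->state = h->state;
    nh->owner = recipient;
    nh->valid = 1;
    h->valid = 0;         /* any further use of h is an error */
    return nh;
}

/* Access check the runtime would perform on every dereference. */
static int handle_usable(const handle *h, unsigned long tid) {
    return h->valid && h->owner == tid;
}
```

The key property is that the underlying state is never copied: only the small handle struct is reissued, so ownership moves between threads at pointer-assignment cost.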

