Carl Friedberg <[EMAIL PROTECTED]> writes:
> I've always fought hard to get the amount of buffer space in mailboxes up as
> high as possible, both the size of each individual mailbox, and the number
> of them. ...
> I would just say, if it is possible to let this be "configurable" by
> calling some kind of {begin} or whatever before using any pipes, that
> would be nice. Thus the defaults would be conservative, assuring (during
> the build, test, and install) that perl would behave very nicely. But for
> jerks like me, it would be nice to allow some tailoring where I can jack
> various items up a bit if I find that I'm hitting mwaits (if I could even
> figure that out). Does this make any sense? It's late enough that I'm not
> sure it's at all helpful.
Well, it seemed to me that this is the kind of thing that is most naturally
configurable via a logical name... and if you want to change it "on the fly"
from inside Perl, you just set $ENV{'PERL_MBX_SIZE'}.
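For example (a minimal sketch; 8192 is just an illustrative value, and the
DCL command could be anything):

    $ENV{'PERL_MBX_SIZE'} = 8192;  # ask for 8K mailbox buffers from here on

    open(PIPE, "directory |") or die "can't open pipe: $!";
    print while <PIPE>;
    close(PIPE);

Since the value is consulted when each mailbox is created, pipes opened
after the assignment should pick up the new size.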
The code I'm testing now sets a lower limit on mailbox size of 128 and takes
the maximum from the sysgen parameter MAXBUF. If you don't set PERL_MBX_SIZE,
you get the stdio.h value of BUFSIZ (i.e., the default behavior is the same
as before).
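In rough Perl pseudocode (not the actual C, with MAXBUF and BUFSIZ standing
in for the sysgen parameter and the stdio.h constant), the size selection
looks like:

    sub mbx_size {
        my ($maxbuf, $bufsiz) = @_;
        my $size = $ENV{'PERL_MBX_SIZE'};
        return $bufsiz unless defined $size;   # unset: old default behavior
        $size = 128     if $size < 128;        # floor of 128
        $size = $maxbuf if $size > $maxbuf;    # ceiling from sysgen MAXBUF
        return $size;
    }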
On the issue of "print error messages/don't print error messages", I think
it can be resolved with a 'use vmsish' option... how does "messages" sound?
(With a "message" synonym for us forgetful folks.)
So if you do:
    use vmsish 'messages';
    exit 44;
you get the usual "%SYSTEM-F-ABORT" message, but otherwise the messages
are turned off. We'll need some minor changes in the t/vmsish.t tests, but
it's pretty straightforward.
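Like the existing vmsish options, I'd expect this one to scope lexically;
assuming the usual use/no pairing carries over to the new option, you could
do something like:

    use vmsish 'messages';      # exit messages on for this file
    {
        no vmsish 'messages';   # ...but off again inside this block
        exit 44;                # exits silently from here
    }
    exit 44;                    # out here, prints %SYSTEM-F-ABORT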
By the way, the flag that controls whether the message is printed is
bit 27 of the status code... so when I get a completion status from a
subprocess, I'm masking off the control bits 27-31 so that routines
that test the status won't have to.
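From the Perl side the effect amounts to this (a sketch using the bit
layout described above; the real masking happens down in the C code):

    my $raw    = 0x08000000 | 44;    # bit 27 set on an SS$_ABORT status
    my $status = $raw & 0x07FFFFFF;  # bits 27-31 cleared: plain 44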
> Everything else you've suggested sounds terrific. I love doing asynchronous
> I/O (using ASTs) but there are times when you expose OS bugs, because that
> is not the most-stressed part of VMS. It is definitely Doing The Right
> Thing, however, and I'm completely in favor of it.
Well, I may have to break down and do a "threads" build to see if the
ASTs affect the threading code adversely... I'm *sure* that Dan has a bit
of concern about this particular issue.
I've dealt with similar "threads+ASTs" code (my Crinoid server) and generally
it works out okay... about the only place they interact is via the
mailboxes, and that helps a lot.
The process_completion/my_pclose/waitpid interactions are probably the
trickiest and the most likely to cause a problem with threading. The
*old* code had timing holes that I removed; the new code shouldn't be
noticeably less reliable. ("Noticeably" because you might have to run
hundreds of subprocesses very quickly to hit a timing hole even with
the old code.)
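For context, the two user-level routes into that code look something like
this (the DCL command is just an example):

    # pclose-style: close() waits for the child and sets $?
    my $pid = open(KID, "directory |") or die "pipe open failed: $!";
    1 while <KID>;                 # drain the pipe
    close(KID);
    # ...or call waitpid($pid, 0) to collect the status explicitly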
--
Drexel University \V --Chuck Lane
----------------->--------*------------<[EMAIL PROTECTED]
(215) 895-1545 / \ Particle Physics [EMAIL PROTECTED]
FAX: (215) 895-5934 /~~~~~~~~~~~ [EMAIL PROTECTED]