On Sun, Jun 17, 2001 at 10:32:16PM -0400, Ronald J. Yacketta wrote:
> Folks,
>
> Can someone shed some light on the ability to kick off simultaneous process
> at once within perl. I would like to kick off two system level commands at
> the same time, as I mentioned before I am populating 2 arrays with filenames
> to be processed. I would like to kick them both off at the same time.. any
> help would be greatly appreciated
>
> Ron
Ron,
the code for implementing what you are talking about is non-trivial. I am new
to Perl, but I will try to present the various issues involved. This is my
first shot at such code in Perl, so I would appreciate it if the resident
gurus would point out any flaws.
The code below is a routine which exec's each of its arguments and collects
the exit code and output of every command. The arguments are expected to be
references to arrays whose first element is the command to run; after the
function returns, the 2nd and 3rd elements of each array will contain the
exit code and the output of that command. Remember that the output could be
an error message instead of the program output you want, so check the exit
code before using the output. The output is a single scalar, so split it at
'\n' if you need individual lines.
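Here is a minimal sketch of how I intend the routine to be called (the two
commands are just placeholder examples, and element 1 ends up holding the
raw wait status, i.e. the same form as $?):

my @task1 = ("ls -l /tmp");
my @task2 = ("df -k");
execute_tasks (\@task1, \@task2);
# shift the wait status right by 8 to get the real exit code
printf ("task1 exited with %d\n", $task1[1] >> 8);
print "$_\n" foreach (split (/\n/, $task2[2]));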
Some of the issues involved here are:
1) You need some form of IPC to pass the output of the child back to the
parent. I have chosen pipes.
2) We need to harvest the exit codes of our children. The signal handler for
SIGCHLD only increments $atomic_var; this is the standard way of implementing
signal handlers, and anything more invites problems if you are not careful.
In spite of this simple handler, the routine blocks delivery of SIGCHLD while
updating $atomic_var, so that there are no race conditions. I hope there are
none :-). Is there any way to make sure that updates to $atomic_var are
really atomic? I think not. Can anybody shed further light on this? A
stripped-down sketch of just this signal bookkeeping appears after point 4.
3) Originally I had made the read descriptors non-blocking, but this seemed
to make read misbehave in some way. Could anybody help? My suspicion is that
a non-blocking sysread returns undef (with $! set to EAGAIN) when there is
simply nothing to read yet, and since undef compares numerically equal to 0,
my EOF test fires too early; see the drain_pipe sketch after point 4.
Non-blocking descriptors are advantageous in that you will not be stuck in a
call to read, which could take a long time if the child attached to the other
end of the pipe is a long-running process. Instead the loop can read as much
as is available and go back to blocking in select.
4) Originally I had coded the select loop like this:
do {
    select ...;
} while ($read_descriptors);
Even after clearing all the descriptors that had been set in the fork loop,
$read_descriptors does not evaluate to false. I believe that is because vec
only sets the bits back to 0: the scalar itself remains a non-empty string of
(mostly NUL) bytes, and any non-empty string other than "0" is true in Perl.
So I put in a counter instead; the unpack sketch below shows another way to
test the vector.
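To make point 2) concrete in isolation, here is just the signal bookkeeping
pulled out of the routine below (the names $got_chld, $block and $old are
made up for this sketch):

use strict;
use warnings;
use POSIX qw (:sys_wait_h :signal_h);

my $got_chld = 0;
# the handler does nothing except bump a counter
$SIG{CHLD} = sub { $got_chld++; };
my $block = POSIX::SigSet->new (SIGCHLD);
my $old   = POSIX::SigSet->new ();
# later, in the main loop: block SIGCHLD while we reap and reset the counter,
# so the handler cannot fire halfway through the update
if ($got_chld) {
    sigprocmask (SIG_BLOCK, $block, $old) || die ("sigprocmask: $!\n");
    while ((my $pid = waitpid (-1, WNOHANG)) > 0) {
        # child $pid has exited; its wait status is in $?
    }
    $got_chld = 0;
    sigprocmask (SIG_SETMASK, $old) || die ("sigprocmask: $!\n");
}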
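For point 3), this is how I think the non-blocking read would have to be
handled; it is only a sketch with made-up names ($readend, drain_pipe) and I
have not wired it into the routine below. The key is telling EAGAIN ("nothing
more right now") apart from EOF and from a real error:

use strict;
use warnings;
use Fcntl;
use POSIX qw (:errno_h);

# done once, right after the fork:
# fcntl ($readend, F_SETFL, O_NONBLOCK) || die ("fcntl: $!\n");

# returns 1 on EOF, 0 if the pipe simply has no data at the moment
sub drain_pipe
{
    my ($readend, $dataref) = @_;
    my ($buf, $ret);
    while (($ret = sysread ($readend, $buf, 1024)) > 0) {
        $$dataref .= $buf;
    }
    return 1 if (defined ($ret) && $ret == 0);        # EOF
    return 0 if ($! == EAGAIN || $! == EWOULDBLOCK);  # would block; select again
    die ("sysread: $!\n");                            # a real error
}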
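And for point 4), if you would rather loop on the vector itself than on a
counter, one possibility (again untested in the routine below) is to count
the bits that are still set with unpack:

# unpack with a "%32b*" template returns the number of set bits in
# $read_descriptors, so the loop ends once every descriptor has been cleared
while (unpack ("%32b*", $read_descriptors)) {
    my $ready;
    select ($ready = $read_descriptors, undef, undef, undef);
    # ... read from whichever descriptors are ready, clear them on EOF ...
}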
As it is, the code is rather complicated. I would advise you to go through
Advanced Programming in the UNIX Environment by W. Richard Stevens if you
wish to learn more.
Anyway, here is the code.
/kk
use strict;
use warnings;
use Fcntl;
use POSIX qw (:sys_wait_h :signal_h);
# the arguments are references to arrays. the first element of each contains
# the command to execute; the second and third will be filled in with the
# exit status of that command and its output/error message.
sub execute_tasks (@)
{
    my $BUFSIZ = 1024;
    my $atomic_var = 0;
    # reap the children. we don't want to leave zombies around
    $SIG{CHLD} = sub { $atomic_var++; };
    my $read_descriptors = "";
    my $cmds = 0;
    my $oldmask = POSIX::SigSet->new ();
    my $blockmask = POSIX::SigSet->new (SIGCHLD);
    foreach (@_) {
        my ($readend, $writeend);
        # make a pipe
        pipe ($readend, $writeend) || die ("pipe: $!\n");
        # parent
        if (my $pid = fork ()) {
            # make the read descriptor non-blocking
            # fcntl ($readend, F_SETFL, O_NONBLOCK);
            # store the pid of the child
            $_->[1] = $pid;
            # store the read end of the pipe
            $_->[3] = $readend;
            # close the write end or we won't get EOF
            close ($writeend);
            # this is for the select down below
            vec ($read_descriptors, fileno ($readend), 1) = 1;
            # use this as a counter for the select loop below
            $cmds++;
            next;
        }
        # child. dup the write end of the pipe to STDOUT, STDERR.
        # fork returns 0 in the child and undef on failure, so test with
        # defined rather than == 0
        elsif (defined ($pid)) {
            # close the read end of the pipe
            close ($readend);
            # isn't there a better way to dup ?
            open (STDOUT, ">&" . fileno ($writeend)) ||
                die ("open (dup): $!\n");
            open (STDERR, ">&" . fileno ($writeend)) ||
                die ("open (dup): $!\n");
            exec ($_->[0]) || die ("exec: $!\n");
        }
        else {
            warn ("fork: $!\n");
        }
    }
    while ($cmds) {
        my $ready;
        if (select ($ready = $read_descriptors, undef, undef, undef) > 0) {
            foreach (@_) {
                # if the file descriptor for this command is ready to be read
                if (vec ($ready, fileno ($_->[3]), 1)) {
                    my ($output, $ret);
                    # read as much as is available
                    while (($ret = sysread ($_->[3], $output, $BUFSIZ)) > 0) {
                        $_->[2] .= $output;
                    }
                    # clear from the bit vector on EOF. we have read all
                    # there is for this descriptor. the defined () guard
                    # keeps an undef (error) return from looking like EOF
                    if (defined ($ret) && $ret == 0) {
                        vec ($read_descriptors, fileno ($_->[3]), 1) = 0;
                        $cmds--;
                    }
                    # handle other kinds of error from sysread.
                    # is there a possibility of the sysread being interrupted
                    # by SIGCHLD ? does perl do automatic restart of
                    # interrupted syscalls ??
                }
            }
        }
        if ($atomic_var) {
            # block delivery of SIGCHLD
            sigprocmask (SIG_BLOCK, $blockmask, $oldmask) ||
                die ("sigprocmask (SIG_BLOCK): $!\n");
            my $child;
            while (1) {
                $child = waitpid (-1, WNOHANG);
                # break out if there are no dead children or no children left
                last
                    if ($child == 0 || $child == -1);
                # search for that pid and store the exit status of the child
                foreach (@_) {
                    $_->[1] = $?
                        if ($_->[1] == $child);
                }
            }
            # clear
            $atomic_var = 0;
            # allow delivery again
            sigprocmask (SIG_SETMASK, $oldmask) ||
                die ("sigprocmask (SIG_SETMASK): $!\n");
        }
    }
    # this is not really necessary. or is it ? we read until our children
    # close their end of the pipes, but it might so happen that we don't
    # receive a SIGCHLD until we are out of the select loop. so do a blocking
    # waitpid and retrieve the exit status of all remaining children
    while (1) {
        my $child = waitpid (-1, 0);
        last
            if ($child == -1);
        foreach (@_) {
            $_->[1] = $?
                if ($_->[1] == $child);
        }
    }
    return ();
}