Hi All,

Hope this isn't off-topic...

I want to:

* invoke multiple sub-jobs from a script, limited by the number of
machines I have available to send those jobs to;
* once I've launched as many jobs as I have machines, wait until one
completes, send that machine another job, and keep going as other
machines free up until there are no more jobs.

Seems like what I want is something along the lines of fork, child
pids, and 'wait'.

I've played around with wait, but what I can't figure out is how to
react, after launching multiple jobs, when one (but not all) of them
has completed.

If I launch multiple jobs up to max_machines, as it were, and call
'wait' once, it appears to wait for only one job - the first to finish -
and then the others are on their own (e.g. the 'post-wait' print below
is hit after the first pid ends).  And if I 'waitpid' each child right
after forking it, that serializes the jobs, defeating the parallelism I
so need!  (See the sketch just below for the sort of loop I'm after.)
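
For what it's worth, here's the sort of throttled loop I'm imagining -
a rough, untested sketch ($max_machines and the job list are made up),
leaning on the fact that wait blocks until any one child exits,
returns that child's pid, and returns -1 once there are no children
left:

  #!/usr/bin/perl
  use strict;
  use warnings;

  my $max_machines = 3;                  # made-up limit
  my @jobs         = ( 20, 25, 30, 35, 40 );
  my $running      = 0;

  foreach my $val (@jobs) {
      # At the limit?  Block until ANY one child exits, freeing a slot.
      if ( $running >= $max_machines ) {
          wait;
          $running--;
      }

      my $pid = fork();
      die "cannot fork: $!" unless defined $pid;

      if ($pid) {
          $running++;                    # parent: one more job in flight
      }
      else {
          system("sleep $val");          # child: stand-in for a real job
          exit;
      }
  }

  # Everything dispatched; reap whatever is still running.
  while ( wait() != -1 ) { }
  print "all jobs done.\n";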

Hope I'm making sense.  Any help would be appreciated!

Cheers,

Mark

--------------

Code below uses sleep to emulate dispatched, long-lived jobs.  The
waitpid functionality is commented out.


  #!/usr/bin/perl -w
  use strict;

  my @pids;

  foreach my $val ( 20, 25, 30 ) {

    print "value is $val\n";

    my $child_pid;
    if (!defined($child_pid = fork())) {
      die "cannot fork: $!";
    } elsif ($child_pid) {
      # I'm the parent
      unshift ( @pids, $child_pid );
      #print "waiting for $val..\n";
      #waitpid($child_pid, 0);   # this serializes the jobs
      #print "done waiting for $val..\n";
    } else {
      # I'm the child
      #print "sleeping for $val seconds.\n";
      system ( "sleep $val" );   # stand-in for a long-running job
      print "$val is done sleeping, exiting.\n";
      exit;
    }

  }
  print "outside of loop.\n";
  print join ( "\n", @pids ), "\n";
  print "pre-wait..\n";
  wait;   # reaps only ONE child; the other two are never waited for
  print "post-wait..\n";
