OK, here's what I'm trying to do:

I am trying to write a script that will be given a command, run that
command on a number of remote machines in parallel, then collect the STDOUT
from each remote execution and report back to the user running the script.

Let's assume the remote executions themselves are being done with ssh.

If I didn't want to run the remote executions in parallel, I could simply
use backticks to execute ssh with the appropriate arguments plus the
command, and wait for each invocation to complete before starting the next
(see the sketch below).  But I need multiple ssh's running in parallel.
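
For concreteness, the serial version would look something like this (host
names and the remote command are just placeholders):

    use strict;
    use warnings;

    my @hosts   = qw(host1 host2 host3);   # placeholder host names
    my $command = 'uptime';                # placeholder remote command

    my %output;
    for my $host (@hosts) {
        # Backticks block until ssh exits, so the hosts run one at a time.
        $output{$host} = `ssh $host $command`;
    }
    print "== $_ ==\n$output{$_}" for sort keys %output;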

My first try was to perform each ssh execution in a separate thread.  I then 
used open() with a pipe to execute ssh and read the remote execution's STDOUT.
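
Roughly, that attempt looked like this (a sketch, assuming a
threads-enabled Perl; hosts and command are placeholders again):

    use strict;
    use warnings;
    use threads;

    my @hosts   = qw(host1 host2 host3);   # placeholder host names
    my $command = 'uptime';                # placeholder remote command

    my @workers = map {
        my $host = $_;
        threads->create(sub {
            # List-form piped open (Perl 5.8+) forks and execs ssh
            # directly, handing back a filehandle on its STDOUT.
            open my $fh, '-|', 'ssh', $host, $command
                or die "cannot start ssh to $host: $!";
            local $/;                      # slurp everything ssh prints
            my $out = <$fh>;
            close $fh;
            $out;
        });
    } @hosts;

    my @results = map { $_->join } @workers;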

Then it slowly dawned on me that open() with a pipe causes a fork() behind
the scenes.  That means I'd pay the overhead of starting a thread and then
having that thread fork().  That doesn't sound good.

I thought about forking and having the child exec() ssh, so the full Perl
interpreter in the child would be replaced by just the running ssh
process.  However, I don't see how to get the STDOUT of the exec'ed
process back, unless it's something like the pipe()/dup dance sketched below.
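
Here's my rough sketch of that idea, with a placeholder host and command;
I haven't convinced myself it's right:

    use strict;
    use warnings;

    my $host    = 'host1';    # placeholder
    my $command = 'uptime';   # placeholder

    pipe my $reader, my $writer or die "pipe: $!";
    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {                        # child
        close $reader;
        open STDOUT, '>&', $writer or die "dup to STDOUT: $!";
        exec 'ssh', $host, $command;        # replaces the Perl interpreter
        die "exec ssh: $!";
    }
    close $writer;                          # parent keeps the read end
    my $out = do { local $/; <$reader> };   # everything ssh printed
    close $reader;
    waitpid $pid, 0;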

Should I spawn a thread for each remote execution and have the thread use
backticks to execute ssh?
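
That idea would look something like this (placeholders again):

    use strict;
    use warnings;
    use threads;

    my @hosts   = qw(host1 host2 host3);   # placeholder host names
    my $command = 'uptime';                # placeholder remote command

    # One thread per host; each thread just runs backticks and returns
    # whatever ssh printed.
    my @workers = map {
        my $host = $_;
        threads->create(sub { scalar `ssh $host $command` });
    } @hosts;

    my @output = map { $_->join } @workers;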

My head is beginning to hurt.  Does anyone have any suggestions?

By the way, I know how to present the arguments to system()/exec() so
that Perl doesn't have to spawn a shell to execute the command.
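
That is, the list form, which hands the arguments straight to execvp()
instead of going through /bin/sh ($host and $command are placeholders
as above):

    system "ssh $host $command";     # one string: goes through /bin/sh
                                     # if it has shell metacharacters
    system 'ssh', $host, $command;   # list form: ssh exec'ed directly,
                                     # no shell spawned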

Merrill 