ANJAN PURKAYASTHA wrote:
Hi,

Hello,

Here is the problem that I am trying to solve:
I have a file of 1000 DNA sequences.
For each sequence I run a program X (which comes with a Perl wrapper),
capture the output of program X in a file, and then process that file to
extract some information.

Now, running X and processing the output file are both time-consuming
steps, so if I were to work through my list of 1000 sequences serially:
1. run program X on sequence1
2. process output file1
3. run program X on sequence2
4. process output file2
.
.
.
(you get the idea)

then the program would take a very long time to run.

Is there any way I can do this instead:
1. run program X on sequence1
2. process output file1, while running program X on sequence2
3. process output file2, while running program X on sequence3

In other words, run program X on one sequence while simultaneously
processing the output file from the previous X run?

All advice will be appreciated.

It depends on how many CPUs you have and whether the processes are mostly CPU-bound or I/O-bound. If each process is CPU-intensive, then running each process on its own CPU would probably help. If, however, each process is I/O-intensive, then overlapping them would probably only help if each process had its own I/O bus as well.
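If it does turn out to be worth overlapping the two steps, one simple approach is to fork() a child to run X on the next sequence while the parent processes the previous run's output file. Here is a minimal sketch; run_x() and process_output() are hypothetical stand-ins for your real program-X wrapper and your output-processing code.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the program-X wrapper: writes an output file for a sequence.
sub run_x {
    my ($seq) = @_;
    open my $fh, '>', "$seq.out" or die "Cannot write $seq.out: $!";
    print $fh "result for $seq\n";
    close $fh;
}

# Stand-in for the processing step: reads and removes the output file.
sub process_output {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot read $file: $!";
    my $line = <$fh>;
    close $fh;
    unlink $file;
    return $line;
}

my @sequences = map { "seq$_" } 1 .. 3;
my @processed;
my $prev_file;

for my $seq (@sequences) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        run_x($seq);    # child: run program X on the current sequence
        exit 0;
    }
    # Parent: process the previous run's output while the child works.
    push @processed, process_output($prev_file) if defined $prev_file;
    # Wait so the child's output file is complete before the next pass reads it.
    waitpid $pid, 0;
    $prev_file = "$seq.out";
}
# The last output file has no following run to overlap with.
push @processed, process_output($prev_file);

print for @processed;
```

This only overlaps one run of X with one processing step at a time, which keeps at most two jobs going; on a multi-CPU machine you could fork more children, but (as above) that only pays off if the work is not bottlenecked on a shared disk.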


John
--
Perl isn't a toolbox, but a small machine shop where you
can special-order certain sorts of tools at low cost and
in short order.                            -- Larry Wall

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
http://learn.perl.org/
