From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> I am wondering if it is more processor intensive to open the 42
> separate files at one time, parse the data, and then close all the
> files, or if I should try to re-write the parse to open the correct
> file, dump the data, and then close that file, then repeat the
> process.  I know programmatically it is probably better to open and
> close the files as there would be no more copying and pasting, but I
> was wondering about the processor load.  As it is right now it takes the script
> about 5 seconds to parse the 557 lines of data.

If you have this small a data set it would be best to collect it in memory, 
in 42 separate strings (in an array or hash of course!), and then at the 
end loop through them and flush their contents to the files.
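
Something like this, as a rough sketch (the IN handle, the tab delimiter 
and $fields[4] as the file number are just guesses at what your parse 
actually does):

        my %buffers;
        while (my $line = <IN>) {
                my @fields = split /\t/, $line;    # however you split the line
                $buffers{$fields[4]} .= $line;     # append to that file's string
        }

        # at the very end flush each buffer to its file
        foreach my $fileno (keys %buffers) {
                open my $FH, '>', "/home/multifax/$fileno"
                        or die "Can't open $fileno!";
                print $FH $buffers{$fileno};
                close $FH;
        }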

If you expect more data in the future, you'd better open all 42 files 
at the beginning, put their handles into an array or hash, print the 
lines as you go, and close all the files at the end.

Otherwise you spend most of the time opening and closing files. IMHO 
of course.


something like

        my @files = ('102', '104', '118');

        my %handles;
        foreach my $fileno (@files) {
                open my $FH, '>', "/home/multifax/$fileno"
                        or die "Can't open $fileno!";
                $handles{$fileno} = $FH;
        }

        ...
        if (grep $fields[4] eq $_, @files) {
                print {$handles{$fields[4]}} "[EMAIL PROTECTED]";
        }
        ...

        foreach my $FH (values %handles) {
                close $FH;
        }


Jenda
=========== [EMAIL PROTECTED] == http://Jenda.Krynicky.cz ==========
There is a reason for living. There must be. I've seen it somewhere.
It's just that in the mess on my table ... and in my brain
I can't find it.
                                        --- me

