tom arnall wrote on Wednesday, 3 May 2006 at 22:10:

[big snip of history]

> I should have made clear my basic purpose in trying to use file-tied
> scalars. I wanted to put a very large file (.5GB) into a scalar in a way
> that has the system keeping most of the data on disk and putting into
> physical memory only what I need at any given moment. The problem statement
> is something like
>
>       $bigGulp = `cat bigFile.txt`;

This is, imo, a problem for several reasons:

- the construct is system-dependent (it shells out to an external cat)
- backticks spawn a separate process
- there's no error checking
- potentially bad memory footprint / scalability (as you mentioned)

[[
What you tried with backticks can be done in pure Perl:

my $bigGulp;

{ # scope block to limit the effect of local $/
  open my $f, '<', 'bigFile.txt' or die "open: $!";

  # undefine the input record separator so <$f> slurps
  # the whole file in one read; see perldoc perlvar
  local $/;
  $bigGulp = <$f>;

  close $f or die "close: $!";
}
# use $bigGulp here
]]

(Of course this still holds the entire file in memory, so it only fixes
the first three points, not the footprint.)

> in order to do multi-line regexing. My first try with the application froze
> the rest of the system. I then configured Linux with less virtual memory,
> and got an 'out of memory' error. I then turned -- naively, it seems now --
> to 'tie' etc. But after finding an implementation module for a file-tied
> scalar, I don't see a solution via this route either, so I am resigned
> to using something along the lines of File::Stream.

Maybe there is a way to do this by reading the file line by line (or chunk
by chunk) and accumulating only the intermediate strings you need, or by
choosing an appropriate $/, but that depends on the details...
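
To illustrate the idea (a rough, untested sketch; the pattern is a
placeholder for whatever you actually match against):

[[
open my $f, '<', 'bigFile.txt' or die "open: $!";

{
  # paragraph mode: with $/ set to '', <$f> returns one
  # blank-line-separated record per read, so a multi-line
  # regex only ever sees a small piece of the file;
  # $/ = \65536 would instead read fixed 64KB chunks
  local $/ = '';

  while ( my $record = <$f> ) {
      if ( $record =~ /PLACEHOLDER_PATTERN/ ) {  # your regex here
          # accumulate or act on the match here
      }
  }
}

close $f or die "close: $!";
]]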

Have you many regexes?
Could you post a regex, a file snippet, and a file format description?

(the find method of File::Stream, which I have never used, could end up
holding most of the file content in $pre_match)
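
Judging from its documentation, usage would look roughly like this
(untested on my side, so treat the interface details as assumptions):

[[
use File::Stream;

open my $fh, '<', 'bigFile.txt' or die "open: $!";
my $stream = File::Stream->new($fh);

# find() scans forward until its pattern matches; in list context
# it returns everything read before the match plus the match itself,
# so a pattern that sits late in the file makes $pre_match huge
my ( $pre_match, $match ) = $stream->find(qr/some pattern/);
]]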

[...]

Dani
