I thought I'd try this code, but it doesn't work with very large files (>4 GB).
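
For reference, below is a minimal sketch of the chunked, line-at-a-time approach
discussed in the quoted messages, assuming a 64-bit J engine (byte positions past
4 GB won't fit in 32-bit integers). It is not the library's fapplylines adverb;
the names chunklines, CHUNK and the usage verb are mine. It reads the file in
fixed-size pieces with indexed reads (1!:11), applies a verb to each complete
line, carries any partial line over to the next read, and does not accumulate
results.

NB. chunklines: apply verb u to each line of file y, reading the file
NB. in fixed-size pieces rather than mapping it all into memory.
NB. Hypothetical sketch; chunklines and CHUNK are names made up here.
CHUNK =: 16777216                      NB. bytes per indexed read (16 MB)

chunklines =: 1 : 0
  sz   =. 1!:4 <y                      NB. total file size in bytes
  pos  =. 0
  rest =. ''                           NB. partial line carried across reads
  while. pos < sz do.
    len =. CHUNK <. sz - pos
    buf =. rest , 1!:11 y;pos,len      NB. read len bytes starting at pos
    pos =. pos + len
    if. LF e. buf do.
      cut  =. >: buf i: LF             NB. one past the last LF in the chunk
      u each <;._2 cut {. buf          NB. apply u to each complete line
      rest =. cut }. buf               NB. keep the incomplete tail line
    else.
      rest =. buf                      NB. no line break yet; keep reading
    end.
  end.
  if. #rest do. u rest end.            NB. last line if file lacks a final LF
  i. 0 0
)

NB. usage, e.g.:  (myverb chunklines) 'bigfile.txt'
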

On Wed, Aug 26, 2009 at 11:46 AM, R.E. Boss <[email protected]> wrote:

> > Chris Burke wrote:
> > > Matthew Brand wrote:
> > > Thanks for the links. I tried the fapplylines adverb, but the computer
> > > ground along for 30 minutes or so before I pulled the plug. It ended up
> > > using 10 GB of (mainly virtual) memory. There are 40M lines in my file.
> > >
> > > I will use the Unix split command to make lots of little files and
> > > apply (myverb fapplylines)&.> fname to solve the problem.
> >
> > There should be little difference between processing lots of small
> > files and processing one big file in chunks.
> >
> > What processing is being done? What result is being accumulated?
> >
> > Why not test on a small file first and find out what is taking the
> > time, and only then try it on the full file?
>
>
> My guess is we can improve the efficiency of your code by at least a
> factor of 2 (= Hui's constant).
>
>
> R.E. Boss
>



-- 
Devon McCormick, CFA
^me^ at acm. org is my preferred e-mail
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
