On 5/20/07, Joey K Tuttle <[EMAIL PROTECTED]> wrote:
> At 18:13  -0400 2007/05/20, Terrence Brannon wrote:
>> The shootout specs for the sum-file benchmark
>> http://shootout.alioth.debian.org/debian/benchmark.php?test=sumcol&lang=all
>> require the use of line-oriented I/O... Raul's solution involved
>> reading the whole file in.
>>
>> Line-oriented I/O is important - what happens when a file is larger
>> than memory?
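
For readers outside the thread, the line-oriented approach the sumcol spec calls for can be sketched as follows (in Python, purely for illustration; the thread itself is about J). The point is that reading one line at a time keeps memory use constant no matter how large the input is:

```python
# Sketch (Python, for illustration) of the line-oriented approach the
# sumcol spec describes: read one line at a time and accumulate a
# running total, so memory use stays bounded even when the input is
# larger than RAM.
import sys

def sum_lines(stream):
    total = 0
    for line in stream:   # only one line is held at a time
        total += int(line)
    return total

if __name__ == "__main__":
    print(sum_lines(sys.stdin))
```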

Terrence,

> On a lot of modern machines, that is mostly moot.

What is mostly moot and why?

And can you ignore the fact that there are text files larger than memory?

> You can also
> map files and not "read" them directly.

The shootout problem takes input on stdin, not from a file, so I'm not
sure this mapping technique will work.
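
For an actual file (as opposed to a pipe), the mapping technique Joey mentions looks roughly like this sketch (Python, for illustration, not J's jmf): the OS pages the file in on demand, so the process never holds the whole file in its own heap. Mapping needs a seekable file descriptor, which is exactly why it cannot be applied to a pipe such as stdin:

```python
# Sketch (Python, for illustration) of summing a column of integers
# through a memory-mapped file. The OS pages data in lazily; the
# process's own heap stays small regardless of file size.
import mmap
import os
import tempfile

def sum_mapped(path):
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            # iterate line by line over the mapping until EOF (b"")
            return sum(int(line) for line in iter(m.readline, b""))

# tiny demo file (hypothetical data, not the 227 MB benchmark input)
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("10\n20\n-5\n")
print(sum_mapped(tmp.name))   # prints 25
os.unlink(tmp.name)
```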



> You
> can see that j used 87 megabytes (not a big load on my 1.5G
> iMac) to process the 227,944,072 byte file.

Wow, that defies logic: a 227-megabyte file in an eager language only
used 87 megabytes?


> The thing is that benchmarks which read a line at a time as in
> the example that started your questions are just "not done that
> way in j" -

Yes, the shootout requirements are very rigid. They do not allow each
language to "strut its stuff" - they try to force all languages to do
things in a certain way.

In a sense, J still has to scan the file line by line as part of the
mapping process, so I suppose that counts as line-oriented.

> The real advantage is terse programs that subsume
> detail much more than most programming languages.

Well, if you look at most entries for this benchmark, they were
much simpler than the J code:
   http://shootout.alioth.debian.org/debian/benchmark.php?test=sumcol&lang=all

But maybe I can change that with jmf.
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm