Yeah, with most modern OSes you'd probably do fine to just load the file in one chunk and access the parts you need, as long as the file is no larger than about 1GB or so (or less if you need multiple copies in memory at a time).

put URL ("file:" & filePath) into myVar -- loads the entire file into a variable

repeat for each line x of myVar -- fast per-line iteration; x is read-only here
  -- do something with x
end repeat


On Nov 23, 2004, at 12:12 PM, Richard Gaskin wrote:

Rob Beynon wrote:
Greetings all,
I have failed to discover how to read a file one line at a time. The
file is a text file, and is large (84MB) but I need to process it on a
line by line basis, rather than read the whole file into a field (not
even sure I can!).
I thought of
read from myFile until EOF for one line
but that doesn't work
Help would be greatly appreciated here! Would you be good enough to
mail to my work address, as I have yet to work out how to access the
list effectively

Your post made it here, so I'm assuming you worked that out. :)

The above doesn't work only because you're asking Rev to do two different things at once: read until the end of the file AND read until the end of the next line. You gotta choose which one you want. If you want to read the next line, use:

  read from file MyFile until cr

That assumes the file was opened for text read, causing the engine to automatically translate line endings from platform-specific conventions to the Unix standard line-ending ASCII 10.

If your file is opened as binary and your line endings use the Windows convention, use:

  read from file MyFile until CRLF -- constant for Win line-ending
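
Putting those pieces together, a complete line-by-line loop might look something like this -- a rough sketch, where tPath is assumed to hold the full path to your text file, and each read leaves the next line (with its trailing cr) in the "it" variable:

  open file tPath for text read
  repeat forever
    read from file tPath until cr
    if it is empty then exit repeat -- nothing left to read
    -- process the line in "it" here
  end repeat
  close file tPath

Checking "the result" for "eof" is a slightly more careful test if your last line may lack a trailing return, but for most text files the empty-read check above does the job.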

But have you tested reading the entire file? It may sound crazy, and depending on what you want to do with the data it may not be optimal, but I've successfully read much larger files without difficulty in Rev.

Big files can be slow, depending on available RAM, but in my experience the only platform on which it's a show-stopper is Mac Classic; OS X, Linux, and XP have very efficient memory systems which allow some amazing things if you're used to Classic's limitations.

Not long ago I had a customer send me a 250MB gzipped file, and I used it as a test case for Rev's built-in decompress function -- it took a minute or so, but the entire file was successfully decompressed to its original 580MB glory and written back out to disk without error. When you consider that the decompress function requires both the gzipped and decompressed data to be in memory at the same time, along with everything else the engine and the IDE needs, that's pretty impressive.
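
For anyone curious, that test amounts to just a few lines -- a sketch, where tGzPath and tOutPath are assumed paths you supply:

  put URL ("binfile:" & tGzPath) into tGzData -- read the gzip data as binary
  put decompress(tGzData) into tData -- both copies live in RAM briefly
  put tData into URL ("binfile:" & tOutPath) -- write the expanded data back out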

The system I ran that on has only 1GB physical RAM, and I had a few other apps open. Thank goodness for modern VM systems. :)

--
 Richard Gaskin
 Fourth World Media Corporation
 __________________________________________________
 Rev tools and more: http://www.fourthworld.com/rev
_______________________________________________
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


-----------------------------------------------------------
Frank D. Engel, Jr.  <[EMAIL PROTECTED]>

$ ln -s /usr/share/kjvbible /usr/manual
$ true | cat /usr/manual | grep "John 3:16"
John 3:16 For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.
$



