You are right, but logging still shows a cumulative slowdown as each chunk
is read, and the machine slows to a crawl. Using 'read from ... for ...'
is even slower, however. (The source file is a 1 GB binary text file.)
Given tFilePath, write out 1 MB files, sequentially numbered...
put the hilite of btn "Binary" into isBinary
if isBinary then open file tFilePath for binary read
else open file tFilePath for text read
set the numberFormat to "####" --| So file names have leading zeroes
seek to 0 in file tFilePath
put 0 into n --| Initialise the counter; adding to an empty variable errors
repeat
set the cursor to busy
add 1 to n
--| seek relative 0 in file tFilePath --| Redundant
read from file tFilePath for 1000000
put (the result = "eof") into isEOF
if it is empty then exit repeat
if isBinary then put it into URL ("binfile:" & tDir & "/" & n & ".txt")
else put it into URL ("file:" & tDir & "/" & n & ".txt")
if isEOF or the result is not empty then exit repeat
end repeat
close file tFilePath
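For comparison, the same split can be sketched in Python; the function name, CHUNK_SIZE, and the file-naming scheme are illustrative, not part of the original script. Here a read at end of file simply returns an empty result, so no separate eof flag is needed:

```python
import os

CHUNK_SIZE = 1_000_000  # 1 MB per output file, matching the script above

def split_file(src_path, dest_dir):
    """Read src_path in fixed-size binary chunks and write each chunk
    to a sequentially numbered file (0001.txt, 0002.txt, ...) in dest_dir."""
    n = 0
    with open(src_path, "rb") as src:  # binary mode: no newline translation
        while True:
            chunk = src.read(CHUNK_SIZE)  # returns b"" at end of file
            if not chunk:
                break
            n += 1
            # %04d gives the same leading zeroes as numberFormat "####"
            with open(os.path.join(dest_dir, "%04d.txt" % n), "wb") as out:
                out.write(chunk)
    return n  # number of chunk files written
```

Because each iteration writes and closes one output file, memory stays bounded at one chunk regardless of the source file's size.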
Any further insights would be truly welcomed.
/H
----------------------------------------------
Hugh, it strikes me that the "seek relative 0" might be redundant -
and may be slowing things down.
Best,
Mark
_______________________________________________
use-revolution mailing list
[email protected]
Please visit this url to subscribe, unsubscribe and manage your subscription
preferences:
http://lists.runrev.com/mailman/listinfo/use-revolution