Hi all,

I have a 5 MB file with about 550,000 lines that needs to be processed by a
simple script that deletes a line if the previous line has the same
contents. That takes more than 60 hours to complete. So I thought I'd
divide the file into smaller files, each about one sixtieth of the total
number of lines. But instead of the expected hour of processing time per
file, each file took only about 2 minutes to complete.
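For reference, the task described above (drop a line when it matches the line before it) can be done in a single streaming pass, one line at a time, so the cost grows linearly with the file size. This is a minimal sketch in Python, not the original script, which isn't shown; the function name is my own:

```python
def dedup_consecutive(lines):
    """Keep each line only if it differs from the previous line.

    A single streaming pass over the input: O(n) time, no repeated
    scanning or in-place deletion of earlier lines.
    """
    out = []
    prev = None
    for line in lines:
        if line != prev:
            out.append(line)
        prev = line
    return out


# Example: consecutive duplicates are dropped, later repeats survive.
print(dedup_consecutive(["a", "a", "b", "b", "a"]))  # → ['a', 'b', 'a']
```

If the original script instead deletes lines in place from one big text variable, each deletion can force a rescan or recopy of everything after it, which makes the total work grow roughly with the square of the line count rather than linearly.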

I understand processes are faster with less data in memory, but I never
would have thought the difference would be this big.

Any thoughts on how this is possible, and what we can learn from it when
writing programs?

Terry

_______________________________________________
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution
