At 8:24 AM -0500 11/10/00, Gregory Lypny wrote:

>calculations are done using variables with the screen locked.  It takes
>about 5 seconds to execute the handler for 1,000 data points, which I
>think is fine given my primitive scripting skills; but when I increase
>the size of the data set to 5,000, it takes about 160 seconds or 32 times
>longer.  Would anyone care to share their thoughts on the exponential
>increase in processing time?  (Incidentally, I'm working with a 350 MHz
>iMac, if that is relevant.)

I guess we'd have to see the scripts to give a clear answer. But a 
possible suspect would be any repeat loops that make use of 
item/line/word/etc. references.

For example, imagine you have a list of data (in a variable called 
tData) that is 1000 lines long, and you have to do some processing of 
each line. The following script can turn out to be very inefficient:

put the number of lines of tData into tNumLines
repeat with i = 1 to tNumLines
  ##do something to line i of tData
end repeat

The problem is that MetaCard has to count out the lines from the 
start of the variable until it finds the right one *each time* 
through the loop. (At least, that's my understanding.) So as i gets 
bigger, it takes longer to find the line on each pass, and the total 
time grows with the square of the number of lines: a quadratic, not 
linear, increase. Five times the data should take roughly 25 times as 
long, which is close to the 32x slowdown you saw.

A better approach is to use the "repeat for each" structure like this:

repeat for each line tLine in tData
   ##do something to tLine
end repeat
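To make the difference concrete outside MetaTalk, here's a rough 
Python analogy. The nth_line helper is hypothetical; it just 
simulates a chunk expression like "line i of tData" having to scan 
from the start of the text on every call:

```python
# Build a 1000-line text, like the tData variable in the scripts below.
text = "\n".join("aaaaaaa" for _ in range(1000))

def nth_line(s, i):
    """Scan from the beginning of s to find line i (1-based),
    the way MetaCard resolves "line i of tData" each time."""
    start = 0
    for _ in range(i - 1):
        start = s.index("\n", start) + 1   # skip past one newline per line
    end = s.find("\n", start)
    return s[start:] if end == -1 else s[start:end]

# Quadratic pattern: every lookup re-scans from the start of the text.
slow = [nth_line(text, i) for i in range(1, 1001)]

# Linear pattern: one pass over the lines, like "repeat for each line".
fast = text.split("\n")

assert slow == fast
```

Each nth_line call is proportional to i, so the first loop does on 
the order of n-squared work in total, while the single split is one 
pass over the data.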

Some empirical data:

Script 1

on mouseUp
   put 1000 into tNumLines
   repeat tNumLines
     put "aaaaaaa" & cr after tData ##build list
   end repeat
   delete char -1 of tData ##dangling return
   put the milliseconds into tStartTime
   repeat with i = 1 to tNumLines
     put line i of tData into x
   end repeat
   put the milliseconds - tStartTime
end mouseUp

This takes about 125 milliseconds on my machine. When tNumLines is 
increased to 5000, it takes about 3210 milliseconds, roughly 26 times 
longer for 5 times the data: close to the 25x you'd expect from 
quadratic growth, and in line with your experience.

Script 2

on mouseUp
   put 1000 into tNumLines
   repeat tNumLines
     put "aaaaaaa" & cr after tData
   end repeat
   delete char -1 of tData ##dangling return
   put the milliseconds into tStartTime
   repeat for each line tLine in tData
     put tLine into x
   end repeat
   put the milliseconds - tStartTime
end mouseUp

This takes around 2 milliseconds. Increasing tNumLines to 5000 gives 
a result of about 12 milliseconds, which, allowing for measurement 
overhead at such small timings, is about the linear increase you'd 
expect.
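If you want to reproduce this kind of measurement outside MetaCard, 
the same experiment can be sketched in Python. The time_it helper is 
my own stand-in for "the milliseconds"; absolute numbers will differ, 
but the shape of the comparison should hold:

```python
import time

def time_it(fn):
    """Run fn and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn()
    return result, (time.perf_counter() - start) * 1000.0

data = "\n".join("aaaaaaa" for _ in range(1000))

# Script 1 analogue: re-scan the whole text for every line accessed
# (re-splitting per access mimics MetaCard counting lines each time).
def indexed():
    return [data.split("\n")[i] for i in range(1000)]

# Script 2 analogue: split once and walk the lines in a single pass.
def for_each():
    return [line for line in data.split("\n")]

slow, t_slow = time_it(indexed)
fast, t_fast = time_it(for_each)
assert slow == fast   # same result, very different cost
```

As with the MetaTalk scripts, the gap widens rapidly as the line 
count grows, because the first version does work proportional to the 
square of the number of lines.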

On the other hand, this may not be the reason for your problem. But I 
like showing off the numbers. They always impress the hell out of me. :)

Cheers
Dave Cragg

Archives: http://www.mail-archive.com/[email protected]/
Info: http://www.xworlds.com/metacard/mailinglist.htm
Please send bug reports to <[EMAIL PROTECTED]>, not this list.
