Dear Vixens and Reynards:

I am modifying a somewhat complex financial analysis report that processes thousands of records. Since crunching the numbers on a large dataset can take several minutes, I want to cut down the processing time.

A large part of the processing is refiguring the invoicing under a different assumption, and it is this area that is puzzling me.

One of the options is whether to process non-contract work orders. When they are included, there are 82,581 records to refigure in my test data. When only contract work order transactions are refigured, there are only 73,073. These test records are in a cursor. Given the differing record counts, I expect roughly a 10% difference in the processing times, but the times for the two situations are nearly identical! What gives?
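For what it's worth, a quick back-of-the-envelope check (plain Python, just arithmetic on the counts above, assuming time scales linearly with row count) puts the expected gap in the 11–13% range depending on which run you use as the baseline:

```python
# Record counts from my test data.
with_noncontract = 82581   # non-contract work orders included
contract_only = 73073      # contract work orders only

extra = with_noncontract - contract_only             # 9508 extra rows
pct_vs_smaller = extra / contract_only * 100         # gap relative to the smaller run
pct_vs_larger = extra / with_noncontract * 100       # gap relative to the larger run

print(f"{pct_vs_smaller:.1f}% more rows than the contract-only run")
print(f"{pct_vs_larger:.1f}% of the larger run's rows")
```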

When I cut the data selected down to about 9,000 records, the processing is correspondingly faster, so why the plateau with the larger datasets?
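To make the plateau visible, one thing I have been sketching (here in Python rather than FoxPro, with a dummy per-record computation standing in for the real refiguring pass) is timing the same workload at several sizes and reporting the per-record cost, so a fixed overhead or a throughput cliff shows up directly:

```python
import time

def process(records):
    # Stand-in for the real refiguring pass; replace with the actual work.
    total = 0.0
    for r in records:
        total += r * 1.07  # dummy per-record computation
    return total

# Sizes taken from the counts described above.
for n in (9_000, 73_073, 82_581):
    data = list(range(n))
    start = time.perf_counter()
    process(data)
    elapsed = time.perf_counter() - start
    # A roughly constant µs/record across sizes means time scales linearly;
    # a plateau in total time despite more rows would show up here.
    print(f"{n:>6} records: {elapsed:.4f}s  ({elapsed / n * 1e6:.2f} µs/record)")
```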

Sincerely,

Gene Wirchenko


_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://mail.leafe.com/mailman/listinfo/profox
OT-free version of this list: http://mail.leafe.com/mailman/listinfo/profoxtech
Searchable Archive: http://leafe.com/archives/search/profox
This message: http://leafe.com/archives/byMID/profox/
** All postings, unless explicitly stated otherwise, are the opinions of the 
author, and do not constitute legal or medical advice. This statement is added 
to the messages for those lawyers who are too stupid to see the obvious.
