On 9/11/06, Lew Schwartz <[EMAIL PROTECTED]> wrote:
I spent a while trying to follow your data diagram, and I'm still not sure I get it. A concrete example with a couple of rows would help. That said, I don't think you have a true cross-tab, or I'd recommend the built-in one or Val Matison's killer replacement.
> The problem is to make this process take place as quickly as possible. We're talking 300K rows of data and an upload time of 1.5 - 3 hours.
That does seem slow. Are the source and target both local, and both on the same disk? Is there enough disk space for this amount of data, with plenty of room to spare for temp files and swap? When you say "upload," do you mean the time for the whole process to complete? If I understand what you're doing, that is really slow.
> So far, code which iterates through the target 1 column at a time with a REPLACE ALL... performs better than SQL updates
SQL is efficient for standalone queries, but needs to be tuned for data-processing applications like this one. It's more likely that native xBase commands will outperform it. Have you got indexes on your target files? Drop them before the bulk update and rebuild them afterward.
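For example, the drop-update-rebuild pattern might look like this in VFP (a sketch only; the table, column, and tag names here are made up):

```foxpro
* Assumes a local table MyTarget whose index tags slow down bulk REPLACEs.
USE MyTarget EXCLUSIVE
DELETE TAG ALL                && drop all index tags before the bulk update
REPLACE ALL col1 WITH UPPER(col1)
INDEX ON col1 TAG col1        && rebuild only the tags you actually need
USE
```

Rebuilding an index once at the end is one pass over the data; maintaining it during 300K REPLACEs means updating the tag on every write.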
> and *much* better than code which constructs an entire (or partial) row of data and updates the entire row in 1 shot. (I don't understand this last result).
That does seem non-intuitive. Typically, the costliest operation in a data transform like this is disk I/O. Are you buffering? Consider it, with a flush (TableUpdate()) at a programmatic interval (after every row is filled, every 10, 100, 1000, etc.) to see if there's an effect.

--
Ted Roche
Ted Roche & Associates, LLC
http://www.tedroche.com

_______________________________________________
Post Messages to: [email protected]
Subscription Maintenance: http://leafe.com/mailman/listinfo/profox
OT-free version of this list: http://leafe.com/mailman/listinfo/profoxtech
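The buffered-updates-with-periodic-flush idea above could be sketched like this in VFP (an illustration only; the table, column, and flush interval are made up, and the interval is exactly what you'd want to vary in your timing tests):

```foxpro
* Assumes the target table is open in the current work area.
SET MULTILOCKS ON                    && required for table buffering
CURSORSETPROP("Buffering", 5)        && optimistic table (row-batch) buffering
SCAN
    REPLACE col1 WITH UPPER(col1)
    IF MOD(RECNO(), 1000) = 0        && flush every 1,000 rows - tune this
        TABLEUPDATE(.T.)
    ENDIF
ENDSCAN
TABLEUPDATE(.T.)                     && final flush of any remaining rows
```

Timing this with flush intervals of 1, 10, 100, and 1000 should show whether write batching is where the hours are going.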

