On 9/11/06, Lew Schwartz <[EMAIL PROTECTED]> wrote:

        Sorry, it's 3 million (3000K) rows in the source.

Well, in that case, the time period may be more reasonable.

Here's a concrete example.

That helps a lot.

This is a cross-tab. Want proof?

**********************************************************************
* Program....: LEWSXTAB.PRG
* Version....: 1.0
* Author.....: Ted Roche
* Date.......: 12 September 2006, 08:57:49
* Compiler...: Visual FoxPro 08.00.0000.3117 for Windows
* Purpose....: Create an efficient cross-tab process
**********************************************************************
LOCAL lcSourceData, laData[1,4]

CREATE CURSOR Source (cAcct c(10), cType c(10), cDate c(4), yAmount Y)
INSERT INTO Source VALUES ("My__Acct", "Savings","0190",5.00)
INSERT INTO Source VALUES ("My__Acct", "Check"  ,"0190",4.00)
INSERT INTO Source VALUES ("My__Acct", "Invest" ,"0190",0.20)
INSERT INTO Source VALUES ("WifeAcct", "Savings","0190",15.00)
INSERT INTO Source VALUES ("WifeAcct", "Check"  ,"0190",-5.0)
INSERT INTO Source VALUES ("WifeAcct", "Invest" ,"0190", 0.50)
INSERT INTO Source VALUES ("My__Acct", "Savings","0290",5.00)
INSERT INTO Source VALUES ("My__Acct", "Check"  ,"0290",4.00)
INSERT INTO Source VALUES ("My__Acct", "Invest" ,"0290",0.20)
INSERT INTO Source VALUES ("WifeAcct", "Savings","0290",15.00)
INSERT INTO Source VALUES ("WifeAcct", "Check"  ,"0290",-5.0)
INSERT INTO Source VALUES ("WifeAcct", "Invest" ,"0290", 0.50)

* The "-" operator trims trailing blanks from the left string before
* concatenating, collapsing type and date into one column for the cross-tab
SELECT cAcct, PADR(LEFT(cType,6)-cDate,10) as cTypeDate, yAmount ;
from Source ;
INTO cursor Intermediate


* GenXTab parameters: output cursor name, option flags, then the row,
* column and data field positions (1, 2, 3 in Intermediate) -- see the
* header of GENXTAB.PRG for the full parameter list
DO (_GENXTAB) WITH "Result", .T. , .F. , .T. , 1 , 2 , 3 , .f. , .f. , 0

SELECT Result
BROWSE


Everything's on the same disk & the client will not be able to alter
these resources (target machines are laptops).


Well, that means you can SET EXCLUSIVE ON and/or USE ... NOUPDATE for
incremental speed increases. And I would shop for the fastest
cross-tab program.
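For what it's worth, a minimal sketch of that suggestion (the table name
is just a placeholder for your source table):

```foxpro
* Sketch: since nothing else on the laptop touches these files,
* open them exclusively and read-only for a modest speed gain.
SET EXCLUSIVE ON
USE Source NOUPDATE   && read-only open; VFP can skip update overhead
```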

I'm not ready to drop the tags yet, since SET RELATION ... is
outperforming SCAN <source> / SEEK <Source.key> in the target.

Just one of several suggestions. As always with these questions, the
optimal answer can only be found by benchmarking against your data and
with your hardware.

Seems like there should be something faster, but darned if I can find
it. I'll give table buffering a try, but since my Task Manager's page
file usage history graph isn't showing much activity, I figured the
process is pretty much buffered at the Windows level anyway.

Low page file usage just means that the application and its memory are
running in real memory rather than virtual. You want to look at I/O
saturation. If you're doing 250 inserts, updates or replaces for each
row of data, you are beating up on the disk controller. If you can
write each row once, you reduce 3 million writes to 12 thousand - that
ought to improve performance, *IF* I/O is the bottleneck.
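To illustrate the write-once idea, here's a rough sketch (assumes Source
is ordered on cAcct and that Result already exists with matching field
names - both assumptions, not your actual structures). It accumulates
each output row in memory variables and issues one INSERT per account
instead of one REPLACE per source record:

```foxpro
* Sketch only: one write per output row instead of one per source row
lcLastKey = ""
SELECT Source
SCAN
   IF Source.cAcct # lcLastKey
      IF NOT EMPTY(lcLastKey)
         INSERT INTO Result FROM MEMVAR   && single write per account
      ENDIF
      SELECT Result
      SCATTER MEMVAR BLANK                && blank memvars named for Result's fields
      SELECT Source
      m.cAcct = Source.cAcct
      lcLastKey = Source.cAcct
   ENDIF
   * ... accumulate this record's yAmount into the appropriate m. column ...
ENDSCAN
IF NOT EMPTY(lcLastKey)
   INSERT INTO Result FROM MEMVAR         && don't forget the final row
ENDIF
```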

--
Ted Roche
Ted Roche & Associates, LLC
http://www.tedroche.com

