Edward Brown wrote:
I don't agree. Disk access is inherently slower than RAM access.
Therefore a process that makes efficient use of RAM will be faster than
an equivalent algorithm making efficient use of disk.

In your case, it's just a matter of scale:

50 million records at (let's say) 14 bytes per ID, plus the multivalue
marker needed to build up the dynamic array.

15 * 50,000,000 = 750,000,000 bytes.

That's 732,422 KB, or roughly 715 MB.
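
For illustration, the loop that builds such an array looks roughly like
this (the file name ORDERS is invented; treat it as a sketch only):

  * Build an in-memory dynamic array of record IDs
  OPEN 'ORDERS' TO ORD.F ELSE STOP 'Cannot open ORDERS'
  SELECT ORD.F TO 1
  ID.LIST = ''
  LOOP
     READNEXT ID FROM 1 ELSE EXIT
     ID.LIST = ID.LIST : ID : @AM  ;* the ID plus one delimiter byte each
  REPEAT

That one delimiter per ID is where the 15th byte comes from.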

If your process is running on a modern server then this kind of
operation becomes practical.

Assumptions:
 - that the dynamic array isn't using Unicode. If it is, then the memory
requirement doubles.
 - that you select every record - normally (presumably) it would be just
a fraction?


In fact, isn't all of this theoretical? Using the index select / readfwd
/ own tests method, there's no need to build workfiles or dynamic arrays
at all - simply do the tests as each record is retrieved with readfwd,
then create the report / do the processing within the same loop.
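
Something along these lines - a sketch only, with the file name, field
positions and the test invented; I've written it with READNEXT, READFWD
against a select-list variable being the UniData equivalent:

  OPEN 'ORDERS' TO ORD.F ELSE STOP 'Cannot open ORDERS'
  SELECT ORD.F TO 1        ;* or an index select, if one fits the test
  GRAND.TOTAL = 0
  LOOP
     READNEXT ID FROM 1 ELSE EXIT
     READ REC FROM ORD.F, ID ELSE CONTINUE
     IF REC<4> = 'SHIPPED' THEN     ;* test as each record is retrieved
        GRAND.TOTAL = GRAND.TOTAL + REC<7>
        PRINT ID : ' ' : REC<7>     ;* report line in the same loop
     END
  REPEAT
  PRINT 'Total: ' : GRAND.TOTAL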

Ed

The only real reason I can see for _really_ needing the extra data
structure or workfile is post sorting, perhaps on a calculated value -
say, sorting on a totalled value. You need a pass through the data first
to get that value.
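
For that case the first pass only has to build the totals; something
like this sketch, where WORK.FILE, the TOTAL dictionary item and the
field positions are all invented:

  * Pass 1: accumulate a total per customer into a workfile
  OPEN 'ORDERS' TO ORD.F ELSE STOP 'Cannot open ORDERS'
  OPEN 'WORK.FILE' TO WRK.F ELSE STOP 'Cannot open WORK.FILE'
  CLEARFILE WRK.F
  SELECT ORD.F TO 1
  LOOP
     READNEXT ID FROM 1 ELSE EXIT
     READ REC FROM ORD.F, ID ELSE CONTINUE
     CUST = REC<1>
     READ WREC FROM WRK.F, CUST ELSE WREC = 0
     WREC<1> = WREC<1> + REC<7>
     WRITE WREC ON WRK.F, CUST
  REPEAT
  * Pass 2: let the query processor post-sort on the calculated value
  EXECUTE 'SSELECT WORK.FILE BY.DSND TOTAL'
  * ...then READNEXT through the sorted list to produce the report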