Timothy Snyder wrote:

Richard Taylor <[EMAIL PROTECTED]> wrote on 05/13/2005 02:05:51 PM:



... Why take a flexible, dynamic database system and force it to
be fixed length? This is what you are doing using dimensioned arrays
and MATREADs. The most common justification I have heard is performance,
and this simply does not hold water.



Have you ever compared performance between dynamic and dimensioned arrays, or are you just saying that you've never noticed problems but have never tried dimensioned arrays? I've seen it make a HUGE difference in Pick, UniVerse, and UniData. If you reference many elements of a dynamic array many times, you'll burn a lot of CPU cycles just locating the data. When you reference an element of a dimensioned array, it's stored in its own address space and is referenced immediately.
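
Roughly, the difference looks like this in BASIC (the file name, record ID, and attribute number here are made up purely for illustration):

    OPEN '', 'CUSTOMERS' TO CUST.FILE ELSE STOP 201, 'CUSTOMERS'
    TOTAL.DYN = 0
    TOTAL.MAT = 0

    * Dynamic array: every REC<72> has to walk the record, counting
    * attribute marks, before it can return the value.
    READ REC FROM CUST.FILE, 'A100' ELSE REC = ''
    FOR I = 1 TO 10000
       TOTAL.DYN = TOTAL.DYN + REC<72>
    NEXT I

    * Dimensioned array: MATREAD splits the record into elements once,
    * so CUST.REC(72) is a direct lookup with no scanning.
    DIM CUST.REC(73)
    MATREAD CUST.REC FROM CUST.FILE, 'A100' ELSE MAT CUST.REC = ''
    FOR I = 1 TO 10000
       TOTAL.MAT = TOTAL.MAT + CUST.REC(72)
    NEXT I

The second loop pays the attribute-mark scan only once, at MATREAD time.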


I have a standard way to avoid problems with the last attribute folding into the highest array element. Just dimension the array one element larger than the highest attribute you reference in the program. So if the highest attribute you reference is number 72, dimension the array at 73 or higher. Where I used to work, we had an automated process that created file definitions, including standard equates and the code to dimension arrays. We always created the arrays at one more than the highest attribute, and never had problems. This won't be necessary in environments where the extra attributes are placed on element zero, but it won't hurt anything, either. That way your code will be portable.
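
As a minimal sketch of that convention (again, the file name, record ID, and the attribute count of 72 are only examples):

    OPEN '', 'CUSTOMERS' TO CUST.FILE ELSE STOP 201, 'CUSTOMERS'
    * Highest attribute this program references is 72, so dimension at 73.
    DIM CUST.REC(73)
    MATREAD CUST.REC FROM CUST.FILE, 'A100' ELSE MAT CUST.REC = ''
    * If the record carries more than 73 attributes, the overflow folds
    * into CUST.REC(73); elements 1 through 72 still map one-for-one
    * to attributes 1 through 72.

Since the generated equates only ever point at elements 1 through 72, anything folded into element 73 never gets in the way.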



On second-generation Pick, the payback for dimensioned arrays started at around 10-20 attributes. Back then the concern was not to oversize the array, because reading and writing the extra blank attributes slowed things down.

Didn't we hear or read recently that the new compiler and/or runtime keeps track of the positions of individual attribute marks in dynamic arrays, so that a full string scan isn't necessary on every reference?


Roger