Richard Taylor wrote:

>> Certainly not enough to justify throwing away one of the 
>> best features of the Pick database, or having to code 
>> work-arounds to deal with the short-comings of dimensioned 
>> arrays. 

I'm maintaining a system that was originally developed on Prime Information
in the mid-'80s, when performance between dimensioned and dynamic arrays WAS
an issue.  It uses dimensioned arrays and MATREAD/MATWRITE, but the way it
was designed, none of the "short-comings" you mention is really an issue.

Every file in the system has an abbreviated name.  For example, the
abbreviation for the customer master file (CUST.MST) is CM.  There is a
utility program that selects every 'D' item from the dictionary, and builds
an $INCLUDE file for all or selected files, named "DIM.(filename)" (e.g.
DIM.CUST.MST).

This DIM.CUST.MST file is included in every program that needs to access the
customer master file, and contains the following statements:

DIM D.CM(X) ; MAT D.CM = ''    ;* Where X = number of fields in the file
EQU CM.CUST.NAME TO D.CM(1)
;* ... and so forth for every field in the file
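
For what it's worth, a generator along those lines might look roughly like the
sketch below.  This is not the actual utility, just a hedged reconstruction:
the BP source file, the variable names, and the assumption that attribute 2 of
each 'D' item holds the field (attribute) number are all mine.

ABBREV = 'CM'                          ;* file abbreviation, per the convention above
FNAME = 'CUST.MST'                     ;* file whose dictionary is being read
OPEN 'DICT', FNAME TO F.DICT ELSE STOP 'Cannot open DICT ':FNAME
OPEN 'BP' TO F.BP ELSE STOP 'Cannot open BP'   ;* assumed home of the DIM.* records
*
SELECT F.DICT
MAX.AMC = 0 ; EQUS = ''
LOOP
   READNEXT ID ELSE EXIT
   READ DREC FROM F.DICT, ID THEN
      IF DREC<1>[1,1] = 'D' THEN       ;* D-type dictionary items only
         AMC = DREC<2>                 ;* attribute (field) number
         IF AMC > 0 THEN
            EQUS<-1> = 'EQU ':ABBREV:'.':ID:' TO D.':ABBREV:'(':AMC:')'
            IF AMC > MAX.AMC THEN MAX.AMC = AMC
         END
      END
   END
REPEAT
*
OUT = 'DIM D.':ABBREV:'(':MAX.AMC:') ; MAT D.':ABBREV:" = ''"
OUT<-1> = EQUS
WRITE OUT ON F.BP, 'DIM.':FNAME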

NOWHERE in any of the code is the customer name referenced as D.CM(1) or
CM<1> or anything similar.  It is ALWAYS referenced as CM.CUST.NAME.
Multi-valued fields are referenced as CM.ADDR<1,x>, for example.
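
So a typical lookup ends up reading like the minimal sketch below.  The record
key, the BP location of the include, and the value position in CM.ADDR are
invented for the example:

$INCLUDE BP DIM.CUST.MST               ;* brings in DIM D.CM(...) and the EQUates
OPEN 'CUST.MST' TO F.CM ELSE STOP 'Cannot open CUST.MST'
*
CUST.ID = '1001'                       ;* hypothetical customer key
MATREAD D.CM FROM F.CM, CUST.ID ELSE MAT D.CM = ''
*
PRINT 'Name    : ':CM.CUST.NAME        ;* never D.CM(1), always the EQUated name
PRINT 'Address : ':CM.ADDR<1,2>        ;* second value of the multi-valued address field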

As new fields are added to the file, the DIM inserts are simply re-created.
Because MATREAD stores any "extra" fields (those beyond the dimensioned size)
as a dynamic array in D.CM(0), programs that don't use the new fields don't
need to be recompiled.
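
In other words, an old program's MATREAD drops the attributes it wasn't
dimensioned for into D.CM(0), and (as this design relies on) they go back out
with the MATWRITE, so an update like the sketch below leaves the newer fields
untouched.  The key and the new value are again invented:

$INCLUDE BP DIM.CUST.MST
OPEN 'CUST.MST' TO F.CM ELSE STOP 'Cannot open CUST.MST'
*
CUST.ID = '1001'                       ;* hypothetical customer key
MATREAD D.CM FROM F.CM, CUST.ID THEN
   CM.CUST.NAME = 'ACME WIDGETS'       ;* change only the field this program knows about
   MATWRITE D.CM ON F.CM, CUST.ID      ;* fields beyond the DIM ride along in D.CM(0)
END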

Yes, I realize that the same thing can be accomplished with dynamic arrays
(i.e. EQU CM.CUST.NAME TO CM<1>), but as I mentioned at the beginning of this
post, this software was originally written back when there WAS a performance
difference between using dimensioned vs. dynamic arrays ... at least that's
what the conventional wisdom told us at the time.

Larry Hiscock
Western Computer Services