Larry,
Well, if you have to work with dimensioned arrays, that would be the way to
do it. Unfortunately, the code base I am working from took this idea and
completely messed it up. It has reached the point where the dictionaries
cannot be trusted to truly represent the data, and we are spending a great
deal of time just dealing with that.
As to the original topic, I will still stand by my earlier remarks, but I
will qualify them by saying that performance (in ANY system) has a lot to do
with how the system is designed in the first place. The code base I came
from previously was also of late-80s vintage, and we did not see any benefit
in moving to dimensioned arrays. I think the difference is that we had
records with a fairly manageable field count, but we used lots of value- and
even sub-value-marked data. Dimensioned arrays don't really help you too
much with that.
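To illustrate with a rough, made-up snippet (the field position is
hypothetical): the element of a dimensioned array is still just a dynamic
array inside, so you end up doing the same value/sub-value extraction
either way:
PHONE = REC<5,3>        ;* dynamic array: third value of field 5
PHONE = D.REC(5)<1,3>   ;* dimensioned array: same extraction on the element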
We also sold systems based on the flexibility of the database, and
dimensioned arrays, even with tools like the ones you describe, do lessen
that flexibility (IMHO).
Rich Taylor | Senior Programmer/Analyst| VERTIS
250 W. Pratt Street | Baltimore, MD 21201
P 410.361.8688 | F 410.528.0319
[EMAIL PROTECTED] | http://www.vertisinc.com
Vertis is the premier provider of targeted advertising, media, and
marketing services that drive consumers to marketers more effectively.
"The more they complicate the plumbing
the easier it is to stop up the drain"
- Montgomery Scott NCC-1701
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Larry Hiscock
Sent: Friday, May 13, 2005 5:57 PM
To: [email protected]
Subject: RE: [U2] I'm in an Array quandry, any suggestions...
Richard Taylor wrote:
>> Certainly not enough to justify throwing away one of the
>> best features of the Pick database, or having to code
>> work-arounds to deal with the short-comings of dimensioned
>> arrays.
I'm maintaining a system that was originally developed on Prime Information
in the mid-80s, when performance between dimensioned and dynamic arrays WAS
an issue. It uses dimensioned arrays and MATREAD/MATWRITE, but the way it
was designed, none of the "short-comings" you mention are really an issue.
Every file in the system has an abbreviated name. For example, the
abbreviation for the customer master file (CUST.MST) is CM. There is a
utility program that selects every 'D' item from the dictionary, and builds
an $INCLUDE file for all or selected files, named "DIM.(filename)" (e.g.
DIM.CUST.MST).
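A hedged sketch of what such a generator might look like (the file and
variable names here are hypothetical, and I'm assuming each 'D' item keeps
its field number in attribute 2 of the dictionary record):
* Illustrative only: build DIM.CUST.MST from DICT CUST.MST
OPEN 'DICT', 'CUST.MST' TO F.DICT ELSE STOP 'No DICT CUST.MST'
OPEN 'BP' TO F.BP ELSE STOP 'No BP file'
EXECUTE 'SELECT DICT CUST.MST WITH TYPE = "D"'
MAX.FLD = 0
OUT = ''
LOOP
   READNEXT ID ELSE EXIT
   READ DICT.REC FROM F.DICT, ID THEN
      FLD = DICT.REC<2>   ;* field number of this 'D' item
      IF FLD > 0 THEN
         IF FLD > MAX.FLD THEN MAX.FLD = FLD
         OUT<-1> = 'EQU CM.':ID:' TO D.CM(':FLD:')'
      END
   END
REPEAT
INS 'DIM D.CM(':MAX.FLD:') ; MAT D.CM = ""' BEFORE OUT<1>
WRITE OUT TO F.BP, 'DIM.CUST.MST'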
This DIM.xx file is included in every program that needs to access the
customer master file, and includes the following statements:
DIM D.CM(X) ; MAT D.CM = '' ;* Where X = number of fields in the file
EQU CM.CUST.NAME TO D.CM(1)
... And so forth for every field in the file
NOWHERE in any of the code is the customer name referenced as D.CM(1) or
CM<1> or anything similar. It is ALWAYS referenced as CM.CUST.NAME.
Sub-valued fields are referenced as CM.ADDR<1,x>, for example.
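For what it's worth, typical usage would look something like this (the open
logic and record ID are made up for illustration):
$INCLUDE BP DIM.CUST.MST
OPEN 'CUST.MST' TO F.CM ELSE STOP 'Cannot open CUST.MST'
CUST.ID = '1001'   ;* hypothetical record ID
MATREAD D.CM FROM F.CM, CUST.ID THEN
   PRINT CM.CUST.NAME   ;* really D.CM(1), via the EQU
   PRINT CM.ADDR<1,2>   ;* second value of the address field
   CM.CUST.NAME = 'NEW NAME'
   MATWRITE D.CM ON F.CM, CUST.ID
END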
As new fields are added to the file, the inserts are re-created. Because
"extra" fields are stored as a dynamic array in D.CM(0), programs that don't
use the new fields don't need to be recompiled.
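A rough illustration of why that works (assuming the Information-flavor
behaviour this system relies on, where MATREAD overflows excess fields into
element 0 and MATWRITE appends them back on the write):
DIM D.CM(10)   ;* program compiled when the file had 10 fields
MATREAD D.CM FROM F.CM, CUST.ID THEN
   * Fields 11+ of the record are now a dynamic array in D.CM(0)
   CM.CUST.NAME = 'NEW NAME'
   MATWRITE D.CM ON F.CM, CUST.ID   ;* fields 11+ written back untouched
END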
Yes, I realize that the same thing can be accomplished with dynamic arrays
(i.e. EQU CM.CUST.NAME TO CM<1>), but as I mentioned at the beginning of
this post, this software was originally written back when there WAS a
performance difference between using dimensioned vs dynamic arrays ... at
least that's what the conventional wisdom told us at the time.
Larry Hiscock
Western Computer Services
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/