Chuck,
Ask and ye shall receive!
As my primary mission was to determine whether the disparity in dynamic
array processing between UniVerse and UniData was a hardware or a
software issue, I really didn't get into testing for efficiency
(although I may work on that, just out of curiosity and completeness).
The final version of the test program came down to building - in memory
- a 30,000 element, 1 million character dynamic array. The build, using
<-1>, was comparably quick on both systems: less than 1 second on each,
indicating that both platforms handle appending to a dynamic array
equally well.
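To give an idea of the build, it was essentially a loop of the
following shape (a rough sketch only; the element contents below are
just filler, not what the actual test program stored):

   * Build a roughly 1 million character dynamic array of 30,000
   * elements by appending with <-1>.
   INSTRING = ''
   FOR M = 1 TO 30000
      INSTRING<-1> = STR('X',33)   ;* about 33 characters per element
   NEXT M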
The next step was to test the extraction of data from the array.
Initial tests using a sequential extraction of every element - using
<x>, not REMOVE - indicated that UniVerse did a MUCH better job of
handling this: something on the order of a couple of seconds for
UniVerse, and 3 minutes for UniData. (Note: the initial test program
mimicked the program that originally brought this to the forefront,
which actually did two extractions of each field.) I tried randomizing
the retrieval of elements to reduce the advantage of the internal
tracking that UniVerse does, as well as retrieving the data as <1> then
<elements/2 + 1>. While both narrowed the gap, neither did so
dramatically enough to make a convincing argument that the "problem"
was software rather than a hardware/configuration issue.
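For comparison with the final test below, the straight sequential
extraction looked roughly like this (a sketch only; the real test did
two extractions of each field). Because INSTRING itself is indexed with
<M> on every pass, a platform that remembers where the previous scan
ended can continue from that point, while one that rescans from the
front of the string does more and more work as M grows:

   * Straight sequential extraction: INSTRING is indexed with <M> each
   * pass, so a cached scan position pays off here.
   UPPERLIMIT = DCOUNT(INSTRING,@FM)
   FOR M = 1 TO UPPERLIMIT
      OUTSTRING<-1> = INSTRING<M>
   NEXT M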
My final testing of the extraction process made use of the following loop:
   UPPERLIMIT = DCOUNT(INSTRING,@FM)
   FOR M = 1 TO UPPERLIMIT
      PULLSTRING = INSTRING   ;* fresh copy each pass defeats any cached scan position
      OUTSTRING<-1> = PULLSTRING<M>
   NEXT M
This, I felt, would eliminate most of the benefit of whatever internal
array position tracking UniVerse was performing. With this test, the
processing of the above loop took 54 seconds on the UniVerse platform,
and 1 minute 40 seconds (100 seconds) on UniData; much closer in
performance! I think the remaining difference can be attributed to a
combination of the database platforms and the differences in the
hardware platforms (UniVerse: dual-processor Intel running Linux;
UniData: 6-processor p570 under AIX; yes, the p570 has more processing
horsepower in total, but, as neither database nor the testing program
is parallelized, the extra processors don't affect the performance of
this individual program; only the individual power of each processor
type does).
In summary: the original program that brought this up took a couple of
seconds to run under UniVerse, and 38 minutes under UniData. The first
test program, using sequential extraction of each element, took a
couple of seconds under UniVerse, and 3 minutes under UniData (roughly
60 times slower). The final test, using the loop above, took 54 seconds
on UniVerse, and 100 seconds on UniData (just under two times slower).
I hope some have found this information helpful. If I get the chance,
I'll try to do more thorough testing of the various dynamic array
extraction methods (EXTRACT vs REMOVE, primarily) on each platform to
help identify good programming practices.
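For anyone who wants to try the loop-REMOVE method Chuck mentions, it
would look roughly like this (a sketch only, assuming a non-empty
INSTRING; REMOVE keeps its own pointer into the string, so no element
requires rescanning from the front):

   * REMOVE-based extraction: REMOVE maintains an internal position in
   * INSTRING, pulling each element in turn without rescanning.
   OUTSTRING = ''
   LOOP
      REMOVE ELEM FROM INSTRING SETTING DELIM
      OUTSTRING<-1> = ELEM
   WHILE DELIM DO REPEAT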
Drew
Stevenson, Charles wrote:
Drew,
I sure do hope you report back on what you find, & your final
resolution.
It's been an interesting thread. Academic for us, painful for you.
When you do the <i> vs <n/2+i> extraction test, you might want to
compare it to
a loop-remove method, too. On UV, sequential <i> extractions are about
as fast as loop-remove when attribute marks are the only system
delimiter used.
Chuck Stevenson
--
----------------------------------------------------------------------
Drew Henderson
Dir. for Computer Center Operations
[EMAIL PROTECTED]
110 Ginger Hall
Morehead State University
Morehead, KY 40351
Phone: 606/783-2445   Fax: 606/783-5078

"There are two types of people - those who do the work and those who
take the credit. Try to be in the first group, there is less
competition."  - Indira Gandhi
----------------------------------------------------------------------
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/