I do not know what is causing the performance problem with Unidata and indexed fields using large keys. In the case of this particular file, there are translated fields that have to be built separately from the other 'D' type fields. The field in question, the one I was selecting on, was a date field, and the date range was one day. What I ended up doing was a FOR/NEXT loop selecting the index for each day in the date range. Dates in this file are fairly densely used, so there are not many days with no data. Regardless, the direct index select is much faster than the date-range select.

This is very fast code in Unidata:

SUBROUTINE SELECT.DATE.RANGE(START,LAST,FILE)
* Select the DATE index for each day in the range and merge the IDs
* into one list, which is left active as select list 0.

DATE.LIST = ""
FOR DATE = START TO LAST
   SELECTINDEX "DATE", DATE FROM FILE TO 1   ;* index lookup for this day
   READLIST LIST FROM 1 ELSE LIST = ""
   IF LIST ELSE CONTINUE                     ;* no records for this day
   DATE.LIST<-1> = LIST
NEXT DATE

FORMLIST DATE.LIST TO 0                      ;* activate the combined list

RETURN

However, I would prefer to do this:

SELECT FILE WITH DATE >= 'START' AND DATE <= 'LAST'
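
For reference, here is roughly how that single statement could be driven from BASIC (a sketch only: FILE.NAME holding the file's name and a D2/ conversion on the DATE dictionary item are assumptions, not part of the code above):

* Sketch: build and EXECUTE the range select, leaving select list 0 active,
* just as FORMLIST does in the subroutine. FILE.NAME and the D2/ conversion
* are assumptions; START and LAST are taken to be internal-format dates.
QRY = "SELECT ":FILE.NAME:" WITH DATE >= '":OCONV(START,"D2/"):"'"
QRY := " AND DATE <= '":OCONV(LAST,"D2/"):"'"
EXECUTE QRY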


----- Original Message -----
From: "Brian Leach" <[email protected]>
To: "'U2 Users List'" <[email protected]>
Sent: Sunday, November 08, 2009 8:37 AM
Subject: Re: [U2] Unidata - Key Structure - Compound Keys or Sequential


Hi Brad

Interesting, but I wonder about another possibility..

As a partial field, the select would be using an I-descriptor or
V-descriptor. I wonder whether the overhead of that is the problem rather
than the data structure: if you created a V-type to access the date field
in the new file (e.g. using EXTRACT(fieldname,0,0)), would it show the same
difference in performance?
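
(For anyone wanting to try this, a rough sketch of writing such a V-type dictionary item from BASIC follows; the file name NEWFILE, the item name DATE.V, the field name TRANS.DATE, and the conversion and format attributes are illustrative, not from this mail, and the exact dictionary layout may vary by release.)

* Sketch only: write a V-type (virtual) dictionary item on NEWFILE whose
* formula is along the lines suggested above. NEWFILE, DATE.V, TRANS.DATE
* and the conversion/format values are assumptions for illustration.
OPEN "DICT","NEWFILE" TO F.DICT ELSE STOP "Cannot open DICT NEWFILE"
REC = ""
REC<1> = "V"                          ;* virtual (calculated) field
REC<2> = "EXTRACT(TRANS.DATE,0,0)"    ;* formula form suggested above; TRANS.DATE is hypothetical
REC<3> = "D2/"                        ;* output conversion for a date
REC<4> = "DATE"                       ;* column heading
REC<5> = "10R"                        ;* display format
REC<6> = "S"                          ;* single-valued
WRITE REC ON F.DICT,"DATE.V"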

Just wondering..

Brian

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of BraDav
Sent: 07 November 2009 20:10
To: U2 Users List
Subject: [U2] Unidata - Key Structure - Compound Keys or Sequential

Subject: Keys, Large Transaction Files

Just recently ran into an interesting phenomenon - I was working with a file
with compound keys, and the selects over a date range were atrocious. I copied
the data to a new file using sequential keys, and the selects averaged
200-2000x faster (for the doubters, I have to say the actual numbers were
something like 197x to 2070x, the second number being a second select after
the data was cached). The average key length on the original file was 32
characters; the average length of a sequential key was about 5 characters.
The field was a 'date' field, and it was indexed. The range of the select was
2 days. It seems there is a threshold in Unidata that large key sizes exceed
with indexing, and that kills performance.
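
(For context, roughly the kind of test described above, run from BASIC; the file name TRANS.SEQ, the field name DATE, and the dates are illustrative, not the actual ones used.)

* Sketch of the comparison described above: index DATE on the copy with
* sequential keys, then select a two-day range. TRANS.SEQ, DATE and the
* dates shown are placeholders only.
EXECUTE "CREATE.INDEX TRANS.SEQ DATE"
EXECUTE "BUILD.INDEX TRANS.SEQ DATE"
EXECUTE "SELECT TRANS.SEQ WITH DATE >= '11/05/2009' AND DATE <= '11/06/2009'"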

Also, sequential keys hash the best.  I managed a file with 80M records at
another site and had no problems with file sizing or overflow.

Brad

