Thanks Ray, and all of those who replied to my questions. I wound up choosing the 64-bit option. For the record, I am in favor of using distributed files; I have many situations where I use them. However, I really didn't have the time to come up with a good algorithm to achieve even distribution. The file is made up of 50-transaction chunks of inventory history. One item can have numerous records. It looks like:
Internal Part# = sequential number assigned at item creation time. Field 4 of the master record is a counter of history records. So, if part# 1 has field 4 = 2, I would have...

Key = 1*1
  <1>  Stockroom   (MV associated to 1, max of 50)
  <2>  Trans Qty   (MV associated to 1, max of 50)
  <3>  Trans Uom   (MV associated to 1, max of 50)
  ...
  <25> Tran Type   (MV associated to 1, max of 50)

Key = 1*2
  <1>  Stockroom   (MV associated to 1, max of 50)
  <2>  Trans Qty   (MV associated to 1, max of 50)
  <3>  Trans Uom   (MV associated to 1, max of 50)
  ...
  <25> Tran Type   (MV associated to 1, max of 50)

When I have time, I will be changing to something like 15 or 20 transactions per record to decrease the record sizes. I am also going to change to a distributed file so that maintenance becomes less time consuming.

Ray, you mentioned changing to a separation of 32 to get around performance hits when accessing the file. I thought the maximum recommended separation was 16? Has this changed?

Thanks again to all who responded in my moment of need.

Scott

> Dynamic files are also subject to the 2GB limit. The internals of static hashed
> and dynamic hashed files are exactly the same, except for the location of
> secondary group buffers. The decision about growing and shrinking the number of
> primary group buffers in dynamic files is external to the file structure, but
> requires that the secondary group buffers are in a separate file (OVER.30) so
> that the primary group buffers (in DATA.30) can increase.
>
> So dynamic files are not a solution to the 2GB problem.
>
> You may have been confusing them with distributed files. A distributed file is
> primarily a logical entity that acts as an "umbrella", containing one or more
> (static or dynamic) hashed files called part files. The individual part files
> can be accessed as usual.
> To define a distributed file there must be some attribute of the key in the
> part files that can be used to make the decision about which part file a
> record belongs in. This may require a bit of design work, and reallocation of
> records to the correct part files before defining the distributed file.
>
> You can have as many part files as you desire in a distributed file. However,
> each part file remains limited to 2GB if 32-bit addressing remains in place.
> The individual part files are managed (inspected, RESIZEd, etc.) as usual.
>
> Hashed files with 64-bit addressing can go over the 2GB limit. You can convert
> your hashed file to 64-bit addressing with RESIZE (RESIZE filename * * * USING
> dirpath). The theoretical upper limit is approximately 19 million TB, but some
> operating systems restrict you to 1 million TB. Applying 64-bit addressing does
> not absolve you from the responsibility of periodic RESIZE of hashed files, and
> much larger files will clearly take longer. You will also find that UVFIXFILE
> does not support files with 64-bit addressing; you will need to get your head
> around the new file fixing tool.
>
> With records that size I'd also be looking at the separation figure. It's a
> really awkward record size for storing in hashed files. You need a large
> separation (perhaps 32); otherwise many - most - of your records will be
> treated as oversized, incurring an I/O penalty when accessing them. For
> dynamic files, the best you can achieve is 4KB groups, which militates
> against this choice.
>
> ----- Original Message -----
> From: [EMAIL PROTECTED]
> Date: Thu, 29 Jan 2004 19:28:16 +0000
> To: [EMAIL PROTECTED] (u2-Users)
> Subject: [UV] Resize - Dynamic or 64 bit?
>
> Hi all,
>
> UV 9.6 / HPUX 11
>
> I have a hashed file approaching the 2 gig limit. I need some help determining
> whether to go with the dynamic or 64 bit option.
>
> Here are some specifics.
> > The file is our inventory history file which is, as you can imagine, used
> > heavily. Approximately 85 percent of the records are 4K in size.
> >
> > I have always frowned upon using dynamic files because they seem to be slower
> > compared to hashed. Maybe because I have never attempted to figure out how to
> > tune them.
> >
> > Can anyone give me the pros/cons of using the 64 bit versus dynamic option?
> >
> > Thanks in advance,
> >
> > Scott
> > _______________________________________________
> > u2-users mailing list
> > [EMAIL PROTECTED]
> > http://www.oliver.com/mailman/listinfo/u2-users
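To make the sizing discussion above concrete, here is a rough Python sketch of the arithmetic involved: the 2GB ceiling imposed by 32-bit addressing, the group buffer size implied by a separation value (separation is counted in 512-byte disk blocks), and a simplified oversize test. Note the caveats: the real oversize threshold in UniVerse also accounts for group and record headers (which is part of why a separation of 32 is suggested for ~4K records, even though 4096 bytes nominally fits an 8KB group), and the `part_file_for` routine is purely hypothetical, illustrating one way a `part#*sequence` key could be routed to part files in a distributed file.

```python
# Sketch of the sizing arithmetic discussed in this thread.
# Simplifying assumptions (not from the UniVerse docs): a record is
# "oversized" when it exceeds one group buffer; separation is counted
# in 512-byte blocks. Real UniVerse reserves header space per group
# and per record, so the practical threshold is lower than shown here.

MAX_32BIT_FILE = 2 ** 31          # 2GB ceiling with 32-bit addressing


def group_buffer_bytes(separation):
    """Group buffer size implied by a separation value (512-byte blocks)."""
    return separation * 512


def is_oversized(record_bytes, separation):
    """Simplified oversize test: record exceeds one group buffer."""
    return record_bytes > group_buffer_bytes(separation)


def part_file_for(key, n_parts):
    """Hypothetical routing of a 'part#*seq' key to a part file,
    using the part number modulo the number of part files."""
    part_number = int(key.split("*")[0])
    return part_number % n_parts


if __name__ == "__main__":
    # A 4K record fits (nominally) in a separation-16 group of 8KB,
    # but not a separation-4 group of 2KB; dynamic files top out at
    # 4KB groups, which is exactly the awkward case for 4K records.
    for sep in (4, 8, 16, 32):
        print(sep, group_buffer_bytes(sep), is_oversized(4096, sep))
    print(part_file_for("1*2", 10))   # part# 1 -> part file 1
```

Under this (simplified) model you can see why the 4K record size is awkward: it is right at the boundary of a 4KB dynamic-file group, so nearly every record spills, while a separation of 32 leaves comfortable headroom.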
