-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Andre Meij
Sent: Saturday, March 10, 2007 10:51 AM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Dynamic files, big transactions
Rick, Charles, and others,
Thank you for the quick responses and for the help; it is very much appreciated.
Regards,
Andre Meij
Innovate-IT
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Stevenson, Charles
Sent: Saturday, March 10, 2007 7:04 AM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Dynamic files, big transactions
Hi Andre,
It is hard to suggest a best approach without understanding your
application. Usually, if you are going to lock 250 records at the same time
in the same file, you should consider escalating to locking the whole file.
This is a bigger problem for RDBMS, which is why they prefer
Andre,
This seemed very strange, since normally 250 record keys would never
hash into the same group of a dynamic file.
The exception might be if you created a new part file and then sought
to lock and add a large group of records at one time. The new
dynamic part file would have a
UniVerse's CONFIGURE.FILE does nothing apart from update the file header. You
need RESIZE filename * * * USING directorypath to actually effect an immediate
change.
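For reference, the distinction at TCL looks roughly like this (CUSTOMERS and the work directory are placeholder names, and exact keyword availability varies by UniVerse release, so treat this as a sketch rather than exact syntax):

```
CONFIGURE.FILE CUSTOMERS MINIMUM.MODULUS 94001
RESIZE CUSTOMERS * * * USING /uv/resize.work
```

The first command only rewrites the file header; the second rebuilds the file immediately, using the named directory for scratch space.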
---
u2-users mailing list
u2-users@listserver.u2ug.org
To unsubscribe please visit http://listserver.u2ug.org/
Andre,
I'm with Rick. He suggested a new partfile. But maybe some kind of
queue or workfile that routinely gets flushed, merging back to modulo 1.
And maybe zero-length or very small records, so that 250 ids all land in
the same group? Is the group size 4KB?
What does that have to do with the lock table
On Tuesday 31 January 2006 22:27, Timothy Snyder wrote:
For the large file in its dynamic form, is most of the space consumed by
the dat* or the over* files? If the former, you may just be wasting
space. If the latter, you have some file configuration
dat files - I know I'm wasting space.
I would recommend keeping your files static right up to the point that they
hit 2 gig. A well-sized static file will run faster and have less overhead
in my opinion. Also, if you do go the dynamic route, I would recommend
resizing them once a year as well.
-Original Message-
From: [EMAIL PROTECTED]
Sorry if this is a bit thick, I'm a UV guy rather than a UD guy. But when you
said "I've tried changing split/merge loads from the default of 60/40 to 20/10",
wouldn't that make the file much bigger?
I don't know if UD works differently but in UV this would mean a new group was
created every
- Original Message -
From: [EMAIL PROTECTED]
To: u2-users@listserver.u2ug.org
Sent: Tuesday, January 31, 2006 9:15 PM
Subject: [U2] Dynamic Files
...
However, the few files I have moved to dynamic hashing are ridiculous in
size. I'm obviously setting some file parameters wrong, but
My 2 Cents,
Split load and merge load are set to 60/40 by default for KEYONLY
dynamic files. That is to say, when 60% of a group is filled with
keys, it can split. (Note that the actual condition for when a split
occurs takes more factors into consideration.) So splitting of groups
will create
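To make the split-load idea concrete, here is a minimal Python model. It is not UniData's actual algorithm (as noted above, the real check considers more factors); the 1 KB group size and the pure percent-full test are simplifying assumptions.

```python
# Hedged sketch: how a split-load threshold drives dynamic-file growth.
# GROUP_SIZE and the percent-full-only check are assumptions for
# illustration, not a reproduction of UniData's internal logic.

GROUP_SIZE = 1024          # assumed bytes per group (block size)

def should_split(bytes_used, split_load_pct):
    """Return True when the group's fill ratio exceeds the split load."""
    return bytes_used * 100 > GROUP_SIZE * split_load_pct

# With the default split load of 60, a 1 KB group splits once it holds
# more than ~614 bytes of keys; lowering the split load to 20 makes it
# split at ~205 bytes, so far more groups (and far more disk space) are
# created for the same data.
print(should_split(650, 60))   # True  - past 60% full
print(should_split(300, 60))   # False - under 60% full
print(should_split(300, 20))   # True  - a lower split load splits earlier
```

This is why pushing the split/merge loads down from 60/40 toward 20/10 tends to make the file much bigger, as questioned elsewhere in the thread.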
This may help (or perhaps not)
In teaching Unidata courses I have heard several comments over the last year
or so about large dynamic files that have apparently stopped working
correctly. The two that I have been able to examine in any detail both
showed that the system appeared to be
Dynamic vs Hash Statistics
UniData Version 6.0.600-PE
Test Date: Wednesday, September 14, 2005
The following benchmarks are from a small SA file of 4,476 records.
Process     Dynamic   Hash (Under)   Hash (Good)   Hash (Over)
Deleting        357        304            327           761
Adding          430        471            414           901
At 10:16 AM 2/1/2006, you wrote:
How you've gone from 75MB to 4.3GB I can't explain though!!
You don't say what version of UniData you are on but I am going to
make an educated guess that it is an older release. I am also going
to guess that you used the default hash type when you created
James Cowell wrote on 02/01/2006 11:16:36 AM:
Sorry if this is a bit thick, I'm a UV guy rather than a UD guy.
But when you said "I've tried changing split/merge loads from the
default of 60/40 to 20/10", wouldn't that make the file much bigger?
I don't know if UD works differently but in UV
However, the few files I have moved to dynamic hashing are ridiculous in
size. I'm obviously setting some file parameters wrong, but would like
insight from anyone who has good luck...
The file STC.HIST as a dynamic file takes up 4.3Gig of disk space. It
has around 944,000 records, a
[EMAIL PROTECTED] wrote on 01/31/2006 09:15:09 PM:
The file STC.HIST as a dynamic file takes up 4.3 Gig of disk space. It
has around 944,000 records, a blocksize of 1024, but a modulo of
4,000,000+. When I convert this to a static file, I can properly size
it with a modulo of around 94,000.
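To make that sizing arithmetic concrete, here is a minimal Python sketch of conventional static-file modulo estimation. The 80% fill target, the assumed ~80-byte average record, and the round-up-to-a-prime step are my assumptions about standard sizing practice, not the output of any U2 utility.

```python
# Hedged sketch: estimate a static-file modulo from record count and
# average record size. Numbers mirror the thread's STC.HIST example
# (944,000 records, 1 KB block size); fill target and prime rounding
# are conventional practice, assumed here for illustration.
from math import ceil

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def next_prime(n):
    """Smallest prime >= n."""
    while not is_prime(n):
        n += 1
    return n

def estimate_modulo(records, avg_rec_bytes, block_size=1024, fill=0.8):
    # Groups needed so each block is only ~fill full, then round up
    # to a prime to help the hashing spread keys evenly.
    groups = ceil(records * avg_rec_bytes / (block_size * fill))
    return next_prime(groups)

# With ~80-byte records this lands in the same ballpark as the ~94,000
# modulo the poster reached when converting STC.HIST to a static file:
print(estimate_modulo(944_000, 80))
```

The contrast with the dynamic file's modulo of 4,000,000+ shows how far the split behavior had overshot the space the data actually needed.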
Jeffrey,
For dynamic files with records where the record size is large relative to
the key size, I would go with KEYONLY rather than KEYDATA. We have had
clients whose files (as KEYDATA) have split repeatedly with large records,
so that a file that was 7 gig on Monday would be 19 gig by