If you are seeing on the order of 2,500,000 empties, then your key values
are not hashing well under type 18.
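
For what it's worth, here is a toy Python model of that failure mode. The
additive hash and the key layout are invented for illustration; UniVerse's
actual type-18 algorithm is different. Because every sample key has the
same length and is mostly digits, the byte sums fall in a narrow range, so
a few groups get huge while millions stay empty:

from collections import Counter

def group_for(key: str, modulo: int) -> int:
    # Naive additive hash (illustration only, NOT the real type-18 hash):
    # order-insensitive and narrow-ranged, so structured keys clump badly.
    return sum(key.encode("ascii")) % modulo

modulo = 3000017                               # groups, as in FILE.STAT below
keys = [f"{cust:06d}*{inv:08d}*{line:03d}"     # made-up compound key layout
        for cust in range(100)
        for inv in range(50)
        for line in range(18)]                 # 90,000 sample keys

counts = Counter(group_for(k, modulo) for k in keys)
print("occupied groups:", len(counts))         # a few hundred at most
print("empty groups   :", modulo - len(counts))
print("largest group  :", max(counts.values()))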
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of roy
Sent: Wednesday, October 17, 2007 9:52 AM
To: u2-users@listserver.u2ug.org
Subject: [U2] Size of key question
Ross Ferris
Stamina Software
Visage Better by Design!
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:owner-u2-[EMAIL PROTECTED]] On Behalf Of roy
Sent: Wednesday, 17 October 2007 11:52 PM
To: u2-users@listserver.u2ug.org
Subject: [U2] Size of key question
Sent: Wednesday, 17 October 2007 3:45 AM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Size of Key Question
Random reads and updates on a file with ~2 million records. I separated
the reads and writes to a separate program that only does this
processing, to no avail. Topas shows 100% disk usage.
________________________________
From: Roy Beard [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 16, 2007 10:33 PM
To: 'u2-users@listserver.u2ug.org'
Subject: File key question
Wow!

I got so many ideas from this group that I thought I would give some more
information.
This file is a distributed file in 2 parts.
My thought on seeing the keys is that the key is being used for things
that should be in the data, not the key... My feeble 2-bits.
Karl
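
To make Karl's point concrete, here is a hypothetical sketch in Python
(the key layout and field names are invented, not taken from the actual
application): move the parts packed into the compound key out into
ordinary data fields, and let a short sequential id be the key that gets
hashed.

# Hypothetical: a long compound key carrying attributes ...
compound_key = "BR1*000123*20071016*INV*0042"   # invented layout
branch, customer, date, doc_type, line = compound_key.split("*")

# ... becomes a short sequential key plus ordinary (indexable) fields.
record = {
    "@ID": "1000001",
    "BRANCH": branch,
    "CUSTOMER": customer,
    "DATE": date,
    "DOC.TYPE": doc_type,
    "LINE": line,
}
print(record)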
________________________________
From: Roy Beard [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 16, 2007 12:17 PM
To: 'u2-users@listserver.u2ug.org'
Subject: Size of Key Question
Can someone comment on what effect, if any, the length of the key has on
the speed of disk access? The software I am working with has ...
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Jeff Fitzgerald
Sent: Tuesday, October 16, 2007 1:14 PM
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Size of Key Question

I wouldn't expect much difference in file access speed with long record
keys versus short keys. What are you doing with the file that seems
slow? -- i.e. random reads of individual records, updates, sequential
selects and processing, etc. If the slowness is seen in an application
program, are ... affected?
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of john reid
Sent: Tuesday, October 16, 2007 2:17 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] Size of Key Question
and the FILE.STAT?
In general, the main problem with large compound keys is that they do not
hash well; by "hash well" I mean that they do not hash to proximate
groups as, for example, sequential numeric keys would. There is
read-ahead logic and RAM in your disk drive(s). There is read-ahead
logic and RAM ...
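
A small Python illustration of the proximity point (a made-up polynomial
hash, not the real UniVerse algorithm): consecutive numeric keys land in
consecutive groups, so one read-ahead window covers many of them, while
compound string keys scatter, so every read is its own seek.

def group_numeric(key: int, modulo: int) -> int:
    # Sequential numeric keys map to adjacent groups.
    return key % modulo

def group_string(key: str, modulo: int) -> int:
    # Polynomial string hash: adjacent keys scatter (illustration only).
    h = 0
    for ch in key.encode("ascii"):
        h = (h * 31 + ch) % (2 ** 31)
    return h % modulo

modulo = 1013
print([group_numeric(k, modulo) for k in range(100100, 100105)])
# five adjacent groups -> one read-ahead window covers them

print([group_string(f"BR1*{k}*INV", modulo) for k in range(100100, 100105)])
# five scattered groups -> five separate seeks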
To: u2-users@listserver.u2ug.org
Subject: RE: [U2] Size of Key Question

File name = SALES-HIST-BR1
File type = 18
Number of groups in file (modulo) = 3000017
Separation = 1
Number of records = 883026
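
Plugging those numbers in (assuming the modulo really is 3,000,017, as
the analysis further down the thread indicates):

modulo  = 3_000_017   # groups, from the FILE.STAT above
records = 883_026     # records, from the analysis below

print(f"average records per group: {records / modulo:.3f}")
print(f"empty groups even with a perfect hash: {modulo - records:,}")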
http://www.ibm.com/developerworks/edu/dm-dw-dm-0611baldridge-i.html
Registration is required, but free.
Subject: RE: [U2] Size of Key Question
Date: Tue, 16 Oct 2007 19:07:18 -0400
From: [EMAIL PROTECTED]
To: u2-users@listserver.u2ug.org

This is a pretty ugly file! Here's what I see:
1. Jeff F. will certainly have better critique, but it appears that the
key structure and hash algorithm aren't very well suited to each other.
You have 883,026 records in 3,000,017 groups, and one of the groups has
7,417 records in it, so you have at least 2,124,407 empty groups. I
believe every ...
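
The arithmetic behind that lower bound: the 7,417 clumped records occupy
a single group, and every other record could at best have a group to
itself, so all remaining groups must be empty.

modulo  = 3_000_017
records = 883_026
biggest = 7_417       # records in the most crowded group

occupied_at_most = (records - biggest) + 1   # = 875,610 groups
print(modulo - occupied_at_most)             # = 2,124,407 empty, minimum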