Actually, if you look at the minimum modulus of the dynamic files, you'll see
that they've grown from, for example, 337 groups to a current modulus of
545,607, so it's likely they're not empty at all. Splits are a very expensive
operation. You can speed these files up a little by oversizing them so they
don't have to split as they grow, but I'd be surprised if this were the
problem.
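
If you do want to oversize, raising the minimum modulus is the usual approach.
A sketch using ALPHA's current numbers from your stats below (check the exact
keyword syntax against your UniVerse release before running it):

CONFIGURE.FILE ALPHA MINIMUM.MODULUS 545607

That stops the file from merging back below its working size and, on a
reload, saves it from splitting its way up from a tiny modulus.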

AIX caching can be controlled, roughly, but I'd be surprised if this were the
problem, either.
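
For completeness, the knobs live in the VMM tunables. A quick way to see
where file caching currently sits (the exact tunable names vary a little by
AIX level):

vmo -a | grep -E "minperm|maxperm|maxclient"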

Let's see the following from AIX:

vmstat -v

ioo -a | grep maxpgahead

maxpgahead should be set to 128, especially if you're sequentially processing
large files. This is yet another reason to minimize overflow: contiguous
reads are faster.
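
If it comes back lower than that, something like this should raise it; -p
makes the change persistent across reboots (double-check against your AIX
level, and note that on JFS2 filesystems the equivalent tunable is
j2_maxPageReadAhead):

ioo -p -o maxpgahead=128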

How are you copying? I.e., a BASIC read from file A and write to file B, a
UniVerse SELECT and COPY, AIX cp...?

vmstat -v should be taken at two points in time, say an hour apart, at a busy
time; taking one during a copy would be ideal. If any of the "I/O blocked
with no xbuf" values are increasing, that would indicate a problem.
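
A simple way to capture the pair for comparison (the scratch file names are
just placeholders):

vmstat -v > /tmp/vmstat.before
sleep 3600
vmstat -v > /tmp/vmstat.after
diff /tmp/vmstat.before /tmp/vmstat.after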

Do you have nmon running? It would be interesting to see whether your load is
spread among all the disks or you're hammering just a few. Maybe there's a
sysadmin around who could verify that your disks are actually striped.
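
If nmon isn't already recording, a capture along these lines (from memory:
-f writes to a file, -s is the snapshot interval in seconds, -c the snapshot
count, so this is an hour's worth) would show whether the I/O is balanced:

nmon -f -s 60 -c 60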

What is the path you're copying the file(s) across? Fibre? 10Base-T?
> Subject: RE: [U2] UniVerse and/or AIX caching
> Date: Fri, 24 Oct 2008 10:43:36 +1100
> From: [EMAIL PROTECTED]
> To: [email protected]
>
> Thanks Louis
>
> It would probably take hours, but "STATS" option would be good... Do
> they really need to be dynamic files? Is the data largely static? Does
> it grow often? Can you separate non-current data and utilise
> distributed files perhaps?
>
> I'm guessing that the large dynamic files are largely empty
> space/groups?
>
> Regards
> David
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Louie Bergsagel
> Sent: Friday, 24 October 2008 9:38 AM
> To: [email protected]
> Subject: Re: [U2] UniVerse and/or AIX caching
>
> Several files, but these are representative:
>
> >ANALYZE.FILE ALPHA (reads & writes)
> File name ..................  ALPHA
> Pathname ...................  ALPHA
> File type ..................  DYNAMIC
> Hashing Algorithm ..........  GENERAL
> No. of groups (modulus) ....  545607 current ( minimum 337 )
> Large record size ..........  1619 bytes
> Group size .................  2048 bytes
> Load factors ...............  80% (split), 50% (merge) and 80% (actual)
> Total size .................  1,379,911,680 bytes
>
> ANALYZE.FILE BETA (reads & writes)
> File name ..................  BETA
> Pathname ...................  BETA
> File type ..................  DYNAMIC
> Hashing Algorithm ..........  GENERAL
> No. of groups (modulus) ....  233098 current ( minimum 1607 )
> Large record size ..........  3257 bytes
> Group size .................  4096 bytes
> Load factors ...............  80% (split), 50% (merge) and 80% (actual)
> Total size .................  1,197,477,888 bytes
>
> >file.stat DELTA (reads only)
> File name = DELTA
> File type = 18
> Number of groups in file (modulo) = 99013
> Separation = 4
> Number of records = 460434
> Number of physical bytes = 240564224
> Number of data bytes = 171851992
>
> Average number of records per group = 4.6502
> Average number of bytes per group = 1735.6508
> Minimum number of records in a group = 2
> Maximum number of records in a group = 6
>
> Average number of bytes per record = 373.2391
> Minimum number of bytes in a record = 176
> Maximum number of bytes in a record = 2304
>
> Average number of fields per record = 109.8444
> Minimum number of fields per record = 108
> Maximum number of fields per record = 231
>
> Groups full:  25%   50%    75%    100%   125%   150%   175%   200%
>               0     1948   28741  49890  17083  1315   33     3
>
> >FILE.STAT ECHO (reads only)
> File name = ECHO
> File type = 18
> Number of groups in file (modulo) = 16453
> Separation = 4
> Number of records = 55176
> Number of physical bytes = 42856448
> Number of data bytes = 28404000
>
> Average number of records per group = 3.3536
> Average number of bytes per group = 1726.3721
> Minimum number of records in a group = 1
> Maximum number of records in a group = 5
>
> Average number of bytes per record = 514.7890
> Minimum number of bytes in a record = 80
> Maximum number of bytes in a record = 6400
>
> Average number of fields per record = 155.9890
> Minimum number of fields per record = 47
> Maximum number of fields per record = 366
>
> Groups full:  25%   50%    75%    100%   125%   150%   175%   200%
>               95    1617   4452   5861   3406   861    145    16
>
> On Thu, Oct 23, 2008 at 2:05 PM, Hona, David S <[EMAIL PROTECTED]>
> wrote:
>
> > What are the file stats for this UV file? Just curious.