You've gotten some good input, but I'd like to recommend dumping RAID 5
altogether. With several clients I've seen significant performance
increases from switching from (recommended) RAID 5 arrays to what may be
referred to as RAID 10 (RAID 0+1/1+0) arrays. This type of array mirrors
pairs of striped disks (or vice versa) and will definitely perform
better than a RAID 5 array. I don't recall specific numbers, but I do
remember clients smiling after making the change, replacing the frowns
your client apparently has now.

RAID 5 maintains parity, which requires multiple reads and writes for
each write operation. Removing that overhead alone can boost
performance by as much as 8 to 20 percent, depending on the
configuration and how many spindles are involved.
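To put rough numbers on that parity overhead, here's a back-of-the-envelope sketch. The disk count and per-spindle IOPS below are made-up illustrative figures, not measurements from any client system; the write penalties are the classic textbook values for small random writes:

```python
# Rough comparison of small random write cost between RAID levels.
# Classic write penalties: a RAID 5 small write costs 4 disk I/Os
# (read old data, read old parity, write new data, write new parity);
# a RAID 10 write costs 2 (one write to each side of the mirror).

def effective_write_iops(spindles, iops_per_disk, write_penalty):
    """Aggregate small random write IOPS an array can sustain."""
    return spindles * iops_per_disk // write_penalty

disks = 8          # assumed array size (illustrative)
disk_iops = 150    # assumed per-spindle random IOPS (illustrative)

raid5 = effective_write_iops(disks, disk_iops, write_penalty=4)
raid10 = effective_write_iops(disks, disk_iops, write_penalty=2)
print(f"RAID 5 : ~{raid5} write IOPS")   # ~300
print(f"RAID 10: ~{raid10} write IOPS")  # ~600
```

On those assumptions RAID 10 sustains twice the random write throughput on the same spindles, which is why the change tends to be so visible on a write-heavy database box.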

Also, it might be worth the time to change the uvconfig parameters that
affect SELECTs, starting with the location of UVTEMP as previously
recommended. I always do that for my clients as part of an install, so if
the temp files need to be removed, /tmp doesn't have to be touched.
Another one is the SELBUF setting, which determines how much of a SELECT
is done in memory before going out to disk. Setting it higher can
improve performance, especially where disk I/O is suffering. FSEMNUM,
GSEMNUM, and PSEMNUM are other settings you may want to adjust to see if
disk I/O improves.
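For what it's worth, those parameters all live in the uvconfig file in the UV home directory; something like the fragment below. The values here are placeholders, not recommendations -- check the comments in your own uvconfig for defaults and units, and remember to run uvregen after editing so the changes take effect:

```
# Illustrative uvconfig fragment -- values are examples only
UVTEMP /uvtmp        # keep UV temp/work files off /tmp
SELBUF 16            # buffer for in-memory portion of SELECTs
FSEMNUM 97           # file semaphores
GSEMNUM 97           # group semaphores
PSEMNUM 64           # port semaphores
```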

There is one other issue with AIX you may be experiencing. For some
reason, the client end of some TCP/IP-attached sessions 'goes away' and
leaves the server session open. I have to use the TANDEM function to
check those processes. They are usually sitting at some prompt, cycling
through whatever they were doing as though someone had left a book
leaning on the ENTER key, repeating it over and over. AIX has the 'w'
command, which shows the chunks of CPU time each user has consumed.
There are three time columns, and if the second or third is high while
the first (idle) is near zero, that session is a candidate for having
problems. One major clue that something like this is happening is the
uptime command showing unusually high load averages.
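The eyeball test above can be sketched as a small script. The column layout assumed here ('w' reporting idle, JCPU, and PCPU as its three time columns) varies by platform and flags, and the sample output is invented, so treat this as a starting point rather than a finished tool:

```python
# Hedged sketch: flag possible runaway sessions from 'w' output.
# Assumed columns: user tty login@ idle jcpu pcpu what (varies by OS).
# A session with ~0 idle time but heavy accumulated CPU looks like a
# keyboard stuck on ENTER -- busy-looping instead of waiting for input.

SAMPLE_W_OUTPUT = """\
kvez    pts/0    08:10   0      45:12  44:58  uv
karl    pts/1    09:02   2:13    0:03   0:01  -ksh
"""

def parse_minutes(field):
    """Convert w-style time fields like '45:12' or '0' to minutes."""
    parts = field.split(":")
    if len(parts) == 2:
        return int(parts[0]) + int(parts[1]) / 60
    return float(parts[0])

def suspect_sessions(w_output, cpu_threshold=10.0):
    """Return users with ~zero idle time but heavy JCPU/PCPU totals."""
    suspects = []
    for line in w_output.strip().splitlines():
        user, tty, login, idle, jcpu, pcpu, *what = line.split()
        if parse_minutes(idle) == 0 and (
            parse_minutes(jcpu) > cpu_threshold
            or parse_minutes(pcpu) > cpu_threshold
        ):
            suspects.append(user)
    return suspects

print(suspect_sessions(SAMPLE_W_OUTPUT))  # -> ['kvez']
```

In the sample, 'kvez' has zero idle time but over 45 minutes of accumulated CPU, which is exactly the stuck-ENTER-key pattern; 'karl' has been idle for a couple of minutes and is left alone.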

But all in all, you've gotten some good suggestions already and I may be
up in the night, so to speak.


On Thu, 2004-04-08 at 13:18, Kevin Vezertzis wrote:
> Thanks for all of the input. Here are some of our 'knowns'...
> 1.)  Application files have all been analyzed and sized correctly.
> 2.)  IBM U2 support analyzed Universe files, locking, swap space and all
> have been adjusted accordingly or were 'ok'.
> 3.)  We are running RAID 5, with 8G allocated for UniVerse.
> 4.)  We are already running nmon, which is how we identified the paging
> faults and high disk I/O.
> 5.)  Attached you will find the following:
>               smat -s
>               LIST.READU EVERY
>               PORT.STATUS
>               uvconfig
>               nmon (verbose and disk)
>               vmtune
> I know this is a lot of data, but it is a mix of what each of you have
> suggested.  Thanks again for all of the help.
> Kevin
> -----Original Message-----
> On Behalf Of Kevin Vezertzis
> Sent: Thursday, April 08, 2004 12:08 PM
> Subject: Performance
> We are looking for some insight from anyone who has experienced
> performance degradation in UV as it relates to the OS.  We are running
> UV 10.0.14 on AIX 5.1, and we are having terrible 'latency' within the
> application.  This is a recent conversion from D3 to UV, and our client
> is extremely disappointed with the performance.  We've had IBM hardware
> support and UniVerse support in on the box, but to no avail.  We are
> seeing high paging faults and very high disk utilization.  Any
> thoughts or suggestions?
> Thanks,
> Kevin
> ______________________________________________________________________
> -- 
> u2-users mailing list
Karl L. Pearson
Director of IT,
ATS Industrial Supply
Direct: 801-978-4429
Toll-free: 888-972-3182 x29
Fax: 801-972-3888
