UniObjects

2004-04-19 Thread Kevin Vezertzis
We are having a UniObjects dilemma and wanted to see if anyone has had a
similar problem and/or a resolution.  When making calls into UniVerse
via UniObjects, we are hitting a limit of 10 sessions.  This is
obviously the 10-spawn maximum on enterprise or IP-based UniVerse
licenses.  We were under the impression that once license 1 had spawned
10 sessions, we would roll over to license 2, and so forth, until we
reached the maximum licenses available.  Has anyone come across this
problem?
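To be concrete about what we expected: each license should allow up to 10 spawned sessions, and the 11th connection should roll over to the next license rather than be refused. The Python sketch below only models that expectation (the roll-over behaviour is our assumption, not a statement of how UniObjects actually allocates sessions; everything except the 10-spawn cap is made up):

```python
# A model of the session allocation we *expected* from the licensing.
# SPAWN_CAP is the 10-session spawn max per enterprise/IP-based license.

SPAWN_CAP = 10

def allocate_session(sessions_per_license, spawn_cap=SPAWN_CAP):
    """Give a new connection the first license with a free slot.

    sessions_per_license: list of current session counts, one per license.
    Returns (license_index, session_number), or None if all are saturated.
    """
    for lic, used in enumerate(sessions_per_license):
        if used < spawn_cap:
            sessions_per_license[lic] += 1
            return (lic, sessions_per_license[lic])
    return None

# With 3 licenses, the 11th connection should land on license index 1,
# not be refused at 10:
pool = [0, 0, 0]
slots = [allocate_session(pool) for _ in range(11)]
print(slots[10])   # (1, 1)
```

What we are actually seeing instead is the equivalent of a hard `None` at 10 sessions, as if only one license were ever consulted.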
 
Thanks,
Kevin
 
 
 
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-09 Thread Kevin Vezertzis
Thanks to everyone for the performance suggestions...I will report to
the board as soon as we resolve it.

Kevin



Kevin D. Vezertzis
Project Manager
Cypress Business Solutions, LLC.
678.494.9353  ext. 6576  Fax  678.494.9354
 
[EMAIL PROTECTED]
Visit us at www.cypressesolutions.com
 
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Stevenson, Charles
Sent: Friday, April 09, 2004 4:24 PM
To: U2 Users Discussion List
Subject: RE: Performance

Kevin,
When you finally get this solved,  let us know what the answer was.  I
am sure all responders would be interested.

re. /tmp: I've seen marginal, but not incredible, improvement moving
UVTEMP onto our EMC storage rather than leaving it on the system's local
disk (/tmp).

re. file sizing: since you are porting from D3, I assume you made
everything type 18, the standard Pick hashing algorithm?  That ought to
behave about the same as it did on D3.   How about separation?  Does D3
have that concept?  I don't think Jeff mentioned it.  For most files you
want to set separation such that you get an integer number of groups for
each OS disk read.  If a single disk read grabs 8K, then separation 4
(512 * 4 = 2K/group) means file scans will ask for a group, the OS will
read in 4 groups, and the next 3 times the process asks for the next
group, it's probably still sitting in memory.  So if the OS does read 8K
at a time, separations of 1, 2, 4, 8, or 16 make sense, depending on the
nature of the records.  4 is typical.
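That arithmetic is easy to check mechanically. A small sketch (the 8K OS read size and 512-byte separation unit are the numbers from the paragraph above; treat them as assumptions for your own system):

```python
OS_READ_BYTES = 8 * 1024   # assumed OS read size, per the discussion above
BLOCK_BYTES = 512          # UniVerse separation is counted in 512-byte blocks

def group_bytes(separation):
    """Size of one group in bytes for a given separation."""
    return separation * BLOCK_BYTES

def groups_per_os_read(separation):
    """How many whole groups one OS read pulls in, or None if groups
    don't pack evenly into the read (the case to avoid)."""
    size = group_bytes(separation)
    if OS_READ_BYTES % size:
        return None
    return OS_READ_BYTES // size

print(group_bytes(4))          # 2048 -> the 2K/group worked above
print(groups_per_os_read(4))   # 4    -> one read satisfies 4 group requests
# Separations that divide an 8K read evenly:
print([s for s in range(1, 17) if groups_per_os_read(s)])   # [1, 2, 4, 8, 16]
```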

re. locks:  I notice the lock table is pretty small, and there are a
lot of CLR.OM.LOCK processes.  Is this one of those Pick apps where
people developed their own record-locking scheme because they didn't
trust Pick's record lock handling?  If so, maybe that is a source of
inefficiency.  It's not clear how that would manifest itself as paging,
though.

What about loading programs into shared memory?  Do you have an
absolutely huge program that many users use?  By default they each load
their own copy of the object file into their private memory.  But you
can change that so only one copy is loaded.  The same with small utility
routines that get called by everyone throughout the day.  Load them once
in shared memory,  then all users will run off that copy.   Again, we're
talking incremental, not incredible, performance improvements.
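The shape of that saving is simple arithmetic. The numbers below are entirely made up (a hypothetical 750K program and 200 users), but they show why a single shared copy matters on a box that is already paging:

```python
def private_copies_kb(users, object_kb):
    # Default behaviour: every user loads a private copy of the object code.
    return users * object_kb

def shared_copy_kb(users, object_kb):
    # One shared-memory copy serves everyone, regardless of user count.
    return object_kb

users, object_kb = 200, 750   # hypothetical figures for illustration
saved = private_copies_kb(users, object_kb) - shared_copy_kb(users, object_kb)
print(saved)   # 149250 (K) of real memory no longer competing with the cache
```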

I'm grasping here.  I'm sure IBM's hardware, AIX, and U2 support have
gone through all this already.  You will post the answer once you know
it, won't you?
 
cds
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread Kevin Vezertzis

Thanks for all of the posts...here are some of our 'knowns'...

1.)  Application files have all been analyzed and sized correctly.
2.)  IBM U2 support analyzed UniVerse files, locking, and swap space;
all have been adjusted accordingly or were 'ok'.
3.)  We are running RAID 5, with 8G allocated for UniVerse.
4.)  We are already running nmon, which is how we identified the paging
faults and high disk I/O.
5.)  Attached you will find the following:
smat -s
LIST.READU EVERY
PORT.STATUS
uvconfig
nmon (verbose and disk)
vmtune

I know this is a lot of data, but it is a mix of what each of you have
suggested.  Thanks again for all of the help.

Kevin



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Kevin Vezertzis
Sent: Thursday, April 08, 2004 12:08 PM
To: [EMAIL PROTECTED]
Subject: Performance

We are looking for some insight from anyone that has experienced
performance degradation in UV as it relates to the OS.  We are running
UV 10.0.14 on AIX 5.1, and we are having terrible 'latency' within the
application.  This is a recent conversion from D3 to UV, and our client
is extremely disappointed with the performance.  We've had IBM hardware
support and UniVerse support in on the box, but to no avail... we are
seeing high paging faults and very high disk utilization.  Any
thoughts or suggestions?
 
Thanks,
Kevin
File access       State  Netnode  Owner  Collisions  Retries
Semaphore #   1       0        0      0           0        0
Semaphore #   2       0        0      0           0        0
Semaphore #   3       0        0      0           0        0
Semaphore #   4       0        0      0           0        0
Semaphore #   5       0        0      0           0        0
Semaphore #   6       0        0      0           0        0
Semaphore #   7       0        0      0           0        0
Semaphore #   8       0        0      0           0        0
Semaphore #   9       0        0      0           0        0
Semaphore #  10       0        0      0           0        0
Semaphore #  11       0        0      0           0        0
Semaphore #  12       0        0      0           0        0
Semaphore #  13       0        0      0           0        0
Semaphore #  14       0        0      0           0        0
Semaphore #  15       0        0      0           0        0
Semaphore #  16       0        0      0           0        0
Semaphore #  17       0        0      0           0        0
Semaphore #  18       0        0      0           0        0
Semaphore #  19       0        0      0           0        0
Semaphore #  20       0        0      0           0        0
Semaphore #  21       0        0      0           0        0
Semaphore #  22       0        0      0           0        0
Semaphore #  23       0        0      0           0        0
Group access      State  Netnode  Owner  Collisions  Retries
Semaphore #   1       0        0      0          34       34
Semaphore #   2       0        0      0          13       13
Semaphore #   3       0        0      0           6        6
Semaphore #   4       0        0      0          21       21
Semaphore #   5       0        0      0          10       10
Semaphore #   6       0        0      0          12       12
Semaphore #   7       0        0      0          12       12
Semaphore #   8       0        0      0          43       43
Semaphore #   9       0        0      0           7        7
Semaphore #  10       0        0      0           9        9
Semaphore #  11       0        0      0          11       11
Semaphore #  12       0        0      0          10       10
Semaphore #  13       0        0      0          11       11
Semaphore #  14       0        0      0          16       16
Semaphore #  15       0        0      0          10       10
Semaphore #  16       0        0      0          11       11
Semaphore #  17       0        0      0          17       17
Semaphore #  18       0        0      0          12       12
Semaphore #  19       0        0      0          19       19
Semaphore #  20       0        0      0           5        5
Semaphore #  21       0        0      0          22       22
Semaphore #  22       0        0      0           8        8
Semaphore #  23       0        0      0          34       34
Semaphore #  24       0        0      0           5        5
Semaphore #  25       0        0      0          10       10
Semaphore #  26       0        0      0          11       11
Semaphore #  27       0        0      0          15       15
Semaphore #  28       0        0      0          21       21
Semaphore #  29       0        0      0          12       12
Semaphore #  30       0        0      0          41       41
Semaphore #  31       0        0      0           7        7
Semaphore #  32       0        0      0          49       49
Semaphore #  33       0        0      0           9        9
Semaphore #  34       0        0      0          25       25
Semaphore #  35       0        0      0          13       13
Semaphore #  36       0        0      0          10       10
Semaphore #  37       0        0      0           6        6
Semaphore #  38       0        0      0          11       11
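A quick way to reduce a group-access dump like the one above to something comparable between runs is to total the Collisions column and pick out the hottest semaphore. This little parser assumes the whitespace-separated layout shown (only a few of the hotter rows are copied in as sample data):

```python
# Sample rows excerpted from the "smat -s" group-access table above:
# columns are Semaphore #, State, Netnode, Owner, Collisions, Retries.
SMAT_GROUP = """\
Semaphore #   1   0   0   0   34   34
Semaphore #   8   0   0   0   43   43
Semaphore #  30   0   0   0   41   41
Semaphore #  32   0   0   0   49   49
"""

def collision_stats(text):
    """Return (total_collisions, (semaphore, worst_count)) for smat rows."""
    total, worst = 0, (None, -1)
    for line in text.splitlines():
        fields = line.split()
        sem, collisions = int(fields[2]), int(fields[6])
        total += collisions
        if collisions > worst[1]:
            worst = (sem, collisions)
    return total, worst

print(collision_stats(SMAT_GROUP))   # (167, (32, 49)) for the excerpt
```

Over the full 38-row table the absolute numbers aren't alarming on their own, but tracking the total and the worst offender across busy periods would show whether group-lock contention is growing with load.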