Andre,

This seemed very strange, since normally 250 record keys would never hash into the same group of a dynamic file.

The exception might be if you created a new part file and then sought to lock and add a large group of records at one time. A "new" dynamic part file has a modulus of 1, so it contains only a single group and every key hashes into it; all of the records added to that file would stay locked in the same group until some of them were written and the file split to a larger modulus.

If your algorithm for the Distributed file could cause this situation, then the solution may be to create the new parts with a MINIMUM.MODULUS value large enough to split out the record keys into separate groups (~23? Bigger is better).

CONFIGURE.FILE MINIMUM.MODULUS ... (in the Prime/PI Open version) would accept a keyword of IMMEDIATE to force the splitting of groups. UniVerse lacks this option, so you should specify MINIMUM.MODULUS at the time that you create the part file.
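
For example, a part file created with a larger starting modulus never goes through the modulus-1 stage at all. A minimal sketch (ORDERS.P2007 is a hypothetical part-file name, and 101 an arbitrary value comfortably above the batch size):

CREATE.FILE ORDERS.P2007 DYNAMIC MINIMUM.MODULUS 101

With a minimum modulus of 101, a batch of 250 sequential keys spreads across many groups instead of piling into the single group of a modulus-1 file.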

-Rick Nuckolls
Lynden Inc.

On Mar 9, 2007, at 10:39 AM, Andre Meij wrote:

Hi,

We have a highly technical problem with UniVerse related to the locking
tables and their configuration:

We have a big application running on UniVerse 10.1 (Solaris). This
application is built on Distributed Dynamic files. Some of the keys are
auto-numbers; others are supplied by external systems.

With our current settings we can have a maximum of 250 locks in one
group. For auto-numbered keys this means we can lock, say, records 1000
to 1249, but the next lock attempt triggers an abort and a rollback,
because no further lock can be acquired: all of these locks fall within
the same lock group, and a group is limited to 250 lock entries.
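
To illustrate, a minimal UniVerse BASIC sketch of the pattern that hits the limit (the file name, variable names, and key range are hypothetical):

* Sketch: take a lock on each key in a batch of sequential
* auto-numbers.  While the part file holding these keys has a small
* modulus, every lock lands in the same group lock row; once that
* row's 250 entries are exhausted, the next READU cannot get its
* lock and, inside a transaction, we see the abort and rollback.
OPEN 'ORDERS' TO F.ORDERS ELSE STOP 'cannot open ORDERS'
FOR ID = 1000 TO 1300
   READU REC FROM F.ORDERS, ID ELSE REC = ''
NEXT ID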

I know of two uvconfig parameters that define this locking behavior;
however, their maximum values are limited by the maximum size of the
shared memory segment.

# GLTABSZ - sets the number of group lock entries
GLTABSZ 250

# RLTABSZ - sets the number of read lock entries
RLTABSZ 250

Our testing indicated that these numbers cannot be raised any higher (due to
the shared memory limit).
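
For anyone reproducing this, a sketch of how we apply such a change (assuming a default install directory, which may differ on your system; uvregen is the utility that regenerates the configuration after uvconfig is edited):

cd /usr/ibm/uv    # UniVerse home directory ($UVHOME)
vi uvconfig       # raise GLTABSZ and RLTABSZ
bin/uvregen       # regenerate the configuration from uvconfig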

This all means that I cannot lock more than 250 records in one
transaction. That is unfortunately not always enough, so we
occasionally have to implement extensive tricks to work around it.
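
For example, one such trick (a sketch, not our actual code) is to split a large batch into sub-transactions that each take well under 250 locks, at the cost of losing atomicity across the whole batch:

* Hypothetical workaround: process 1000 sequential keys in
* sub-batches of 200 so that no single transaction ever holds
* more locks than one group lock row can hold.
OPEN 'ORDERS' TO F.ORDERS ELSE STOP 'cannot open ORDERS'
FOR BATCH = 1000 TO 1999 STEP 200
   BEGIN TRANSACTION
      FOR ID = BATCH TO BATCH + 199
         READU REC FROM F.ORDERS, ID ELSE REC = ''
         WRITE REC ON F.ORDERS, ID
      NEXT ID
   COMMIT
   END TRANSACTION
NEXT BATCH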

I would very much like to see this resolved at the UniVerse level so
that the programmers can stop worrying about it.

Anyone who has experience with this, or knows of someone who does,
please help :-). Maybe you know this problem yourself or know of
somebody within IBM who could help us resolve it.

Regards,

Andre Meij

Innovate-IT
-------
u2-users mailing list
[email protected]
To unsubscribe please visit http://listserver.u2ug.org/