I've seen records at least an order of magnitude larger than that, on both Unix & Windows. Megabytes. I want to say much larger still, but I can't verify it, and when numbers get that big my brain can't get around them. (e.g., $1,000,000,000,000: www.pagetutor.com/trillion/index.html)

I don't know of absolute limits, but performance takes a dive, of course: lock contention, frequent updates, inappropriate indexes, selections w/o indexing, and poorly sized files (if files like these must exist, they should be type 30, so the large records get isolated in their own overflow). In general, U2 responds well to size abuse. Performance degradation is fairly linear & responds well to throwing more hardware at it, but at some point the degradation curve becomes geometric.
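Not advice so much as a starting point: if I suspected a file like that, here's roughly what I'd run at TCL. BIGFILE is a made-up name, and you'd want to read the ANALYZE.FILE output (and check your release's RESIZE options) before touching anything:

   ANALYZE.FILE BIGFILE STATISTICS
        (see the record size spread and how much is sitting in overflow)
   RESIZE BIGFILE 30
        (convert it to a dynamic, type 30, file)
   ANALYZE.FILE BIGFILE STATISTICS
        (re-check; the oversized records should now be isolated in their own overflow buffers)

On a big file the resize can take a while and the file shouldn't be in use, so it's probably an off-hours job.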

I'm curious whether your scenario matches the usual one: i.e., an application that has outgrown its original design, with a multi-valued association in it that gets updated frequently through the life of the record. The original design intended a few mv entries, accumulating only until the business operation the record represents gets closed. At some point someone decided to use the association for some additional purpose. Some records remain small, some grow, and the result is a very lumpy file. Many times it's the users trying to respond quickly to a changing business need, w/o waiting for IT to do the enhancement (they invent new status codes, etc.). Sometimes it's programmers enamored by the siren call of mv nesting.
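To put a picture to it, a completely made-up record (field numbers and names are invented; ^ and ] are the usual shorthand for attribute and value marks):

   <1>  ORDER.NO       single-valued header fields, as designed
   <7>  STATUS.HIST    OPEN]ALLOCATED]PICKED]SHIPPED]RETURNED]REOPENED]...
   <8>  STATUS.DATE    one value per entry in <7>, growing in lockstep

A few orders close after half a dozen values; the ones whose association gets reused for something else never stop growing, and that's where the lumpiness comes from.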

Adam Taylor wrote:
Hey all,

Can anyone tell me what (if any) the current record size limit is in UV 10.2?
We've currently found ourselves in a situation where certain fields in a data
file will contain 100K+ characters and we would like to be proactive about any
potential problems if they exist.
