From: "Patrick Roehl" <[email protected]>
Sent: Wednesday, 25 August 2010 3:40 AM

The application is an exit to CA IDMS.  An IDMS exit should not issue
operating system calls (e.g. I/O or SVCs), because anything that could
cause a wait would cause the whole IDMS system to wait.

The data coming in are database keys and the application needs to know, in
a particular run-unit, which keys are being seen for the first time and
which are new.

> What's the difference?  A key being seen for the first time surely is new?

New keys will get further processing; keys that have already been seen
will not.  Because this is a database exit, efficiency is very important.

An IDMS database is divided into areas, and database keys within a
particular area fall within a defined numeric range.  By adding up the
ranges for all included areas (plus some other factors), the maximum
amount of space needed for the bit string can be determined.
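
To make the sizing arithmetic concrete, here is a minimal sketch in C.
The area_range structure and its fields are hypothetical stand-ins, not
actual IDMS control-block layouts; a real exit would presumably pull the
low/high db-keys for each area from the DMCL or a configuration table.

#include <stddef.h>

struct area_range {
    unsigned long low_key;   /* lowest db-key in the area  */
    unsigned long high_key;  /* highest db-key in the area */
};

/* One bit per possible db-key, summed over all included areas,
   rounded up to whole bytes. */
size_t bitmap_bytes(const struct area_range *areas, size_t n_areas)
{
    size_t total_bits = 0;
    for (size_t i = 0; i < n_areas; i++)
        total_bits += areas[i].high_key - areas[i].low_key + 1;
    return (total_bits + 7) / 8;
}

Because the ranges are known up front, the whole bit string can be
acquired once when the run-unit starts, so no storage requests are
needed on the per-key path.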

Because of these factors, and the ideas brought out in this discussion,
the bit string idea seems like an excellent fit.
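
For what it's worth, a minimal sketch of the per-key test the bit-string
approach implies.  The mapping from db-key to bit index is an assumption
here: each area's keys would be offset by the area's base key plus a
running per-area bit offset into the shared map.

#include <stdbool.h>

/* Returns true if the key's bit was already set (seen before in this
   run-unit); otherwise sets it and returns false (the key is new). */
bool seen_before(unsigned char *bitmap, unsigned long bit_index)
{
    unsigned char *byte = &bitmap[bit_index / 8];
    unsigned char mask  = (unsigned char)(1u << (bit_index % 8));
    bool seen = (*byte & mask) != 0;
    *byte |= mask;    /* mark the key as seen */
    return seen;
}

Per key that is an index calculation, one load, one test, and one store,
with no operating system calls and nothing that could wait.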

> How is that going to be efficient in terms of time?
