We are currently trying to record CDC information from within the data capture 
exit (DFSFLGX0) in
our IMS database, by (synchronously) writing the data capture log records to a 
logstream.  This is
still in a development phase, but it is already obvious to us that we have a 
potential performance
bottleneck on our hands.  One possibility would be to capture changed data at 
log-archival time in
a separate batch process (not via the exit), but our preference is to be able 
to "stream" the CDC on
a near-real-time basis if we can swing it.

We are trying to decide if we want to attach a sub-task within the IMS address 
space to periodically
write those log records asynchronously to the logstream (after they have been 
saved in a memory
buffer by the exit), or if we should set up a service provider in a separate 
address space that
would write out the records passed to it by the exit via cross-memory services 
(a space-switch PC routine).  And
if we are already going to the trouble of setting up our own service provider, 
would the logstream
be superfluous or unnecessarily expensive in resources, and writing to our own 
dataset (sequential
or maybe an LDS?) be preferable?

Anybody want to venture any thoughts on which method would be the way to go?  
Or maybe suggest a
third alternative?

John Krew

(P.S. My apologies to language purists who claim that, etymologically speaking, 
there cannot be more
than one alternative.
I have noticed that you have to be careful how you write on this list. <g> )

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
