On Thu, 23 Mar 2006 15:13:57 -0500, Gil, Victor x28091 wrote:

>Good afternoon, IBM-MAIN
>
>We'd like to be able to prevent certain "confidential" fields in production
>files from being revealed to "unauthorized" users while still allowing
>access to the rest of the record. From the users' perspective these files
>are read-only and are accessed through TSO, batch or CICS for testing or
>comparison purposes.
>
>The total volume of such files is huge and changes daily, so cloning them
>and altering the sensitive fields is not an option. The only other option
>we can think of is to develop an in-house method of intercepting and
>altering records while they are being read, transparently to the
>application.
>
>Here's what we've researched so far:
>
>- In CICS this should be easily achievable through the file control exit.
>The exit would look up the dataset in a table and if found, apply a
>corresponding "rule".
>- In batch we would implement a subsystem that would intercept each
>[sequential] I/O and alter the record using the very same rules.
>
>What do we do in TSO? Generally, how do we intercept records of a
>dynamically allocated file?
>
>There is a system-wide dynalloc input validation exit, IEFDB401, and it
>might be able to add "SUBSYS=..." to the DYNALLOC requests, but this would
>severely overtax all other dynamic allocations in the shop.
>
>I'd appreciate any and all ideas, as crazy as they might sound.
>-Victor-


Victor,

I wonder just how "huge" the total volume of files really is.  (Are we
talking trillions of bytes?  Tens-of-thousands of files accessed daily?)

Why not consider splitting these files into a confidential and non-
confidential file pair?  The advantage of splitting the files into a
confidential field file and a non-confidential field file is obvious:
security should be straightforward.
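To make the split concrete, here is a sketch in Python purely for readability (the offsets and the single confidential field are invented; on z/OS this logic would live in whatever common I/O routine reassembles the pair):

```python
# Hypothetical fixed-layout record: bytes 0-9 account id (public),
# bytes 10-20 SSN (confidential), bytes 21+ name (public).
CONFIDENTIAL_SLICES = [(10, 21)]  # invented offsets, for illustration only

def split_record(rec):
    """Split one record into (public_part, confidential_part)."""
    public, secret = [], []
    prev = 0
    for start, end in CONFIDENTIAL_SLICES:
        public.append(rec[prev:start])
        secret.append(rec[start:end])
        prev = end
    public.append(rec[prev:])
    return "".join(public), "".join(secret)

def join_record(public, secret):
    """Rebuild the full record; the exact inverse of split_record."""
    rec, p, s, prev = "", 0, 0, 0
    for start, end in CONFIDENTIAL_SLICES:
        rec += public[p:p + (start - prev)]
        p += start - prev
        rec += secret[s:s + (end - start)]
        s += end - start
        prev = end
    rec += public[p:]
    return rec
```

The point of the sketch is only that split and join are cheap, mechanical, and exact inverses, so the same table of offsets drives both directions.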

You could stitch the common I/O routine into the GET routine address via a
subsystem (as you've suggested) or via an OPEN front-end (as, I believe,
John suggested), or via a BatchPipes stage (e.g., JOIN).

The BP subsystem approach could be used to stitch the files together on
reads and split them apart on writes.  BP with appropriate fittings should
be able to accomplish both of those things with a comparatively minimal
amount of programming on your part.  The BP pipeline stages would not
necessarily require a pipe writer + reader pair of jobs; you would use the
pipe fitting SUBSYS JCL to insert the stages onto the DD statement(s) and
avoid writing nearly all of that nasty authorized code to filter your
confidential (and no-doubt customer critical) files.

The issue I'm largely ignoring is getting the JCL inserted... but that
might be fairly simple with some unique pipe stages to perform dynamic
allocation (although you might have problems doing DYNALLOC at some points
in the processes).  (I don't recall ever trying to do that w/ BP.)  It
might require JCL changes but mass JCL changes aren't much of a challenge.
(Besides, if your users need access to the secret stuff they'll go along
with the change.)

But these "confidential fields" -- won't the programs that don't get to
see them notice that something is missing from the records?  You have
procedural languages processing those files, right?  Don't you think the
COBOL (et al.) routines will be at least a little upset when they don't
see what they expect where they expect it?
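One answer to that objection is to mask in place rather than remove: overwrite the confidential byte ranges with filler so the record length and field positions are unchanged, and the existing COBOL layouts still map.  A sketch, in Python for readability only (the dataset name, offsets, and fill characters are all invented; the real thing would be authorized code in the exit, subsystem, or pipe stage):

```python
# Invented rules table: dataset name -> list of (offset, length, fill)
# byte ranges to overwrite.
RULES = {
    "PROD.CUSTOMER.MASTER": [(10, 11, "*"), (45, 9, "0")],
}

def mask_record(dsn, rec):
    """Overwrite confidential byte ranges in place.  Record length and
    field positions are preserved, so downstream layouts still fit."""
    rules = RULES.get(dsn)
    if not rules:
        return rec  # not a protected dataset: pass through untouched
    buf = list(rec)
    for offset, length, fill in rules:
        buf[offset:offset + length] = fill * length
    return "".join(buf)
```

Because nothing moves, an unauthorized reader's program runs unchanged; it just sees asterisks or zeros where the secrets used to be.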

--
Tom Schmidt
Madison, WI

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html