McKown, John wrote:
-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Dennis Trojak
Sent: Friday, March 03, 2006 12:53 PM
To: [email protected]
Subject: Re: Unusual FTP request.


How about a conditional step after the FTP that checks return code GT
zero. We do that and send an e-mail via SMTP to the Production team
along with any "special" instructions.
Dennis..

Possible, but unlikely. The programmers are really wanting something so
that they are not "in the loop" at all about ftp. That is, they don't
want the responsibility to put the FTP step in the job, to check the RC
and send mail, or anything else. They want the "ftp team" to set up all
of that, including any userid/password requirements, maintaining the IP
address of the server, maintaining the ftp statements, etc. From what I
get, they want to say something like: "I'm going to create dataset XYZ.
It needs to be ftp'ed to server ABC, into subdirectory DEF, and given
the name GHI. You figure out what needs to be done to get the ftp to
work and set it up independently of my job." Then, when XYZ is created,
the ftp automagically occurs without anything in their JCL. Similar to a
dataset trigger in CA-7. I.e. they really want out of the business of
data transfer beyond the initial "put this dataset on that server and
give it this name in this subdirectory". Should anything change after
that (e.g. the dataset should go to another server, the server file name
or subdirectory should change), they don't even want to know about it.
That would be the responsibility of the "ftp team" to update the ftp
process (whatever it turns out to be).
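The "automagic" behavior described above amounts to a dataset trigger plus a lookup table owned by the ftp team. Purely as an illustrative sketch (not any actual CA-7 or scheduler API; the table contents and function names here are invented), the logic would look something like:

```python
# Illustrative sketch only: a dataset-trigger style lookup that lives
# outside the application's JCL. All names (TRANSFER_TABLE, the dataset
# and server values) are hypothetical; in a real shop this would be a
# scheduler dataset trigger, with the table maintained by the ftp team.

# Table maintained by the "ftp team": dataset name -> transfer parameters.
TRANSFER_TABLE = {
    "PROD.APPL.XYZ": {
        "server": "abc.example.com",   # target server (ABC)
        "directory": "/inbound/def",   # target subdirectory (DEF)
        "remote_name": "ghi.dat",      # remote file name (GHI)
    },
}

def on_dataset_created(dsname):
    """Fired when the scheduler sees a dataset creation/close event."""
    params = TRANSFER_TABLE.get(dsname)
    if params is None:
        return None  # not a dataset the ftp team manages; do nothing
    # A real implementation would submit the ftp team's companion job here.
    return ("SUBMIT FTPJOB", dsname, params["server"],
            params["directory"], params["remote_name"])
```

The point of the shape is that the application job never references the transfer: any change to server, directory, or name is an edit to the ftp team's table, not to application JCL.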

NFS/SMB has been mentioned in another post. I have done an NFS import of
a UNIX subdirectory onto the z/OS system. It works quite well. However,
the same problem occurs. If the job terminates trying to copy to the
NFS/SMB share, the programmer would get called and they don't want to
be. They would still want "someone else" to do the NFS/SMB copy function
and be responsible for any problems with it. So, whether it's ftp or
NFS/SMB, it is all the same to them. They don't want anything related to the
copying in any process for which they are responsible. And they really
don't want to set up a second job to do the ftp/NFS work either.

I know that sounds like they are being lazy, but they have had such
problems with this - again due mainly to server problems - that they are
frustrated and just want OUT! It is one thing to get called about a
problem you can fix. It is another thing to get calls for a problem that
is outside your ability to fix or even diagnose properly.

Well - off to the annual company meeting. Such fun.

--
John McKown
Senior Systems Programmer
UICI Insurance Center
Information Technology
...

Some thoughts:
Application folks are already executing a step in their jobs to do the transfer and are somehow supplying the required data (XYZ, ABC, DEF, GHI) to accomplish it. Why is it not possible to change that step into one that saves the transfer parameters in some form (database tables, a GDG GDS, etc.) and then does an in-stream demand to your job scheduler for a separate companion production job, one named to make it clear it is the responsibility of the FTP group and which runs independently of the application job stream? That companion job would extract the parameters and initiate the transfer. The process could even save a copy of the dataset under another name "owned" by the companion FTP job, so retention time of the original dataset wouldn't be an issue if transmission were delayed for some reason.
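The replacement step is just "record the parameters, then demand the companion job." A minimal sketch of that handoff, with the staging store and demand queue as stand-ins (the function and job names are invented; the real step would write a (+1) GDS or table row and issue the demand through the shop's scheduler):

```python
# Hedged sketch: the application step reduced to "stage parameters and
# demand the ftp team's companion job." The list arguments stand in for
# a staging GDG/table and a scheduler demand queue; "FTPXFER1" is a
# hypothetical companion job name.

import json

def stage_and_demand(dataset, server, directory, remote_name,
                     staging, demands):
    """Save transfer parameters, then queue a demand for the FTP job."""
    record = {"dataset": dataset, "server": server,
              "directory": directory, "remote_name": remote_name}
    staging.append(json.dumps(record))   # stand-in for writing a (+1) GDS
    demands.append("FTPXFER1")           # stand-in for a scheduler demand
```

Once this step runs, everything downstream (the demanded job, its failures, its reruns) belongs to the FTP group's jobstream, not the application's.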

You might be able to adapt the approach we use for handling asynchronous batch job processing requests initiated by some of our CICS applications: CICS generates a trivial batch job which stages the job run parameters into a jobstream-related GDS and demands execution (via ZEKE in our case) of a production job stream. The demanded production job stream begins with a batch REXX step that determines the oldest generation of its "staging" GDG, copies it to a working dataset, deletes that oldest generation, and then uses the parameters in the working dataset to drive the batch process. ZEKE prevents multiple instances of the demanded job from running at the same time, but keeps track of how many instances are pending. The GDG protocol ensures that each instance of the jobstream runs with its own appropriate set of parameters in FIFO order, as long as you can ensure that the total number of pending jobs never exceeds the GDG limit. If an instance of the jobstream fails, the job scheduler keeps pending instances delayed until the failing jobstream is resolved.
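The FIFO protocol in that REXX step is easy to model. This sketch uses a dict of generation-number -> parameters as a stand-in for the staging GDG (the real step works against catalog entries from batch REXX, not a dict):

```python
# Illustrative model of the GDG staging protocol described above: each
# run of the demanded job consumes exactly one generation, oldest first.
# The dict is a hypothetical stand-in for the staging GDG.

def take_oldest(staging_gdg):
    """Copy the oldest generation to a working area and delete it (FIFO)."""
    if not staging_gdg:
        return None                      # nothing staged; nothing to do
    oldest = min(staging_gdg)            # lowest generation number = oldest
    working = staging_gdg.pop(oldest)    # "copy" to working, then delete
    return working

gdg = {101: "parms-A", 102: "parms-B", 103: "parms-C"}
# successive runs consume parms-A, then parms-B, then parms-C
```

As the paragraph notes, this stays correct only while pending demands never exceed the GDG limit, since a wrapped generation would be consumed out of order or lost.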

An approach we have taken for batch-initiated FTP going outside the company (where both failure and security are bigger concerns) is to offload the actual FTP transfer to a local server (with encryption support) but have a CICS-based monitor that holds the data (in HFS files) and controls the process, including retransmission and failure reporting. There is a batch job interface that stages the data into an HFS directory owned by the CICS transmission application and initiates the request by posting parameters into a DB2 table. At that point the batch application relinquishes responsibility for the actual transmission to CICS. This seems to work fairly well, but it took time and resources to implement.
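The batch side of that handoff is just "stage the file, insert a request row." A rough sketch under invented names (the directory, table, and columns are hypothetical, and sqlite3 stands in for DB2; the real interface writes HFS files and inserts into a DB2 table that the CICS monitor polls):

```python
# Hedged sketch of the batch-side handoff described above. Everything
# here is illustrative: the staging directory, table name, and columns
# are made up, and sqlite3 is only a stand-in for DB2.

import os
import shutil
import sqlite3

def submit_transfer(src_path, dest_name, server, db,
                    stage_dir="/u/cicsxmit/staging"):
    """Stage the payload and post a PENDING request for the CICS monitor."""
    os.makedirs(stage_dir, exist_ok=True)
    staged = os.path.join(stage_dir, dest_name)
    shutil.copyfile(src_path, staged)          # stage into CICS-owned dir
    db.execute(
        "INSERT INTO xfer_request (path, server, status) VALUES (?, ?, ?)",
        (staged, server, "PENDING"),           # monitor picks this up
    )
    db.commit()
    return staged                              # batch's involvement ends here
```

After the commit, retries, encryption, and failure reporting are the monitor's problem, which is exactly the separation of responsibility the programmers were asking for.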

--
Joel C. Ewing, Fort Smith, AR        [EMAIL PROTECTED]

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
