On Fri, 3 Mar 2006 08:51:43 -0600, McKown, John
<[EMAIL PROTECTED]> wrote:

>One of our applications people came up with a, uh, unusual request
>today. We use ftp to transfer data from the z/OS system to various,
>internal, ftp servers. Currently, this is done by adding an ftp step to
>the end of the job which creates the data set to transfer. This usually
>works. However, there have been cases where the ftp step ends with a bad
>return code due to various problems on the remote (server) side.

<snip>
>
>What the programmer would like would be for the production job to simply
>create the dataset which is to be ftp'ed.


>Whenever a dataset with this high level qualifier is created,
>"something" triggers a process (job, started task, other) which is
>passed the name of the dataset just created. This process then does some
>sort of "look up" on the name of the dataset just created and generates
>the appropriate ftp commands, which are "somehow" passed to an "ftp
>processor". If the "ftp processor" has a problem, then the "ftp team"
>would be alerted that an ftp failed. The "ftp team" would be able to
>look at the ftp output and hopefully determine what failed, why, and
>then fix it. This would relieve the normal programmers from being
>called.

<snip>

>Has anybody heard of any "process" which could do such a thing? There
>are two restrictions: (1) No money is budgeted for this; and (2) Tech
>Services doesn't want to be responsible for writing any code because we
>just don't have the time to support yet another "application".

<snip>

I once did something similar at a shop, but the process was NFS rather
than FTP.  An OS/2 box on the LAN had the mainframe HLQ NFS-mounted.  I
wrote REXX code on the OS/2 box that was started at boot time.  The
driver code included a SLEEP function that woke up every 30 minutes to
do the transfers, which were nothing more than COPY commands since the
MVS files were NFS-mounted.
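The original driver was REXX on OS/2; a minimal Unix shell sketch of
the same idea (all paths and names below are illustrative, not from the
original setup) might look like:

```shell
#!/bin/sh
# Poll an NFS-mounted mainframe HLQ and copy any new files to a local
# landing area, then sleep 30 minutes and repeat -- the same logic the
# OS/2 REXX driver used.  SRC and DEST are made-up example paths.
SRC=${SRC:-/mnt/mvs/prod.xfer}   # NFS mount of the mainframe HLQ
DEST=${DEST:-/var/xfer/inbound}  # local landing area

transfer_new() {
    for f in "$SRC"/*; do
        [ -f "$f" ] || continue
        base=$(basename "$f")
        # copy only files we have not already transferred
        [ -e "$DEST/$base" ] || cp "$f" "$DEST/$base"
    done
}

# Driver loop: wake every 30 minutes, as the REXX SLEEP did.  Guarded
# behind an argument so the file can be sourced without looping forever.
if [ "${1:-}" = "run" ]; then
    while :; do
        transfer_new
        sleep 1800
    done
fi
```

As in the original setup, this sketch has no failure handling; anything
beyond "copy what is new" would have to be bolted on.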

There was no failure processing, however; I guess space would have
been the only issue.

You could do what you describe with most scheduling packages.  Let
the application create the data set and have the scheduling package
fire off a job based on the data set creation.  If the job (FTP)
fails, the schedulers / operators could contact the appropriate
people based on documentation in the job or whatever.

But...

In your case, since people want to be notified anyway, why not
tack on an SMTP step that sends an email when you get a bad
return code?  Of course, this requires pagers that work via email
addresses.  Some automation packages can dial pagers as well, so
another option is to add a step that triggers automation via a
WTO etc.  Then you don't have to wait for an operator or
scheduler to notice the job has a problem and contact someone.
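One way to do the SMTP step is an IEBGENER step conditioned on the FTP
step's return code that writes the message to the SMTP external
writer's SYSOUT class.  A sketch, assuming an earlier step named FTP
and SMTP running as an external writer on SYSOUT class B (step names,
addresses, and the class/writer are illustrative -- check your site's
setup):

```jcl
//* Run only when step FTP ends with a nonzero return code:
//* COND=(0,EQ,FTP) bypasses this step when 0 EQ the RC of step FTP.
//MAIL    EXEC PGM=IEBGENER,COND=(0,EQ,FTP)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD SYSOUT=(B,SMTP)
//SYSUT1   DD *
HELO MVSA
MAIL FROM:<prodjob@example.com>
RCPT TO:<ftp.team@example.com>
DATA
Subject: FTP step failed in production job

The FTP step ended with a bad return code; see the job output.
.
QUIT
/*
```

The RCPT TO: address could be a pager gateway, a team mailbox, or a
distribution list, so the "ftp team" gets the alert directly.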

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group
mailto: [EMAIL PROTECTED]
Systems Programming expert at http://expertanswercenter.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html