One additional "against" (until the problem is solved):

The knowledge of how to structure recovery from abends in one (or more) pipe 
stages is not well known in application circles.  Most batch recovery today 
involves stopping any successor jobs (usually via scheduler dependency on 
successful completion of the abending job(s)), then resolving the program 
abend(s) and rerunning the failing job(s).

How does one structure a multi-hundred stage pipeline to enable easy and rapid 
recovery from an abend at any stage?  I've been in the mainframe programming 
business since 1972 and I would not know how to structure such a beast.  
Restarting from the beginning is not an option with the increasingly gargantuan 
volumes of data we must process and the tighter and tighter SLA's our customers 
demand.

Peter

-----Original Message-----
From: IBM Mainframe Discussion List <[email protected]> On Behalf Of 
Hobart Spitz
Sent: Friday, September 17, 2021 3:56 PM
To: [email protected]
Subject: The Business Case for Pipes in the z/OS Base (was: Re: REXX - 
Interpret or Value - Which is better?)

IMHO, the Business Cases on Pipes in the z/OS Base are as follows.  (Pipes is 
already available as part of BatchPipes.)

The case *for* Pipes in the z/OS base:

   1. Development costs would drop for customers, vendors, and IBM, since
   everyone could use Pipes in their software.
   2. Hardware usage would drop for customers.  In addition to avoiding
   I/O, Pipes uses a record address-and-length descriptor.  A record can
   flow from stage to stage with the only data movement being for changed
   records.  Data needed by a stage may already be in the working set and/or
   cache, loaded there by the previous stage.  (A methodology for
   identifying the cost/benefits by JOB and application would allow the best
   to be reworked first.  Thus Pipes would pay for itself in the shortest
   amount of time.)
   3. Product efficiency for vendors (IBM and others) would improve.
   (Arguably it's the other side of the coin in #2.)
   4. Tight integration with REXX, CLIST and JCL.
   5. Portability to and from z/VM.  This breaks down differently for
   different groups:
      - Customers: Cheaper porting to/from z/OS.  (Porting to other IBM
      Series is expensive and time-consuming, AFAIK.)
      - Vendors:  Write once for both platforms.
      - IBM:  Rather than customers moving to non-IBM platforms, when z/OS
      or z/VM don't meet their needs, those customers would have another option
      to stay with IBM.
   6. You can process both RecFM F records and RecFM V records with the
   same stages.
   7. Pipes can be used on both EXEC and DD JCL statements.  This is
   primarily for REXX-a-phobes.  Pipe commands in REXX are amazing; I've used
   the combination on both z/OS and z/VM (and predecessors).  Pipes with
   CLISTs is almost as good, AFAIK.
   8. Increased competitiveness for IBM hardware and software.  This would
   especially apply to UNIX customers who have exceeded the capabilities of
   their platforms.
   9. CMS/TSO Pipes is better than UNIX piping, and REXX is better than C.
   With today's processors, C's performance advantage over REXX is not
   significant, and dwarfed by low developer productivity (your bullet, your
   gun, your foot) of C.  Strategically using Pipes with REXX can give better
   performance than UNIX piping and C.
   10. Since both C++ and Pipes are internally pointer based, you could get
   similar benefits by using OO techniques exclusively.  How are you going to
   convert a COBOL and JCL application to C++?  Not as easily as going to REXX
   and Pipes.
   11. z/OS is a file-structure-aware operating system.  It does not
   use embedded control characters in text files or impose record boundary
   knowledge in binary files.  Pipes reads and writes byte stream data by
   converting to and from record address-and-length descriptor format.  This
   means that, in most cases, any sequence of stages perform equally well on
   RecFM F, RecFM V, byte stream text data and deblocked UNIX binary files.
   12. Addresses staff shortages due to baby-boomer retirements and CoViD
   impacts.
   13. Reduces batch window requirements.  With less I/O, JOBs finish
   faster.
   14. It's my understanding that there are people inside IBM who are
   behind Pipes going into the z/OS base.  Pipes is part of the z/VM base.
   Can z/OS be that far behind?
   15. Is consistent with the policies of customers, vendors, and the public
   for combating global warming.  Fewer CPU cycles wasted means less heat
   to be dissipated and less electricity to be produced.  The UNIX stream and
   C string models are obsolete in this light; every character must go through
   the CPU to get to the end of a record or string.   Not so in Pipes or SAM
   access methods.  Rarely do you see a UNIX command with more than a dozen
   filters; Pipes commands go into the 100s or 1000s of stages.  In general,
   the more stages the better the efficiency, since you are doing more work
   with the same amount of I/O.  If one reads the tea leaves, it seems
   inevitable that there will be some kind of restriction on activities that
   cause global warming; whether it's a tax, regulation, cap-and-trade, etc.,
   I can't guess.  The point is that it is likely that many heavy resource
   using applications and processes will have to be converted to Pipes-like
   mechanisms, on all platforms to fight global warming.  The question is, do
   you want to start now when it can be done as part of a well thought-out
   cost-effective plan or do it in a rush later, when the costs and risks have
   ballooned?
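To make points 6, 7, and 11 above concrete, here is a minimal sketch of a Pipes specification issued from REXX.  The file names and stage choices are invented for the example; the syntax is shown in z/VM (CMS) form, and the device drivers differ somewhat on z/OS.  The point is that the same stage sequence handles RecFM F and RecFM V input alike, because each record flows as an address-and-length descriptor:

```rexx
/* REXX -- hypothetical sketch; file names are invented.            */
/* The same stages work on RecFM F or RecFM V input, since every    */
/* record travels as an address-and-length descriptor.              */
'PIPE (NAME DEMO)',
   '| < INPUT DATA A',           /* read the input file             */
   '| locate /ERROR/',           /* select records containing ERROR */
   '| change /ERROR/WARNING/',   /* only changed records are copied */
   '| > OUTPUT DATA A'           /* write the surviving records     */
say 'Pipe return code:' rc
```

Only the records altered by the change stage involve any data movement; the rest flow by descriptor from stage to stage.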

The case *against* Pipes, IMHO:

   1. Short-term loss of revenue due to better hardware utilization for the
   same work.  Because of improved competitiveness, the dip should be
   short-lived if it happens at all.  Not all customers will embrace Pipes at the
   same rate or at the same time.  On the other hand, there is always the
   chance that a few successes turn into a snowball.
   2. From the customer perspective, converting JCL and COBOL to REXX and
   Pipes may not appear to justify the expense and/or the skills to do so may
   be lacking.  If a mechanism could be created to convert JCL to Pipes on the
   fly, that may change the picture.  It's not obvious, but most sequential
   datasets used to pass data from step to step could be replaced with a pipe,
   provided you can come up with a way to start all steps in parallel, giving
   each their own TIOT and other control blocks.  Giving each step its own
   address space would require cross address space addressing.  Blocks of
   addresses would have to be reserved for each pipe stage step, and be
   addressable, read only, by other pipe stage steps.  I'm not up on all the
   latest addressing mechanisms, but perhaps data-spaces would be applicable
   for this; only data needs to be addressable between step stages.
   3. AFAIK, the newer versions of Pipes are not SMP/E ready.
   4. The funding and staffing are not available.  This is probably due to
   misguided politics, including pro-UNIX and hard-core LE factions.
   5. The Pipe command is not well known or well marketed as part of
   BatchPipes.  This should change.
   6. There is not enough customer management push or technical
   understanding of the enormous advantages.

The last points in each section may be the most important. Push from customers 
concerned about global warming may be the straw that gets IBM to do what is 
right.

Action items for anyone who is interested:

   - Make your data center, budgeting and other management aware of how
   they can save resources, time and money by installing BatchPipes.
   - Bug your IBM representative for information on the products BatchPipes
   and TSO Pipelines.  Too many of them don't even know what they are.
   - Get a free trial installation of BatchPipes.
   - Code up some Pipes and non-Pipes timing and resource comparisons.
   Publish the results.  If nothing else, try sorting a stem (many long
   records), once the usual way and once by concatenating a null string each
   time you assign a record to a new position.  The latter should force the
   original copy to be garbage collected when the time comes.
   - If you are not familiar with Pipes, go to
   http://vm.marist.edu/~pipeline/ .  There is lots of documentation and
   code.  It's all free.
   - Put in a requirement to package the REXX compiler together with Pipes
   and/or put the compiler in the z/OS base.  You don't get UNIX without a C
   compiler.  You don't get z/VM without the REXX compiler.  Why does z/OS
   not include the REXX compiler?  (Do I smell an LE monkey wrench?)
   - Request that TSO Pipes (a.k.a. BatchPipeWorks) be broken off from
   BatchPipes and made available standalone.  IMHO, the inter-JOB piping
   capability of BatchPipes was a kludgy mistake, confirmed by the low
   customer interest.  Intra-JOB capability can be done with the TSO Pipe
   command.
   - Request that BatchPipeWorks run without a subsystem where the
   inter-JOB functionality is not needed.
   - Get OOREXX and play with pipe.rex and usepipe.rex.  Compare the
   pipe.rex performance to a non-pipe equivalent.
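On the stem-sorting comparison suggested above, the null-string-concatenation trick looks like this in plain REXX.  This is a hypothetical sketch: the record count and lengths are invented, and whether a plain stem assignment shares storage internally depends on the interpreter or compiler, so measure on your own system.

```rexx
/* REXX -- hypothetical sketch of the suggested timing comparison.  */
/* Record count and lengths are invented for illustration.          */
do i = 1 to 100000
   rec.i = copies('x', 200) || i  /* build a stem of long records   */
end
rec.0 = 100000

call time 'R'                     /* reset the elapsed-time clock   */
do i = 1 to rec.0
   copy.i = rec.i                 /* plain assignment               */
end
say 'Plain assignment:          ' time('E') 'seconds'

drop copy.
call time 'R'
do i = 1 to rec.0
   copy.i = rec.i || ''           /* concatenating a null string    */
end                               /* forces a fresh copy            */
say 'Null-string concatenation: ' time('E') 'seconds'
```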

While I'm on the subject, kudos to Mike Cowlishaw (who created REXX) and John 
Hartmann (who created Pipes).

OREXXMan
Would you rather pass data in move mode (*nix piping) or locate mode
(Pipes) or via disk (JCL)?  Why do you think you rarely see *nix commands with
more than a dozen filters, while Pipelines specifications commonly run to 100s
of stages, and 1000s of stages are not uncommon?
IBM has been looking for an HLL for program products; REXX is that language.


On Wed, Sep 15, 2021 at 7:40 AM Paul Gilmartin < 
[email protected]> wrote:

> On Wed, 15 Sep 2021 05:50:13 -0500, Lionel B. Dyck wrote:
>
> >z/OS REXX does have a PIPES but it is only available currently as 
> >part of
> SmartBatch (or is it BatchPipes).
> >
> >There is an RFE asking for PIPES for TSO that is the highest voted 
> >RFE at
> the moment with 237 votes.
> >
> >
> https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=47699
> >
> >I'm not holding my breath for IBM to do it - if they really wanted to
> enhance the z/OS  REXX experience they would but that doesn't seem to 
> be their goal.
> >
> TANSTAAFL.  What's the business case for IBM?
>
--



----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
