On 29 January 2018 at 20:16, Paul Gilmartin <[email protected]> wrote:
> On 2018-01-29, at 11:55:56, Seymour J Metz wrote:
>>
>> While the DOS I/O was very device dependent, there was the DTFDI with
>> limited device independence.
>
> Insofar as "device independence" means restricting every device
> type to the capabilities of a card reader/punch.
>
> CMS is similarly limited. Pipelines adds some flexibility.

I knew we could drag this on into February ;-)

Indeed, traditional CMS programs all have their own logic to identify data sources, though we can access Shared File System directories as if they were minidisks and have most programs handle the data there. Exploitation of FILEDEF and NAMEDEF is minimal, as far as I know.

CMS Pipelines allows programs to be chained together, much like stdin and stdout let you do on UNIX. It comes with a suite of efficient built-in programs and provides a programming framework to write your own (REXX) programs that operate on input and output streams. CMS Pipelines goes beyond UNIX pipes with a multi-stream pipeline topology and coordinated error handling, which makes it possible to write real-world applications with pipes.

https://en.wikipedia.org/wiki/CMS_Pipelines

When you write your business logic as a pipeline (even when done as a monolithic piece of procedural REXX logic), the same logic can be used independent of where the data resides. This is also convenient during development and testing of applications, because you can run the logic against test data or capture intermediate results of the process without changing it. And if you have CMS Pipelines on z/OS, you can run the same business logic there and just provide a small wrapper to identify the data sources.

Sir Rob the Plumber
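To make the comparison with UNIX pipes concrete, here is an illustrative PIPE command (the file names are hypothetical; `<`, `locate`, `sort`, and `>` are built-in stages that read a CMS file, select matching records, sort them, and write the result):

```
PIPE < PAYROLL DATA A | locate /ACTIVE/ | sort | > ACTIVE DATA A
```

Swapping the `<` and `>` stages for others (say, a stage reading from a z/OS data set) leaves the business logic in the middle untouched, which is the point made above.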
