Re: History of Mainframe Cloud

2017-01-12 Thread Jack J. Woehr

Lizette Koehler wrote:

Multiple users were capable of accessing a central computer through dumb
terminals, whose only function was to provide access to the mainframe.


"A Conversation with Michael Cowlishaw", Dr. Dobb's Journal, March 1, 1996
http://www.drdobbs.com/a-conversation-with-michael-cowlishaw/184409842

"I suspect in ten years you may not be able to easily draw a distinction 
between what's a mainframe and what's not.
I've been wishing for some years that someone would take the mechanics of a PC, 
which belch out heat and noise,
and put them in a room miles away from where people are sitting, so that all 
you'd have on your desk would be
the input and output devices you need.

"That's essentially what the mainframes gave you, in that they consolidated the 
disk drives and the power supplies
and the central processing units in one place and people had something very 
simple on their desks.

"I'm not the first to point it out, but the World Wide Web browsers of today are 
effectively dumb terminals"

- Michael Cowlishaw

--
Jack J. Woehr # Science is more than a body of knowledge. It's a way of
www.well.com/~jax # thinking, a way of skeptically interrogating the universe
www.softwoehr.com # with a fine understanding of human fallibility. - Carl Sagan

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Git: Re: New free / open source z/OS tools from Dovetailed Technologies

2017-01-12 Thread David Crayford

On 13/01/2017 5:56 AM, Frank Swarbrick wrote:

Can you give a bit more detail on how you are utilizing this Ant SSH process?  
I don't know anything about Ant.  I am thinking/wondering how it might be used 
in conjunction with Git for source code management of z/OS COBOL programs (and 
jobs).  I know that Rocket is releasing a z/OS version of Git, but it still 
seems to me that for the main part of development (that is, other than the move 
of code to production) it would be useful to use Eclipse to update the source 
code on the local workstation and just send it (perhaps using this Ant SSH 
interface?) to z/OS to be compiled.  Or something like that!


I'm using Rocket's Git port for active development now that it's GA (in fact I 
was using it while it was still in beta). We use SMB network shares which are 
mounted on Windows, where I use SlickEdit for C++ and HLASM. We run 
Bitbucket locally, so the Git servers run on Windows servers. Sharing code 
between different systems using Git is easy: once you've cloned the repos, 
just keep them in sync by doing "git push" on one system and "git pull" on the 
other to fetch the updates. We've had code review meetings where bugs were 
fixed on different systems and then pushed and pulled right there and then! 
All our Java development is done using Eclipse on Windows. When we want to 
deploy it on z/OS we just use the push/pull method and then build using Maven. 
Git has revolutionized the way we work. We also run Jira, which has Bitbucket 
integration. We manage our projects in weekly agile sprints from a backlog of 
Jira tasks. Each task is attached to a Git branch in Bitbucket (at the press of 
a button). When the code is complete we use pull requests, which double as 
code reviews (pull requests and merges are done by different members of the 
team). We can then trace every line of code back to a Jira task.


Git has become indispensable. I've suggested to management that we should 
pay Rocket for support.







Thanks,

Frank


From: IBM Mainframe Discussion List  on behalf of Kirk Wolf 

Sent: Thursday, January 12, 2017 1:26 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: New free / open source z/OS tools from Dovetailed Technologies

Today we have released several new free / open source tools on our website:
   See the new "Community" page at: http://dovetail.com



Some highlights:

*Ant SSH* is a set of enhanced SSH tasks for Apache Ant.  For many years we
have used a workstation IDE (Eclipse) as a code editor for  C/C++, Java,
and some Assembler.  We find this tool to be indispensable since it allows
us with one button click in a couple of seconds to upload any dirty source
files and run an incremental build (make) on z/OS.   These custom Ant tasks
can be used with any (or no) IDE.

*ncdu* is a curses based user interface for navigating Unix file systems,
which can be very useful when trying to cull unused files from a highly
populated file system. The main interface makes it easy to quickly see
which directories contain the heaviest disk usage and the navigational
model is very intuitive, making it easy to traverse and delete files and
directories that are no longer needed.

One thing that we have done is to include our source project for *ncdu* that
includes a z/OS Makefile and Ant build.xml script that uses Ant-SSH.  This
serves as a complete demonstration of using a workstation IDE to develop
for z/OS.

Kirk Wolf
Dovetailed Technologies
http://dovetail.com



*Join us at SHARE in San Jose*
  Thursday, March 09, 10:00 AM - 11:00 AM
  Finding the Needle in a Haystack - Diagnosing Common OpenSSH Problems

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: AW: What is subsystem function code No. 39 and what does it do?

2017-01-12 Thread Tony Harminc
On 12 January 2017 at 04:10, Giliad Wilf
<00d50942efa9-dmarc-requ...@listserv.ua.edu> wrote:
> This macro only describes a layout of an extension to the SSOB for the 
> service requested.
> Yet, the "Using the Subsystem Interface" manual neither lists this function 
> with services you can request, nor does it list it with services you can 
> provide when writing your own subsystem.

This topic has come up a good handful of times on this list over the
last 20 years or so, and searching the archives will pay off.

Copies of GG66-3131 seem to be rarer than hen's teeth, but a few
people have one. (I don't.)

There is some closely related though fairly high-level doc in the
description of the Log Stream Subsystem Exit IXGSEXIT in the MVS
Installation Exits, currently SA23-1381-02. Unfortunately IBM does not
seem to provide source code for the default exit.

And Howard Gilbert's GPSAM in CBT file 290 from 1982(!), though it
implements only the Open and Close calls, should still work on current
systems with little or no modification. Its doc and code make
excellent reading even if you don't use the program itself.

This is just another thing that IBM has effectively de-documented over
the years. There used to be a manual OS/VS2 TSO Guide to Writing a
Terminal Monitor Program or a Command Processor, GC28-0648, and a
later XA edition, MVS/XA TSO Guide to Writing a Terminal Monitor
Program or a Command Processor, GC28-1295, that explained in detail
how to write a TMP. That was decommitted in the early ESA era, when
the TMP and the knowledge needed to write one went OCO. Similarly the
subsystem interface, once documented, has not been completely so for
many years.

The Using the Subsystem Interface book has for a long time not
documented even those very well known function codes like Open, Close,
Allocate; indeed it appears it is not possible to write your own JES
using this book alone.

This is the way we live.

Tony H.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Steve Smith
I think that SDB can't decide on a BLKSIZE when LRECL=0.  In any case, for
a Program Object PDSE, the BLKSIZE is nearly irrelevant.  PDSEs always use
a physical record length of 4K.  For non-PO PDSEs, the nominal BLKSIZE is
faked up for you as needed.

Use SDB (0) when you have an LRECL, and 32K-8 when you don't.  I do think
it would be nice if ISPF handled that, but there may be situations I
haven't thought of.

sas

On Thu, Jan 12, 2017 at 7:10 PM, Paul Gilmartin <
000433f07816-dmarc-requ...@listserv.ua.edu> wrote:

> On Wed, 11 Jan 2017 18:24:57 -0700, Lizette Koehler wrote:
>
> >I just used Option 3.2 and allocated a PDSE with LRECL 80 and Blksize 0
> with Version 2 (z/OS V2.1)
> >
> >No issues - it took it fine and the Blksize is 32760.
> >
> Did you populate it by copying a load module to a program object?
>
> FWIW, I deleted and re-allocated, again with BLKSIZE=0 and used /bin/cp to
> copy
> a load module.  No errors, and it changed BLKSIZE to 4096.
>
> This points a finger at ISPF Move/Copy Utility.
>
> -- gil
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



-- 
sas

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Paul Gilmartin
On Wed, 11 Jan 2017 18:24:57 -0700, Lizette Koehler wrote:

>I just used Option 3.2 and allocated a PDSE with LRECL 80 and Blksize 0  with 
>Version 2 (z/OS V2.1)
>
>No issues - it took it fine and the Blksize is 32760.
> 
Did you populate it by copying a load module to a program object?

FWIW, I deleted and re-allocated, again with BLKSIZE=0 and used /bin/cp to copy
a load module.  No errors, and it changed BLKSIZE to 4096.

This points a finger at ISPF Move/Copy Utility.

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Tom Ross
>Does anyone remember when the Cobol Compiler could do the PreCompile
>function for CICS or DB2 without running the actual standalone
>step for Precompile?

Yes, it started with COBOL V2R2 in 2000 with the SQL compiler option,
then was continued in 2001 with the CICS option in COBOL V3R1.

Both have been supported and extended in every release since!

Cheers,
TomR  >> COBOL is the Language of the Future! <<

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread retired mainframer
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Paul Gilmartin
> Sent: Thursday, January 12, 2017 1:21 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SDB and Program Object Library
> 
> On 2017-01-12 06:51, Allan Staller wrote:
> > Bad ACS routines?
> >
> Do these ACS routines operate
> o at data set creation?
> o at OPEN?
> o Both?
> 
> Is there a way for an affected user to determine what effect these ACS
> routines have?  Something like a "verbose" option?

You can determine the effects after the fact.  LISTCAT will display the names 
of the SMS classes assigned to the dataset.  You can then list the attributes 
of each class.
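
For example, a minimal batch sketch (the dataset name is a placeholder):

//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES('USER.LOADE') ALL
/*

For an SMS-managed dataset, the ALL output includes the storage class, 
management class, and data class names, which can then be listed in ISMF.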

At our site, we allowed users to specify the names of the classes they wanted 
if the defaults in ACS were not desired.  This is under control of some RACF 
profiles.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Cieri, Anthony

It may not be your ACS routines.

I can reproduce this behavior on a "test" LPAR when allocating a PDSE 
in ISPF 3.2 with RECFM=U.  I get the warning message to either DELETE or KEEP 
the dataset. If I keep it, I can't copy Load modules or program objects into 
then newly created D/S.

If I allocate the PDSE with either a Fixed or Variable RECFM, then 
BLKSIZE of 0 (or BLANK) is accepted. In the cases of RECFM = FB or VB, SDB 
produces an optimum block size for my device type.


  

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Thursday, January 12, 2017 4:47 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SDB and Program Object Library

On 2017-01-12 14:23, Allan Staller wrote:
> Allocation
>   
> Do these ACS routines operate
> o at data set creation?
> o at OPEN?
> o Both?
> 
"Allocation" is dismayingly ambiguous.  But I assume you mean allocation as in 
DISP=NEW rather than allocation as in DISP=OLD.

Data Set Utility tells me:
 Data Set Name . : User.LOADE
 Specified data set has zero block size.

 The data set allocated contains inconsistent attributes as indicated by the  
message displayed above. Prior to allocating a managed data set, ISPF cannot  
always determine if the attributes are inconsistent. The data class used when  
allocating the data set may contain inconsistent attributes, or the attributes  
you specified on the allocation panel may conflict with those defined in the  
data class. This panel gives you the opportunity to delete this data set. If  
you keep the data set, other ISPF functions, such as edit, move, or copy, may  
not be able to use the data set.

But I keep it.  Then Data Set List says:
 DSLIST - Data Sets Matching User.**.LOAD*   Row 1 of 9
 Command ===>  Scroll ===> CSR

 Command - Enter "/" to select action Dsorg  Recfm  Lrecl  Blksz
 ---
  User.LOADE   PO-E  U  0  0
  User.LOADU   POU256  19069

and Move/Copy Utility says:
 COPY From User.LOADUInvalid block size
 Command ===>

 To Other Partitioned or Sequential Data Set:
Name  . . . . . . . LOADE
Volume Serial . . .   (If not cataloged)
 
 ┌──────────────────────────────────────────┐
 │ Block size of data set must not be zero. │
 └──────────────────────────────────────────┘

... so, why doesn't SDB fix it?

-- gil


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datasets

2017-01-12 Thread retired mainframer
Do you really want to delete a migrated dataset after only 1 year?  Migration 
has already freed the disk space.  At our site, we frequently needed to restore 
datasets that were several years old when an old customer would return to the 
fold.

Do you really want to delete the backup copy of a dataset based on time and not 
number of copies?  The purpose of a backup is to allow a recovery when the 
primary is damaged.  If you delete your only backup after a year because the 
dataset has not been modified, how will you recover it?

What tool are you using for backup and migration?  If it is HSM, there are SMS 
parameters you can set to control when the backup and migration copies are 
deleted.  ASM2 from CA has similar capabilities built in.  I expect that FDR 
does also but I have never used that product.

And speaking of SMS, it can also control the deletion of primary datasets on 
disk.

Unless you free the tape in your tape management system, the only practical 
effect of deleting a tape dataset is removing its entry from the catalog if 
there was one.  How you free the tape will depend on which tape management 
system you use.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Ron Thomas
> Sent: Thursday, January 12, 2017 12:57 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: delete old datsets
> 
> It looks like this is not pulling migrated datasets. How can we also include
> migrated datasets in this list of datasets to be deleted? Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New free / open source z/OS tools from Dovetailed Technologies

2017-01-12 Thread Kirk Wolf
On Thu, Jan 12, 2017 at 3:56 PM, Frank Swarbrick <
frank.swarbr...@outlook.com> wrote:

> ...

it would be useful to use Eclipse to update the source code on the local
> workstation and just send it (perhaps using this Ant SSH interface?) to
> z/OS to be compiled.  Or something like that!
>

Exactly like that.

We use our workstation IDE's connection and tools for connecting to and
managing our SCCS.
You could certainly connect your workstation IDE to Git.

Ant-SSH lets you write an Ant XML script to upload only the changed files
and run z/OS commands like "make" to build, over the same SSH connection.
We use a desktop SSH key agent so that our scripts don't need passwords or
user prompts.

If you are already using Eclipse, try these steps:

-  download Ant-SSH to your workstation, and import it as a general project
in Eclipse
-  follow the Ant-SSH README to
add the ant-ssh/lib/*.jars to your Eclipse Ant classpath
-  test by customizing/running the ant-ssh/test/test1.xml test script
 (right-click "Run as / Ant Build")

- download the ncdu zip file to your workstation
- follow the ncdu README:
- customize build.properties per the README
- run Ant build.xml to upload and build ncdu on z/OS using ant-ssh

Kirk Wolf
Dovetailed Technologies
http://dovetail.com

>
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Gibney, Dave
Properly implement SMS. And then expiration happens according to business need.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> On Behalf Of Ron Thomas
> Sent: Thursday, January 12, 2017 11:28 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: delete old datsets
> 
> Hi. Could someone let me know how to delete old datasets
> SPXR3V.TEST.* that are, say, 1 year old, using a job?  I need to get this scheduled as
> a monthly process so that storage gets freed.  Thanks!
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send email to
> lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: History of Mainframe Cloud

2017-01-12 Thread Anne & Lynn Wheeler
re:
http://www.garlic.com/~lynn/2017.html#21 History of Mainframe Cloud

Les sent me this CP40/CMS presentation that he gave at '82 SEAS meeting,
and let me scan, OCR and put it up 
http://www.garlic.com/~lynn/cp40seas1982.txt

a copy is also in the appendix of Melinda's (neuall.pdf) VM history
paper. Some years ago, I converted Melinda's postscript to pdf & kindle
formats and she added them to her site. and earlier, from long ago and
far away at princeton
http://web.archive.org/web/20050924051057/http://www.princeton.edu/~melinda/index.html

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: New free / open source z/OS tools from Dovetailed Technologies

2017-01-12 Thread Frank Swarbrick
Can you give a bit more detail on how you are utilizing this Ant SSH process?  
I don't know anything about Ant.  I am thinking/wondering how it might be used 
in conjunction with Git for source code management of z/OS COBOL programs (and 
jobs).  I know that Rocket is releasing a z/OS version of Git, but it still 
seems to me that for the main part of development (that is, other than the move 
of code to production) it would be useful to use Eclipse to update the source 
code on the local workstation and just send it (perhaps using this Ant SSH 
interface?) to z/OS to be compiled.  Or something like that!


Thanks,

Frank


From: IBM Mainframe Discussion List  on behalf of 
Kirk Wolf 
Sent: Thursday, January 12, 2017 1:26 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: New free / open source z/OS tools from Dovetailed Technologies

Today we have released several new free / open source tools on our website:
  See the new "Community" page at: http://dovetail.com



Some highlights:

*Ant SSH* is a set of enhanced SSH tasks for Apache Ant.  For many years we
have used a workstation IDE (Eclipse) as a code editor for  C/C++, Java,
and some Assembler.  We find this tool to be indispensable since it allows
us with one button click in a couple of seconds to upload any dirty source
files and run an incremental build (make) on z/OS.   These custom Ant tasks
can be used with any (or no) IDE.

*ncdu* is a curses based user interface for navigating Unix file systems,
which can be very useful when trying to cull unused files from a highly
populated file system. The main interface makes it easy to quickly see
which directories contain the heaviest disk usage and the navigational
model is very intuitive, making it easy to traverse and delete files and
directories that are no longer needed.

One thing that we have done is to include our source project for *ncdu* that
includes a z/OS Makefile and Ant build.xml script that uses Ant-SSH.  This
serves as a complete demonstration of using a workstation IDE to develop
for z/OS.

Kirk Wolf
Dovetailed Technologies
http://dovetail.com



*Join us at SHARE in San Jose*
 Thursday, March 09, 10:00 AM - 11:00 AM
 Finding the Needle in a Haystack - Diagnosing Common OpenSSH Problems


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Paul Gilmartin
On 2017-01-12 14:23, Allan Staller wrote:
> Allocation
>   
> Do these ACS routines operate
> o at data set creation?
> o at OPEN?
> o Both?
> 
"Allocation" is dismayingly ambiguous.  But I assume you mean
allocation as in DISP=NEW rather than allocation as in DISP=OLD.

Data Set Utility tells me:
 Data Set Name . : User.LOADE
 Specified data set has zero block size.

 The data set allocated contains inconsistent attributes as indicated by the
 message displayed above. Prior to allocating a managed data set, ISPF cannot
 always determine if the attributes are inconsistent. The data class used when
 allocating the data set may contain inconsistent attributes, or the attributes
 you specified on the allocation panel may conflict with those defined in the
 data class. This panel gives you the opportunity to delete this data set. If
 you keep the data set, other ISPF functions, such as edit, move, or copy, may
 not be able to use the data set.

But I keep it.  Then Data Set List says:
 DSLIST - Data Sets Matching User.**.LOAD*   Row 1 of 9
 Command ===>  Scroll ===> CSR

 Command - Enter "/" to select action Dsorg  Recfm  Lrecl  Blksz
 ---
  User.LOADE   PO-E  U  0  0
  User.LOADU   POU256  19069

and Move/Copy Utility says:
 COPY From User.LOADUInvalid block size
 Command ===>

 To Other Partitioned or Sequential Data Set:
Name  . . . . . . . LOADE
Volume Serial . . .   (If not cataloged)
 
 ┌──────────────────────────────────────────┐
 │ Block size of data set must not be zero. │
 └──────────────────────────────────────────┘

... so, why doesn't SDB fix it?

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Farley, Peter x23353
AFAIK, ACS code and allocation results are hidden from ordinary users except in 
the observed allocation itself.  Again AFAIK, only storage admins have access 
to any logging (if any) that ACS routines may do.

 My unfortunate experience has been that ordinary users are not 
considered smart enough to see or understand what storage admins do. 

Peter

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Thursday, January 12, 2017 4:21 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SDB and Program Object Library

On 2017-01-12 06:51, Allan Staller wrote:
> Bad ACS routines?
>  
Do these ACS routines operate
o at data set creation?
o at OPEN?
o Both?

Is there a way for an affected user to determine what effect these ACS
routines have?  Something like a "verbose" option?

> 
> Is there any rationale to this logic, that BLKSIZE=0, which is supposed to 
> defer the choice to SDB, fails, while the absurd BLKSIZE=1 succeeds and SDB 
> operates?
> 

Thanks,
gil

--




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Lizette Koehler
Actually, I might suggest using TEST under the ISMF function in ISPF.

But for verbose, you would need to get your friendly Storage Admin to add WRITE 
Statements to the SMS routines.

Lizette

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Lizette Koehler
Then as others have pointed out

Catalog Search Interface to create a list of potential datasets.
A second process to parse the information and build your

IDCAMS - DELETE   or DFDSS Delete

HSM BDELETE or HDEL commands

Or you could look at a tool that could do this more easily.

There is not one tool in z/OS to do it all at this time.
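
To illustrate the IDCAMS and HSM pieces above, a minimal two-step sketch 
(the dataset names are placeholders; the real names would come out of the 
catalog search step):

//DELDISK  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE 'SPXR3V.TEST.SOME.DSN' NONVSAM
/*
//DELMIG   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  HDELETE 'SPXR3V.TEST.MIGRATED.DSN'
/*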

Lizette

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Allan Staller
Allocation

On 2017-01-12 06:51, Allan Staller wrote:
> Bad ACS routines?
>  
Do these ACS routines operate
o at data set creation?
o at OPEN?
o Both?




Is there a way for an affected user to determine what effect these ACS routines 
have?  Something like a "verbose" option?


NO

> 
> Is there any rationale to this logic, that BLKSIZE=0, which is supposed to 
> defer the choice to SDB, fails, while the absurd BLKSIZE=1 succeeds and SDB 
> operates?
> 

Thanks,
gil

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Paul Gilmartin
On 2017-01-12 06:51, Allan Staller wrote:
> Bad ACS routines?
>  
Do these ACS routines operate
o at data set creation?
o at OPEN?
o Both?

Is there a way for an affected user to determine what effect these ACS
routines have?  Something like a "verbose" option?

> 
> Is there any rationale to this logic, that BLKSIZE=0, which is supposed to 
> defer the choice to SDB, fails, while the absurd BLKSIZE=1 succeeds and SDB 
> operates?
> 

Thanks,
gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Ron Thomas
Hi Lizette - At this point it is fine for me to delete the DISK and HSM-migrated 
datasets.  Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Scott Barry
Consider using IBM DCOLLECT for a primary-DASD/DFHSM-migrated dataset inventory, 
and couple that output with DFSORT-supported utility reporting to filter it 
and generate any required post-processing batch-execution commands.
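
A minimal DCOLLECT sketch (the output dataset name and volume serial are 
placeholders; check the Access Method Services documentation for the full 
set of keywords and output record attributes):

//DCOL     EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//OUTDS    DD DSN=MY.DCOLLECT.DATA,DISP=(NEW,CATLG),
//            SPACE=(CYL,(10,10)),RECFM=VB,LRECL=644
//SYSIN    DD *
  DCOLLECT OFILE(OUTDS) VOLUMES(VOL001) MIGRATEDATA
/*

The output records can then be filtered with DFSORT/ICETOOL to build the 
delete commands.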

Scott Barry
SBBWorks, Inc.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datasets

2017-01-12 Thread Feller, Paul
Depending on what you are trying to do you might consider writing some REXX 
code that uses the catalog search interface (IGGCSI00) program.  You could then 
create the needed jobs to delete the datasets.

Thanks..

Paul Feller
AGT Mainframe Technical Support


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ron Thomas
Sent: Thursday, January 12, 2017 14:57
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: delete old datsets

It looks like this is not pulling migrated datasets. How can we also include 
migrated datasets in this list of datasets to be deleted? Thanks!


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Lizette Koehler
Ron,

Let me know if I understand

To delete OLD Datasets that reside on 
TAPE
DISK
HSM Migrated
HSM Backup

Does this cover it?  If it does, you basically are asking to clean up any 
dataset that meets criteria of your choosing.

You probably will need to use the CATALOG SEARCH INTERFACE, to build a list of 
datasets, where they reside and either last access date or creation date.

Then use another tool to build the appropriate process to do the DELETE.

TAPE - USE CA1 Utilities (If CA1 Shop)
DASD - IDCAMS or DFDSS
HSM Migrated/Backups - HSM commands

Does this cover it?

Lizette

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Ron Thomas
It looks like this is not pulling migrated datasets. How can we also include 
migrated datasets in this list of datasets to be deleted? Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread PINION, RICHARD W.
I'll chime in: as another poster said, FDREPORT is a good tool
for reporting on all sorts of conditions dealing with data sets.
FDREPORT can report on DASD as well as tape data sets. In addition,
FDREPORT can generate whatever kind of utility JCL/control cards
you prefer to use for deleting the old data sets.  If some of those
data sets have been migrated, you can use FDREPORT to handle those
with HSM HDELETE commands, thereby eliminating the need to have
them recalled only to be deleted.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Smith
Sent: Thursday, January 12, 2017 3:39 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: delete old datsets

I recommend CR+ from Rocket Software, self-serving as that is.  It would make 
this fairly easy.

sas

On Thu, Jan 12, 2017 at 3:16 PM, Ron Thomas  wrote:

> Ok thanks. So will this utility delete tape and archived 
> datasets? Thanks!
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions, send 
> email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



--
sas

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Steve Smith
I recommend CR+ from Rocket Software, self-serving as that is.  It would
make this fairly easy.

sas

On Thu, Jan 12, 2017 at 3:16 PM, Ron Thomas  wrote:

> Ok thanks. So will this utility delete tape and archived datasets?
> Thanks!
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



-- 
sas

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


AW: What is subsystem function code No. 39 and what does it do?

2017-01-12 Thread Peter Hunkeler
>Is there any discussion per function code No. 39 somewhere?



From an old "orange book" titled "The Subsystem Interface in MVS/SP Version 3" 
(GG66-3131-00):


Function codes 7, 16, 17, 18, 19, 38, and 39 are the codes to build a subsystem 
that supports the SUBSYS= keyword in DD allocations. At conversion time, 
function 38 (Converter) is called once for each DD to verify the parameters on 
the SUBSYS= keyword. At step initiation, function 39 (Allocation Group) is 
called once for all DDs in the step specifying this subsystem on the SUBSYS= 
keyword. Functions 16 and 17 are the OPEN and CLOSE routines, resp. Function 7 
(Unallocation) is called once for each DD statement at unallocation. Functions 
18 and 19 support Checkpoint and Restart, resp.
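
For context, a DD that drives such a subsystem looks roughly like this (the 
subsystem name SSNM and its subparameters are made up for illustration):

//INPUT    DD DSN=MY.LOGICAL.NAME,
//            SUBSYS=(SSNM,'OPTION1','OPTION2')

Function 38 would see this DD at conversion to validate the SUBSYS 
subparameters, and function 39 would be driven once at step initiation for 
all such DDs in the step.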


I don't know where writing your own subsystem to support SUBSYS= allocations 
is documented these days. It is not in the z/OS MVS Using the Subsystem 
Interface manual for whatever reason.


--
Peter Hunkeler



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


New free / open source z/OS tools from Dovetailed Technologies

2017-01-12 Thread Kirk Wolf
Today we have released several new free / open source tools on our website:
  See the new "Community" page at: http://dovetail.com

Some highlights:

*Ant SSH* is a set of enhanced SSH tasks for Apache Ant.  For many years we
have used a workstation IDE (Eclipse) as a code editor for  C/C++, Java,
and some Assembler.  We find this tool to be indispensable since it allows
us with one button click in a couple of seconds to upload any dirty source
files and run an incremental build (make) on z/OS.   These custom Ant tasks
can be used with any (or no) IDE.

*ncdu* is a curses based user interface for navigating Unix file systems,
which can be very useful when trying to cull unused files from a highly
populated file system. The main interface makes it easy to quickly see
which directories contain the heaviest disk usage and the navigational
model is very intuitive, making it easy to traverse and delete files and
directories that are no longer needed.

One thing that we have done is to include our source project for *ncdu* that
includes a z/OS Makefile and Ant build.xml script that uses Ant-SSH.  This
serves as a complete demonstration of using a workstation IDE to develop
for z/OS.

Kirk Wolf
Dovetailed Technologies
http://dovetail.com

*Join us at SHARE in San Jose*
 Thursday, March 09, 10:00 AM - 11:00 AM
 Finding the Needle in a Haystack - Diagnosing Common OpenSSH Problems

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Ron Thomas
Ok thanks. So will this utility delete tape and archived datasets? 
Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: EXTERNAL: History of Mainframe Cloud

2017-01-12 Thread John McKown
On Thu, Jan 12, 2017 at 1:44 PM, Phil Smith  wrote:

> >VM's roots are in CP-67, which goes back to 1968 or so.
>
> And CP-40 before that. See http://www.leeandmelindavarian.com/
> Melinda/neuvm.pdf for the real scoop!
>
> There were giants in those days...
> ...in which class I don't consider myself, but for laughs, look at PDF
> page 107 for a VERY old picture of me!
>

Hum, that'd be an OLDER picture of a NEWER you?



> --
> ...phsiii
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>



-- 
There’s no obfuscated Perl contest because it’s pointless.

—Jeff Polk

Maranatha! <><
John McKown

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread Mark Post
>>> On 1/12/2017 at 04:17 AM, Itschak Mugzach  wrote: 
> I am trying to install SUSE Linux 12 SP2 on a zPDT native. I am using
> instructions from a Redbook titled The Virtualization Cookbook for SLES
> 11, which has instructions for SLES 12 as well.
> 
> The problem I face is a message ** No repository found. I tried the FTP
> installation and later the Hard Drive method as well. Neither works for me.

The Hard Drive method won't work on a mainframe, even an emulated one.  Since 
you're installing SLES12 SP2, if zPDT supports it, _and_ you have your zPDT set 
up properly, you _should_ be able to use the HMC DVD drive as the installation 
source.

> There is no installation log, so I can't figure is a real FTP is needed
> (VSFTP is running on the base Linux).

If the HMC DVD method doesn't work, then yes you'll need a real FTP server 
somewhere that the Linux system can access it.

> 
> Any idea why the installer doesn't find the materials in base directory?

This is likely to be due to having an incorrect network setup.  After you get 
the "No repository found" message, you can enter "x" as a response, and then 3 
(I believe) to start a shell.  Once in the shell you can display the networking 
information with ifconfig, route, etc.  I don't think there's a ping command in 
the installation initrd, but I could be wrong.

I think you'll find it much (MUCH) easier to do your first install as a z/VM 
guest.  If you set up the virtual machine with the same (virtual) addresses as 
will be present in the native "LPAR" install, you should be able to transfer 
your work to that environment quite easily later on.

If you have further problems, feel free to contact me off list, or post to the 
Linux-390 mailing list where most of the mainframe Linux folks hang out.


Mark Post

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: EXTERNAL: History of Mainframe Cloud

2017-01-12 Thread Phil Smith
>VM's roots are in CP-67, which goes back to 1968 or so.

And CP-40 before that. See http://www.leeandmelindavarian.com/Melinda/neuvm.pdf 
for the real scoop!

There were giants in those days...
...in which class I don't consider myself, but for laughs, look at PDF page 107 
for a VERY old picture of me!
--
...phsiii

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Robert2 Gensler
Hi Ron,

If you are licensed for DFSMSdss you could use our logical data set DUMP
with the DELETE keyword.  It can select data sets based on physical
characteristics, including creation date as reported in the VTOC.  In this
case your output dump data set would be a DUMMY DD.

Something like:
//STEP EXEC PGM=ADRDSSU
//SYSPRINT DD   SYSOUT=A <-- might need to modify that
//DDDUMMY  DD   DUMMY
//SYSINDD   *
  DUMP DELETE DS(INCL(SPXR3V.TEST.**) -
       BY((CREDT,LE,*,-365))) -
       OUTDD(DDDUMMY)
/*

I would suggest testing with TYPRUN=NORUN first to make sure DSS is
selecting the right set of data sets you want to delete:
//STEP EXEC PGM=ADRDSSU,PARM='TYPRUN=NORUN'

Thanks,
Robert Gensler
DFSMSdss Architecture and Development
Tucson, AZ
1-720-349-5211
rgen...@us.ibm.com

IBM Mainframe Discussion List  wrote on
01/12/2017 02:28:00 PM:

> From: Ron Thomas 
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date: 01/12/2017 02:28 PM
> Subject: delete old datsets
> Sent by: IBM Mainframe Discussion List 
>
> Hi. Could someone let me know how to delete old datasets
> SPXR3V.TEST.* that are, say, 1 year old, using a job?  I need to get this
> scheduled as a monthly process so that storage gets freed.  Thanks!
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Carmen Vitullo
Better yet! 
I've not played with ADRDSSU for so long that I didn't know it had this functionality. 
Very nice! 


- Original Message -

From: "Todd Burrell"  
To: IBM-MAIN@LISTSERV.UA.EDU 
Sent: Thursday, January 12, 2017 1:33:51 PM 
Subject: Re: delete old datsets 

Try playing around with DFDSS with JCL like this and DEFINITELY test with 
TYPRUN=NORUN before running it live. 

//CATCLEAN EXEC PGM=ADRDSSU,PARM='TYPRUN=NORUN', 
// REGION=8M 
//SYSUT2 DD DUMMY 
//*YSU03 DD DISP=SHR,UNIT=SYSDA,VOL=SER=TSOS03 
//SYSIN DD *,DCB=BLKSIZE=80 
DUMP - 
OUTDD(SYSUT2) - 
DATASET (INCLUDE (BF19.**) - 
BY ((CREDT,LE,*,-1))) DELETE 
/* 
//SYSOUT DD SYSOUT=* 
//SYSPRINT DD SYSOUT=* 
//SYSUDUMP DD DUMMY 

You can change the INCLUDE statement to be what you want and then change the BY 
parameter to be -365 instead of -1. 
The dummy output will make sure the datasets are deleted when it finishes. 

-Original Message- 
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ron Thomas 
Sent: Thursday, January 12, 2017 2:28 PM 
To: IBM-MAIN@LISTSERV.UA.EDU 
Subject: delete old datsets 

Hi. Could someone let me know how to delete old datasets SPXR3V.TEST.* that are, 
say, 1 year old, using a job? I need to get this scheduled as a monthly process so 
that storage gets freed. Thanks! 

-- 
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN 





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Burrell, Todd
Try playing around with DFDSS with JCL like this and DEFINITELY test with 
TYPRUN=NORUN before running it live. 

//CATCLEAN EXEC  PGM=ADRDSSU,PARM='TYPRUN=NORUN', 
//  REGION=8M 
//SYSUT2 DD DUMMY 
//*YSU03 DD DISP=SHR,UNIT=SYSDA,VOL=SER=TSOS03
//SYSIN DD *,DCB=BLKSIZE=80   
 DUMP  -  
 OUTDD(SYSUT2) -  
 DATASET (INCLUDE (BF19.**) - 
BY ((CREDT,LE,*,-1)))   DELETE
/*
//SYSOUT   DD SYSOUT=*
//SYSPRINT  DD SYSOUT=*   
//SYSUDUMP  DD DUMMY  

You can change the INCLUDE statement to be what you want and then change the BY 
parameter to be -365 instead of -1.  
The dummy output will make sure the datasets are deleted when it finishes. 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ron Thomas
Sent: Thursday, January 12, 2017 2:28 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: delete old datsets

Hi. Could someone let me know how to delete old datasets SPXR3V.TEST.* that are, 
say, 1 year old, using a job?  I need to get this scheduled as a monthly process 
so that storage gets freed.  Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: delete old datsets

2017-01-12 Thread Carmen Vitullo
If you have a DASD reporting tool and a report writer (SAS or WPS), you can 
run, for example, an FDR report based on your criteria, feed the results to 
SAS, create the IDCAMS or IEHPROGM control cards, and call that program from 
SAS or WPS. 
I've done this in batch on a weekly basis before. 
Carmen 
- Original Message -

From: "Ron Thomas"  
To: IBM-MAIN@LISTSERV.UA.EDU 
Sent: Thursday, January 12, 2017 1:28:00 PM 
Subject: delete old datsets 

Hi. Could someone let me know how to delete old datasets SPXR3V.TEST.* that are, 
say, 1 year old, using a job? I need to get this scheduled as a monthly process so 
that storage gets freed. Thanks! 

-- 
For IBM-MAIN subscribe / signoff / archive access instructions, 
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN 


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


delete old datsets

2017-01-12 Thread Ron Thomas
Hi. Could someone let me know how to delete old datasets SPXR3V.TEST.* that are, 
say, 1 year old, using a job?  I need to get this scheduled as a monthly process 
so that storage gets freed.  Thanks!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM: Why MAXLRECL?

2017-01-12 Thread Allan Staller
Write the records at the total size (including the reserved portion).

Example: 500 bytes of data and 500 bytes "reserved".

If the record written is 1000 bytes long, but you are only using the first 500 
bytes, up to 500 bytes of additional information could later be added to the 
record without having to alter the physical dataset.
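
As a sketch of that approach (the cluster name, key length, and sizes are 
placeholders), the DEFINE carries the full record size so the reserved space 
exists from the start:

//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(HLQ.SAMPLE.KSDS) -
         INDEXED -
         KEYS(16 0) -
         RECORDSIZE(1000 1000) -
         CYLINDERS(10 10))
/*

The application then writes 1000-byte records but only fills the first 500 
bytes for now.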



Are you suggesting to actually write the records to be the size including the 
"reserved" data, or to write the records of the size currently needed but have 
MAXLRECL defined so that records larger than are currently used can be written 
in the future?  I was thinking the latter, but I want to clarify what you are 
suggesting.






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: History of Mainframe Cloud

2017-01-12 Thread Anne & Lynn Wheeler
stars...@mindspring.com (Lizette Koehler) writes:
> https://www.ibm.com/blogs/cloud-computing/2014/03/a-brief-history-of-cloud-computing-3/
>
> After some time, around 1970, the concept of virtual machines (VMs)
> was created.

mid-60s, some of the CTSS people went to 5th flr to do MULTICS
... others went to the ibm science center on the 4th flr and did
cp40/cms (after having done the hardware modifications to add virtual
memory to 360/40). cp40/cms morphs into cp67/cms when 360/67 that came
standard with virtual memory becomes available in 1967. 

transition to online available 7x24 was an issue since machine usage
offshift from home was quite variable ... and ibm mainframe even sitting
idle (but available) was quite expensive. lots of work was done on cp67
to support darkroom, unattended operation ... to minimize offshift costs
when (especially initially) there was little usage (but in order to
promote offshift usage, system had to be up 7x24).

this was also in the days when systems were rented ... and monthly
charges were based on the "system meter" that ran whenever the processor
and/or (any) channel was executing (processor and all channel activity
had to be idle for at least 400ms before system meter stopped). cp67 did
some special programming so that channel would go idle ... but instantly
wake up whenever there was any arriving characters ... further reducing
offshift costs when idle. Trivia: long after the shift from rent/leased
to sales ... MVS still had a timer task that woke up every 400ms (making
sure that system meter never stopped).

The science center offered online service both internal and to
students/staff/professors at local univ. in cambridge area. One of the
issues was highest security since some of the internal users were Armonk
business planners which had loaded the most valuable and sensitive
corporate data on the system ... and the system was also being used by
non-employees from local universities. CP67/CMS was also being used by
various gov. agencies. I was undergraduate at the time but doing
extenive CP67/CMS changes ... and would even periodically get requests
from IBM for enhancements. I didn't know about it at the time, but some
of the (security related) requests from IBM may have actually originated
from gov. agencies (although I didn't find out about them until much
later). gone 404, but still lives free at the wayback machine:
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

even before I graduated, Boeing hired me full time to help with the
formation of Boeing Computer Services ... consolidate all of
dataprocessing in an independent business unit to better monetize the
investment (including offering services to non-Boeing entities). Boeing
Renton had somewhere around $300M (late 60s $$s) in IBM mainframes
... 360/65s were arriving faster than they could be installed ... and
were in the process of replicating the Renton datacenter at Paine Field
(for disaster survival).

late 70s/early 80s customers bought large number of 4300s. datacenter
4300 clusters had more aggregate processing & i/o than high-end
mainframes at significantly lower cost, smaller footprint, lower power
usage and environmental requirements. large customers also had orders
of 4300 in hundreds at a time for placing out in departmental areas
(sort of the leading edge of the distributed computing tsunami). In
1979, I was asked to do 4341 benchmarks for LLNL that was looking at
getting 70 4341s for compute farm ... a leading edge of the coming
cluster supercomputers.

modern large cloud operators will have several megadatacenters, each one
with hundreds of thousands of systems (and millions of processors)
staffed by 80-100 people ... to meet elastic, on-demand computing. They
claim that they assemble their own systems for 1/3rd the cost of systems
from brand name vendors (and server chips for cloud megadatacenters
exceed number going to brand name server vendors) ... enabling
provisioning for enormous elastic ondemand ... system costs have been so
radically reduced that power has increasingly become the major
cost. They've worked extensively with chip makers so that they can have
systems where power/cooling drops to zero when idle ... but are "instant
on" as needed for on-demand use (going way beyond what was done for
cp67/cms in the 60s).

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: EXTERNAL: History of Mainframe Cloud

2017-01-12 Thread Charles Mills
VM's roots are in CP-67, which goes back to 1968 or so.

Charles

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
Behalf Of Jerry Whitteridge
Sent: Thursday, January 12, 2017 9:59 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: EXTERNAL: History of Mainframe Cloud

Hmmm - Pretty sure z/VM and its predecessors predated VMware!

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email
to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Frank Swarbrick
We use CALL 'CBLTDLI' rather than EXEC DLI, but if we did use EXEC DLI I agree 
that this forcing of NODYNAM would be quite an irritant!  Sounds like an RFE 
might be in order.


As for NORENT, I don't think we've used that since before COBOL II!  (In other 
words, before my time.)


Frank


From: IBM Mainframe Discussion List  on behalf of 
Dale R. Smith 
Sent: Thursday, January 12, 2017 9:43 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Cobol and PreCompile for CICS and DB2

On Thu, 12 Jan 2017 09:15:37 -0600, David Magee  
wrote:

>The last time I looked into doing that we had problems because certain
>scenarios were not supported using the integrated translator during the
>compile step.  EXCI applications are what come to mind off the top of my head.
>Have all the restrictions been lifted at this point? If so, at what levels of
>COBOL and CICS T/S?

The following Technote says that you can Compile an EXCI COBOL program with the 
Integrated Translator and it should run OK.  You will get an informational 
Compile message saying it's not supported, but it should Compile and run 
correctly.

http://www-01.ibm.com/support/docview.wss?uid=swg21661391
When compiling code with option CICS(EXCI) using the integrated translator, you 
receive message DFH7006I W the EXCI option is not supported by the integrated 
translator. However, the application program seems to work okay. So you would 
like to know, can an external CICS interface (EXCI) batch program compiled with 
the integrated translator be used without problems?



My main beef with the Integrated Translator is that it does not support Batch 
IMS (EXEC DLI) programs the same way the separate Translator does.  The 
Integrated Translator always assumes your programs contain EXEC CICS commands 
and forces NODYNAM and RENT Compiler options on.  Normally our Batch programs 
are Compiled with DYNAM and NORENT.  It's possible that despite the initial 
overhead, there may be some benefit to using RENT for Batch COBOL programs on 
modern hardware.  I have not tried to measure the difference yet.  So we could 
probably live with RENT.  NODYNAM would be a problem though since most of the 
programs use "Call 'program'" static calls so they would all have to be changed 
to dynamic calls to use the Integrated Translator.  I only have access to 
Enterprise COBOL V4.2, but I don't believe it has changed for V5 or V6 of 
Enterprise COBOL.

Lizette, I believe the Integrated Translator, (EXEC CICS, EXEC DLI),  and the 
Integrated PreCompiler (EXEC SQL) were added in Enterprise COBOL V3.1 and 
enhanced with several releases after that.

--
Dale R. Smith

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM: Why MAXLRECL?

2017-01-12 Thread Frank Swarbrick
Are you suggesting actually writing the records at the full size, including the 
"reserved" data, or writing records at the size currently needed but with 
MAXLRECL defined so that records larger than are currently used can be written 
in the future?  I was thinking the latter, but I want to clarify what you are 
suggesting.


Thanks!

Frank


From: IBM Mainframe Discussion List  on behalf of 
Allan Staller 
Sent: Thursday, January 12, 2017 6:49 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: VSAM: Why MAXLRECL?

In IBM parlance this is a "reserved" field.
Define the LRECL to the max needed.
Double that.
Add 10%.

This is the LRECL you define.
Go back to the original LRECL and define what you need.
Define the remainder as "reserved for future use".

In that way, you only have to change the record definitions, not the file.

HTH,


Since the logging is only for troubleshooting purposes I decided to simply live 
with truncation of the end of the messages.  But had the MAXLRECL for this file 
already been 32761, or some similar fairly large number I simply would have 
changed the program that logs the messages to not truncate at 500 bytes.

My point is, it seems to me that MAXLRECL is a fairly "artificial limit", 
unless there is actually a good reason that you'd want a MAXLRECL that is at 
least close to the, well, maximum record length.







--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: EXTERNAL: History of Mainframe Cloud

2017-01-12 Thread Jerry Whitteridge
Hmmm - Pretty sure z/VM and its predecessors predated VMware!

Jerry Whitteridge
Manager Mainframe Systems & Storage
Albertsons - Safeway Inc.
925 738 9443
Corporate Tieline - 89443

If you feel in control
you just aren't going fast enough.




-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Lizette Koehler
Sent: Thursday, January 12, 2017 10:56 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: EXTERNAL: History of Mainframe Cloud


--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN




History of Mainframe Cloud

2017-01-12 Thread Lizette Koehler
https://www.ibm.com/blogs/cloud-computing/2014/03/a-brief-history-of-cloud-computing-3/


When we think of cloud computing, we think of situations, products and ideas
that started in the 21st century. This is not exactly the whole truth. Cloud
concepts have existed for many years. Here, I will take you back to that time.

It was a gradual evolution that started in the 1950s with mainframe computing.

Multiple users were capable of accessing a central computer through dumb
terminals, whose only function was to provide access to the mainframe. Because
of the costs to buy and maintain mainframe computers, it was not practical for
an organization to buy and maintain one for every employee. Nor did the typical
user need the large (at the time) storage capacity and processing power that a
mainframe provided. Providing shared access to a single resource was the
solution that made economical sense for this sophisticated piece of technology.

After some time, around 1970, the concept of virtual machines (VMs) was created.

Using virtualization software like VMware, it became possible to execute one or
more operating systems simultaneously in an isolated environment. Complete
(virtual) computers could be run on a single piece of physical hardware, which
in turn could run a completely different operating system.

The VM operating system took the 1950s' shared access mainframe to the next
level, permitting multiple distinct computing environments to reside on one
physical environment. Virtualization came to drive the technology, and was an
important catalyst in the communication and information evolution.

In the 1990s, telecommunications companies started offering virtualized private
network connections.

Historically, telecommunications companies only offered single dedicated
point-to-point data connections. The newly offered virtualized private network
connections had the same service quality as their dedicated services at a
reduced cost. Instead of building out physical infrastructure to allow for more
users to have their own connections, telecommunications companies were now able
to provide users with shared access to the same physical infrastructure.

The following list briefly explains the evolution of cloud computing:

. Grid computing: Solving large problems with parallel computing

. Utility computing: Offering computing resources as a metered service

. SaaS: Network-based subscriptions to applications

. Cloud computing: Anytime, anywhere access to IT resources delivered
dynamically as a service

Now, let's talk a bit about the present.

http://www.softlayer.com/ is one of the largest global providers of cloud
computing infrastructure.

IBM already has platforms in its portfolio that include private, public and
hybrid cloud solutions. The purchase of SoftLayer guarantees an even more
comprehensive infrastructure-as-a-service (IaaS) solution
(http://www.ibm.com/blogs/cloud-computing/2014/02/what-is-infrastructure-as-a-service-iaas/).
While many companies look to maintain some applications in data centers, many
others are moving to public clouds.

Even now, the purchase of bare metal can be modeled in commercial cloud (for
example, billing by usage or, put another way, physical server billing by the
hour). The result of this is that a bare metal server request with all the
resources needed, and nothing more, can be delivered within a matter of hours.

In the end, the story is not finished here. The evolution of cloud computing has
only begun. 





Lizette Koehler

statistics: A precise and logical method for stating a half-truth inaccurately

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread Lizette Koehler
So, was the mainframe not a "CLOUD" before the term was used?  I seem to remember 
some papers/articles extolling the wonders of the Mainframe as a Cloud before 
"cloud" became the fashionable term.

http://www-03.ibm.com/systems/z/solutions/hybrid-cloud/index.html

https://www.comparethecloud.net/articles/mainframe-the-ultimate-cloud-platform/

http://www.eweek.com/cloud/ibm-takes-the-power-of-the-mainframe-to-the-cloud.html

https://www.ibm.com/blogs/cloud-computing/2016/04/10-steps-to-understanding-your-it-before-moving-to-cloud/

https://www.cloudave.com/1664/cloud-computing-from-ibm-technology-and-practice/
  Peter Hedges is Asia/Pacific Sales leader for Systemx/BladeTechnology from 
IBM; he presented at today's IBM Insight Forum 2009 on the emergence of new 
business models based on cloud computing. He presented a keynote first given by 
Jinzy Zhu, executive of IBM cloud labs for the AsiaPac region.


http://www.ttwin.com/blog/279-mainframes-cloud-computing-similarities-differences

I will have to find my older notes.  This is all I could find for now.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of R.S.
> Sent: Thursday, January 12, 2017 9:47 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Is it really possible to migrate Mainframe to Cloud ??
> 
> Yes, it is!
> The only trick is to put a mainframe in a cloud.
> 
> BTW: There is no cloud. It's just someone else's computer.
> 
> --
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread Mitch Mccluhan
 Absolutely!!

 

Mitch McCluhan
mitc...@aol.com

 

 

-Original Message-
From: Peter 
To: IBM-MAIN 
Sent: Thu, Jan 12, 2017 10:02 am
Subject: Is it really possible to migrate Mainframe to Cloud ??

Hi

Just found this link thought I can share

https://www.linkedin.com/pulse/yes-you-can-migrate-your-mainframe-cloud-stephen-orban


Peter

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread Itschak Mugzach
This is not a mainframe anymore. What's the difference between that and
the tens of re-hosting / re-engineering solutions already out there? On premise
or cloud, this is the same solution with a new name. Some of the solutions sound
like a mainframe, but they don't walk and talk like one.
I am not saying re-hosting is bad, but I can't see the r/evolution.

ITschak



ITschak Mugzach
Z/OS, ISV Products and Application Security & Risk Assessments Professional

2017-01-12 18:46 GMT+02:00 R.S. :

> Yes, it is!
> The only trick is to put a mainframe in a cloud.
>
> BTW: There is no cloud. It's just someone else's computer.
>
> --
> Radoslaw Skorupka
> Lodz, Poland
>
>
>
>
>
>
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread R.S.

Yes, it is!
The only trick is to put a mainframe in a cloud.

BTW: There is no cloud. It's just someone else's computer.

--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread R.S.
IMHO it is plain stupid, but "it works as documented". Period. End of 
discussion. Any documented stupidity is authorised to remain unchanged.


I wish SDB would work for DSORG=PO,RECFM=U with a value of 32760. 
All exceptions could be handled by providing an explicit BLKSIZE value.
BTW: I can't remember any case where I created a DSORG=PO,RECFM=U dataset 
and did not provide 32760.


--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Dale R. Smith
On Thu, 12 Jan 2017 09:15:37 -0600, David Magee  
wrote:

>The last time I looked into doing that we had problems because certain
>scenarios were not supported using the integrated translator during the
>compile step.  EXCI applications are what come to mind off the top of my head.
>Have all the restrictions been lifted at this point? If so, at what levels of
>COBOL and CICS T/S?

The following Technote says that you can Compile an EXCI COBOL program with the 
Integrated Translator and it should run OK.  You will get an informational 
Compile message saying it's not supported, but it should Compile and run 
correctly.

http://www-01.ibm.com/support/docview.wss?uid=swg21661391

My main beef with the Integrated Translator is that it does not support Batch 
IMS (EXEC DLI) programs the same way the separate Translator does.  The 
Integrated Translator always assumes your programs contain EXEC CICS commands 
and forces NODYNAM and RENT Compiler options on.  Normally our Batch programs 
are Compiled with DYNAM and NORENT.  It's possible that despite the initial 
overhead, there may be some benefit to using RENT for Batch COBOL programs on 
modern hardware.  I have not tried to measure the difference yet.  So we could 
probably live with RENT.  NODYNAM would be a problem though since most of the 
programs use "Call 'program'" static calls so they would all have to be changed 
to dynamic calls to use the Integrated Translator.  I only have access to 
Enterprise COBOL V4.2, but I don't believe it has changed for V5 or V6 of 
Enterprise COBOL.
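
For anyone who has not run into the DYNAM/NODYNAM distinction Dale describes, here 
is a minimal sketch (program and data names are hypothetical, not from his shop). 
Under NODYNAM a CALL with a literal program name is resolved statically at 
link-edit time; a CALL through a data item is always dynamic, regardless of the 
option:

       WORKING-STORAGE SECTION.
       01  WS-SUBPGM            PIC X(8)   VALUE 'PAYCALC'.
       01  WS-PARM              PIC X(100).
       PROCEDURE DIVISION.
      *    Literal target: static under NODYNAM (subprogram is
      *    link-edited into the load module), dynamic under DYNAM.
           CALL 'PAYCALC' USING WS-PARM
      *    Identifier target: always a dynamic call, whichever of
      *    DYNAM/NODYNAM is in effect.
           CALL WS-SUBPGM USING WS-PARM
           GOBACK.

Converting a large body of literal calls to the identifier form is exactly the 
change that forced NODYNAM pushes onto a shop that normally compiles with DYNAM.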

Lizette, I believe the Integrated Translator, (EXEC CICS, EXEC DLI),  and the 
Integrated PreCompiler (EXEC SQL) were added in Enterprise COBOL V3.1 and 
enhanced with several releases after that.

-- 
Dale R. Smith 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: IBMLink and ShopZ

2017-01-12 Thread Leonardo DePaivaFerreiraVaz
I had to clear my cookies for ibm.com yesterday; maybe that can help. You can 
test by starting a private browser session; if that works, clear your cookies.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Thomas Dunlap
Sent: Thursday, January 12, 2017 10:50 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: IBMLink and ShopZ

Having trouble trying to log on to both IBMLink and ShopZ.  When I enter user 
ID and password then hit "Signin" it just sits there, no movement at all.


--
Regards,
Thomas Dunlap     Chief Technology Officer     t...@themisinc.com
Themis, Inc.      http://www.themisinc.com      908 400-6485

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread Allan Staller
Without reading the article, the mainframe is an ideal cloud platform.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Peter
Sent: Thursday, January 12, 2017 10:02 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Is it really possible to migrate Mainframe to Cloud ??

Hi

Just found this link thought I can share

https://www.linkedin.com/pulse/yes-you-can-migrate-your-mainframe-cloud-stephen-orban


Peter

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Is it really possible to migrate Mainframe to Cloud ??

2017-01-12 Thread Peter
Hi

Just found this link thought I can share

https://www.linkedin.com/pulse/yes-you-can-migrate-your-mainframe-cloud-stephen-orban


Peter

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


IBMLink and ShopZ

2017-01-12 Thread Thomas Dunlap
Having trouble trying to log on to both IBMLink and ShopZ.  When I 
enter user ID and password then hit "Signin" it just sits there, no 
movement at all.



--
Regards,
Thomas Dunlap     Chief Technology Officer     t...@themisinc.com
Themis, Inc.      http://www.themisinc.com      908 400-6485

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread David Magee
The last time I looked into doing that we had problems because certain 
scenarios were not supported using the integrated translator during the compile 
step.  EXCI applications are what come to mind off the top of my head. Have all 
the restrictions been lifted at this point? If so, at what levels of COBOL and 
CICS T/S?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SDB and Program Object Library

2017-01-12 Thread Allan Staller
Bad ACS routines?


In order to copy a load module library to a program object library I used ISPF 
Data Set Utility to allocate the new library with DSNTYPE=LIBRARY.  Believing 
that "SDB knows best," I left BLKSIZE blank.  ISPF warned me.  I told it, 
"Trust me."  I went to Move/Copy Utility and tried the copy.  It failed: can't 
deal with BLKSIZE=0.
Curious, I deleted and re-created with BLKSIZE=1(!?) and retried the copy.  
Succeeded, and as a side effect set BLKSIZE to 4096.

Is there any rationale to this logic, that BLKSIZE=0, which is supposed to 
defer the choice to SDB, fails, while the absurd BLKSIZE=1 succeeds and SDB 
operates?







--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM: Why MAXLRECL?

2017-01-12 Thread Allan Staller
In IBM parlance this is a "reserved" field.
Define the LRECL to the max needed.
Double that.
Add 10%.

This is the LRECL you define.
Go back to the original LRECL and define what you need.
Define the remainder as "reserved for future use".

In that way, you only have to change the record definitions, not the file.

HTH,
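
As a hypothetical worked example of that arithmetic (field names and sizes made 
up): if the record currently needs 500 bytes, doubling gives 1000 and adding 10% 
gives an LRECL of 1100, with only the unused tail mapped as filler:

       01  LOG-RECORD.
      *    Fields actually in use today: 500 bytes
           05  LOG-TIMESTAMP    PIC X(26).
           05  LOG-USERID       PIC X(8).
           05  LOG-MSG-TEXT     PIC X(466).
      *    Reserved for future use: 600 bytes (record/LRECL = 1100)
           05  FILLER           PIC X(600).

Growing the record later then means carving new fields out of the FILLER; the 
length defined to the file never has to change.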


Since the logging is only for troubleshooting purposes I decided to simply live 
with truncation of the end of the messages.  But had the MAXLRECL for this file 
already been 32761, or some similar fairly large number I simply would have 
changed the program that logs the messages to not truncate at 500 bytes.

My point is, it seems to me that MAXLRECL is a fairly "artificial limit", 
unless there is actually a good reason that you'd want a MAXLRECL that is at 
least close to the, well, maximum record length.







--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Ralph Robison
Hi Lizette, 

Not sure if you are looking for a calendar 'when' or release level.  The linked 
article shows it back to CICS TS 2.2.  The Announcement Letter shows that 2.2's 
availability date was January 25, 2002.  

Ralph

enterprisesystemsmedia.com/article/cics-integrated-translator

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Klan, Rob (RET-DAY)

"The integrated translator function requires IBM COBOL for OS/390 and VM 
Version 2 Release 2, with "PTF UQ52879 (APAR PQ45462)" or Enterprise COBOL for 
z/OS and OS/390 Version 3. Note, however, that the COBOL3 translator option 
must be active".

...what fun that was changing all the compile procedures.
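
For readers who have not made the switch, a minimal sketch of what a one-step 
source member can look like once the integrated CICS translator and the DB2 
coprocessor are used (the program name, option strings and the trivial statements 
below are illustrative only, not a recommended setup):

       PROCESS CICS('COBOL3') SQL NODYNAM RENT
       IDENTIFICATION DIVISION.
       PROGRAM-ID.  ORDRINQ.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-ONE               PIC S9(9) COMP.
       PROCEDURE DIVISION.
      *    The compiler itself handles both EXEC SQL (DB2 coprocessor)
      *    and EXEC CICS (integrated translator), so the compile
      *    procedure has no separate translate/precompile steps.
           EXEC SQL
               SELECT 1 INTO :WS-ONE FROM SYSIBM.SYSDUMMY1
           END-EXEC
           EXEC CICS RETURN END-EXEC.

The JCL change is then mostly a matter of dropping the translator/precompiler 
steps and supplying the right compiler options and DD statements.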


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Lizette Koehler
Sent: Thursday, January 12, 2017 7:10 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Cobol and PreCompile for CICS and DB2

Yes, exactly.

I remember in the past you had two steps for compile for DB2 or CICS Cobol 
programs.  One to translate the SQL or CICS EXEC statements to COBOL and then 
the Compile step itself.

I was wondering when that was changed so the Cobol Compiler could do it all in 
one step without the pre-compiler step.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
> On Behalf Of Bill Woodger
> Sent: Thursday, January 12, 2017 12:40 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Cobol and PreCompile for CICS and DB2
> 
> I'm not quite sure what you are asking. Do you mean, which release of 
> COBOL first included the integrated CICS translator within the 
> compiler itself? Or are you asking something else?
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread Itschak Mugzach
Radoslaw,

I tried mostly FTP as this is the recommended and explained way. My idea was to 
install it in zPDT basic mode and then to start it under z/VM, which is up and 
running. The problem is that the installer does not find the materials.

ITschak


Sent from my iPad

On Jan 12, 2017, at 14:47, R.S. wrote:

> W dniu 2017-01-12 o 10:17, Itschak Mugzach pisze:
>> I am trying to install Suse linux 12 SP2 on a zPDT native.I using
>> instruction from a redbook titled The Virtualization Cookbook for SLES
>> 11 which has instructions for SLES 12 as well.
>> 
>> The problem I face is a message ** No repository found. I tried the FTP
>> installation and later Hard Drrive method as well. None works for me.
>> 
>> There is no installation log, so I can't figure is a real FTP is needed
>> (VSFTP is running on the base Linux).
>> 
>> Any idea why the installer doesn't find the materials in base directory?
> 
> zPDT is an emulator, but I'm not sure about HMC/SE facilities emulation. In 
> order to install Linux "native" (in LPAR) you have to use HMC/SE facilities - 
> "Load from..." task and DVD or ftp.
> 
> -- 
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> 
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread Joseph Reichman
Call IBM at 1-800-426-7378 

Or post your problem to z1...@yahoogroups.com 

Joe Reichman
8045 Newell St Apt 403
Silver Spring MD 20910
Home (240) 863-3965
Cell (917) 748 -9693

> On Jan 12, 2017, at 7:47 AM, R.S.  wrote:
> 
> W dniu 2017-01-12 o 10:17, Itschak Mugzach pisze:
>> I am trying to install Suse linux 12 SP2 on a zPDT native.I using
>> instruction from a redbook titled The Virtualization Cookbook for SLES
>> 11 which has instructions for SLES 12 as well.
>> 
>> The problem I face is a message ** No repository found. I tried the FTP
>> installation and later Hard Drrive method as well. None works for me.
>> 
>> There is no installation log, so I can't figure is a real FTP is needed
>> (VSFTP is running on the base Linux).
>> 
>> Any idea why the installer doesn't find the materials in base directory?
> 
> zPDT is an emulator, but I'm not sure about HMC/SE facilities emulation. In 
> order to install Linux "native" (in LPAR) you have to use HMC/SE facilities - 
> "Load from..." task and DVD or ftp.
> 
> -- 
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> 
> 
> 
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread R.S.

W dniu 2017-01-12 o 10:17, Itschak Mugzach pisze:

I am trying to install SUSE Linux 12 SP2 on a zPDT natively. I am using
instructions from a redbook titled The Virtualization Cookbook for SLES
11, which has instructions for SLES 12 as well.

The problem I face is a message ** No repository found. I tried the FTP
installation and later the Hard Drive method as well. Neither works for me.

There is no installation log, so I can't figure out if a real FTP is needed
(VSFTP is running on the base Linux).

Any idea why the installer doesn't find the materials in the base directory?


zPDT is an emulator, but I'm not sure about HMC/SE facilities emulation. 
In order to install Linux "native" (in LPAR) you have to use HMC/SE 
facilities - "Load from..." task and DVD or ftp.


--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: VSAM: Why MAXLRECL?

2017-01-12 Thread R.S.

W dniu 2017-01-11 o 22:30, Frank Swarbrick pisze:

Is there a downside to always defining VSAM files with a MAXLRECL of 32761, 
which seems to be the largest value for this parm for an UNSPANNED dataset?
Besides CISZ and other performance issues there is another big issue: 
the application.
Let's say your application ABC is designed to process records up to 3000 bytes. 
Obviously you can use a longer MAXLRECL (the same goes for PS VB), but imagine 
someone inserted a longer record. How? Using another application or utility.
What will application ABC do? Probably just fail. When? When it gets the 
longer record. It can be during (the end of) a batch run, or anytime - it depends.


So, treat MAXLRECL as a protection.
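
As a sketch of that application-side view (file name, field names and the 
3000-byte ceiling are hypothetical, just continuing the ABC example above): the 
COBOL file definition in ABC can declare the same maximum the cluster was defined 
with, so the program's record layout and the VSAM MAXLRECL state the same design 
limit:

       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT ABC-FILE ASSIGN TO ABCVSAM
               ORGANIZATION IS INDEXED
               ACCESS MODE IS SEQUENTIAL
               RECORD KEY IS ABC-KEY
               FILE STATUS IS WS-ABC-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  ABC-FILE
      *    Same ceiling as the cluster's MAXLRECL (3000 in this example)
           RECORD IS VARYING IN SIZE FROM 22 TO 3000 CHARACTERS
           DEPENDING ON WS-REC-LEN.
       01  ABC-RECORD.
           05  ABC-KEY          PIC X(22).
           05  ABC-DATA         PIC X(2978).
       WORKING-STORAGE SECTION.
       01  WS-REC-LEN           PIC 9(4) COMP.
       01  WS-ABC-STATUS        PIC XX.

If only the cluster's MAXLRECL is ever raised, a record longer than 3000 bytes 
still has nowhere to land in ABC, which is the point: keep the declared maximum 
close to what the application was actually designed to handle.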


My €0.02

--
Radoslaw Skorupka
Lodz, Poland








--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cobol and PreCompile for CICS and DB2

2017-01-12 Thread Lizette Koehler
Yes, exactly.

I remember in the past you had two steps for compile for DB2 or CICS Cobol 
programs.  One to translate the SQL or CICS EXEC statements to COBOL and then 
the Compile step itself.

I was wondering when that was changed so the Cobol Compiler could do it all in 
one step without the pre-compiler step.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Bill Woodger
> Sent: Thursday, January 12, 2017 12:40 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Cobol and PreCompile for CICS and DB2
> 
> I'm not quite sure what you are asking. Do you mean, which release of COBOL
> first included the integrated CICS translator within the compiler itself? Or
> are you asking something else?
> 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: any user of client web enablement toolkit here?

2017-01-12 Thread David Crayford

On 12/01/2017 7:31 PM, Itschak Mugzach wrote:

I wrote an HTTP protocol driver in REXX and it works fine. I also wrote an
assembler program that makes use of the toolkit. When I try it against a Windows
server, it works fine. When I try it to POST to Linux (both Apache), it
requires LE and returns a SIGPIPE. Since then, it is impossible to access
the server from the mainframe (but it works from a browser).  Any idea why?



You may have a bug in your code. SIGPIPE happens when you try to send on 
a closed socket. Check that you have not closed the socket yourself and that you 
are not sending an invalid payload that makes the server close the socket 
without sending a response. Have you turned on logging on your Linux Apache 
servers?

If not, do so and take a look at the logs.
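
As a cross-check, assuming you can reach the same Apache URL from a workstation 
JVM, a bare-bones HttpURLConnection POST along the lines of this sketch (the URL 
and JSON body below are placeholders, not your endpoint) will show whether the 
server accepts the payload. Note that when the peer closes the connection early, 
Java surfaces it as an IOException such as "Broken pipe" rather than a SIGPIPE:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder URL and body - substitute the real Apache endpoint and payload
        byte[] body = "{\"ping\":true}".getBytes(StandardCharsets.UTF_8);
        URL url = new URL("http://linux-apache.example.com/endpoint");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setFixedLengthStreamingMode(body.length); // sends a correct Content-Length

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // Read the status and drain the response before disconnecting.
        // If the server closes the connection early, this is where the
        // IOException ("Broken pipe" / "Connection reset") shows up.
        int rc = conn.getResponseCode();
        System.out.println("HTTP " + rc);
        InputStream in = (rc >= 400) ? conn.getErrorStream() : conn.getInputStream();
        if (in != null) {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain */ }
            in.close();
        }
        conn.disconnect();
    }
}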






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread Itschak Mugzach
Thanks. This may take some time, as I need moderator approval. Meanwhile,
I tried to connect to the FTP server with the same info and it worked. The
problem is only with the installer. I am sure this is a user error, but I
can't find the reason because there is no log created for the installer.

ITschak


ITschak Mugzach
Z/OS, ISV Products and Application Security & Risk Assessments Professional

On Thu, Jan 12, 2017 at 1:10 PM, Joseph Reichman 
wrote:

> There is a z1090 yahoo usergroup
>
> You can also open up a PMR
>
> Joe Reichman
> 8045 Newell St Apt 403
> Silver Spring MD 20910
> Home (240) 863-3965
> Cell (917) 748 -9693
>
> > On Jan 12, 2017, at 4:17 AM, Itschak Mugzach  wrote:
> >
> > I am trying to install SUSE Linux 12 SP2 on a native zPDT. I am using
> > instructions from a Redbook titled The Virtualization Cookbook for SLES
> > 11, which has instructions for SLES 12 as well.
> >
> > The problem I face is a message ** No repository found. I tried the FTP
> > installation and later the Hard Drive method as well. Neither works for me.
> >
> > There is no installation log, so I can't figure out whether a real FTP
> > server is needed (VSFTP is running on the base Linux).
> >
> > Any idea why the installer doesn't find the materials in the base directory?
> >
> > ITschak
> >
> > --
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
>

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


any user of client web enablement toolkit here?

2017-01-12 Thread Itschak Mugzach
I wrote an HTTP protocol driver in REXX and it works fine. I also wrote an
assembler program that makes use of the toolkit. When I try it against a Windows
server, it works fine. When I try it to POST to Linux (both Apache), it
requires LE and returns a SIGPIPE. Since then, it is impossible to access
the server from the mainframe (but it works from a browser). Any idea why?




-- 

| Itschak Mugzach | Director | SecuriTeam Software |

| Email: i_mugz...@securiteam.co.il | Mob: +972 522 986404 |
| Skype: ItschakMugzach | Web: www.Securiteam.co.il |

| IronSphere Platform | An IT GRC for Legacy systems | Automated
Security Readiness Reviews (SRR) |

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Problems installing Suse Linux under zpdt.

2017-01-12 Thread Joseph Reichman
There is a z1090 yahoo usergroup 

You can also open up a PMR 

Joe Reichman
8045 Newell St Apt 403
Silver Spring MD 20910
Home (240) 863-3965
Cell (917) 748 -9693

> On Jan 12, 2017, at 4:17 AM, Itschak Mugzach  wrote:
> 
> I am trying to install SUSE Linux 12 SP2 on a native zPDT. I am using
> instructions from a Redbook titled The Virtualization Cookbook for SLES
> 11, which has instructions for SLES 12 as well.
> 
> The problem I face is a message ** No repository found. I tried the FTP
> installation and later the Hard Drive method as well. Neither works for me.
> 
> There is no installation log, so I can't figure out whether a real FTP
> server is needed (VSFTP is running on the base Linux).
> 
> Any idea why the installer doesn't find the materials in the base directory?
> 
> ITschak
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Problems installing Suse Linux under zpdt.

2017-01-12 Thread Itschak Mugzach
I am trying to install SUSE Linux 12 SP2 on a native zPDT. I am using
instructions from a Redbook titled The Virtualization Cookbook for SLES
11, which has instructions for SLES 12 as well.

The problem I face is a message ** No repository found. I tried the FTP
installation and later the Hard Drive method as well. Neither works for me.

There is no installation log, so I can't figure out whether a real FTP
server is needed (VSFTP is running on the base Linux).

Any idea why the installer doesn't find the materials in the base directory?

ITschak

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: AW: What is subsystem function code No. 39 and what does it do?

2017-01-12 Thread Giliad Wilf
On Wed, 11 Jan 2017 11:05:35 +0100, Roland Schiradin  
wrote:

>SYS1.MODGEN(IEFSSAG) is the related macro for this function code. HTH 
>

Thank you. 
This macro only describes the layout of an extension to the SSOB for the 
requested service.
Yet, the "Using the Subsystem Interface" manual neither lists this function 
among the services you can request, nor does it list it among the services you 
can provide when writing your own subsystem.

Is there any discussion of function code No. 39 anywhere?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Who's using Java 8

2017-01-12 Thread David Crayford

On 12/01/2017 3:17 PM, Timothy Sipples wrote:

Have you looked at Liberty Profile yet?


Yes. It's attractive but there are licensing issues.


Your customers with CICS TS 5+ for
z/OS or WAS 8.5+ for z/OS licensing already have Liberty Profile licensing,
and some of them probably already use Liberty. If you think one or more of
your target customers won't have any of those products then just ask IBM
about OEM licensing if you haven't already. (It can't hurt to ask, at
least.) There's also a blanket Liberty runtime license (<=2GB heap) that
might be appropriate to your needs, but (again) ask IBM.


Are CICS customers allowed to run WLP outside of CICS?


Liberty Profile is obviously going to have the greatest z/OS affinities
(features, supportability, performance, security, etc.) So if it's there,
or easily obtainable, why not? I'd avoid needless complexity if you can.


Well, I would agree on supportability, because if we ship a web server 
then we have to support it. The good thing is that it's open source, so we've 
got the source code to do just that if the vendor is too slow to fix problems. 
Feature-wise, I can quite easily put together my own stack using open-source 
Java libraries. A quick squiz at the WLP lib directory shows it's pretty much 
using the same set of open-source libraries, and there's a lot of stuff in 
there I don't need that's just bloat. Our server serves a REST API and a 
single-page application, so there's no need for JSPs etc.
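
To give an idea of what I mean by rolling our own stack, here is a minimal 
sketch using embedded Jetty 9. The port, URL path and "web" resource directory 
are made up for illustration: one small REST endpoint plus the static files of 
a single-page application.

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;

public class MiniStack {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);                 // illustrative port

        ServletContextHandler ctx = new ServletContextHandler();
        ctx.setContextPath("/");

        // A tiny REST endpoint
        ctx.addServlet(new ServletHolder(new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws java.io.IOException {
                resp.setContentType("application/json");
                resp.getWriter().print("{\"status\":\"ok\"}");
            }
        }), "/api/status");

        // Static files of the single-page application (illustrative directory)
        ctx.setResourceBase("web");
        ctx.addServlet(DefaultServlet.class, "/");

        server.setHandler(ctx);
        server.start();
        server.join();
    }
}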

Security is interesting. Kirk shared Dovetailed's Tomcat SAF realm code, 
which was much appreciated. Using that as a base, it's trivial to write 
a Jetty login module that uses SAF authentication.
It shouldn't be too difficult to implement thread-level security using 
the Java SAF packages, although I'm not sure I want to do that.
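
For anyone curious, the password check itself can be a few lines, assuming 
IBM's z/OS Java SDK SAF classes (com.ibm.os390.security.PlatformUser and 
PlatformReturned, where authenticate() returns null on success) behave as I 
remember. A Jetty LoginService could then delegate its credential check to 
something like this sketch:

// Sketch only: assumes IBM's z/OS SDK SAF classes are on the classpath and
// that PlatformUser.authenticate() returns null when SAF accepts the credentials.
import com.ibm.os390.security.PlatformReturned;
import com.ibm.os390.security.PlatformUser;

public final class SafAuthenticator {

    /** Returns true if RACF/SAF accepts the userid/password pair. */
    public static boolean check(String userid, String password) {
        PlatformReturned pr = PlatformUser.authenticate(userid, password);
        // A non-null PlatformReturned carries the failure details (errno, errno2, ...)
        return pr == null;
    }

    private SafAuthenticator() { }
}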




Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN